00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 3394
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3005
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.077 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.078 The recommended git tool is: git
00:00:00.078 using credential 00000000-0000-0000-0000-000000000002
00:00:00.082 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.111 Fetching changes from the remote Git repository
00:00:00.113 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.146 Using shallow fetch with depth 1
00:00:00.146 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.146 > git --version # timeout=10
00:00:00.172 > git --version # 'git version 2.39.2'
00:00:00.172 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.172 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.172 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.801 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.811 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.822 Checking out Revision 3fbc5c0ceee15b3cc82c7e28355dfd4637aa6338 (FETCH_HEAD)
00:00:04.823 > git config core.sparsecheckout # timeout=10
00:00:04.832 > git read-tree -mu HEAD # timeout=10
00:00:04.848 > git checkout -f 3fbc5c0ceee15b3cc82c7e28355dfd4637aa6338 # timeout=5
00:00:04.865 Commit message: "perf/upload_to_db: update columns after changes in get_results.sh"
00:00:04.865 > git rev-list --no-walk 3fbc5c0ceee15b3cc82c7e28355dfd4637aa6338 # timeout=10
00:00:04.973 [Pipeline] Start of Pipeline
00:00:04.984 [Pipeline] library
00:00:04.985 Loading library shm_lib@master
00:00:04.986 Library shm_lib@master is cached. Copying from home.
00:00:04.999 [Pipeline] node
00:00:05.006 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:05.007 [Pipeline] {
00:00:05.015 [Pipeline] catchError
00:00:05.017 [Pipeline] {
00:00:05.027 [Pipeline] wrap
00:00:05.034 [Pipeline] {
00:00:05.039 [Pipeline] stage
00:00:05.041 [Pipeline] { (Prologue)
00:00:05.191 [Pipeline] sh
00:00:05.469 + logger -p user.info -t JENKINS-CI
00:00:05.487 [Pipeline] echo
00:00:05.489 Node: GP11
00:00:05.497 [Pipeline] sh
00:00:05.790 [Pipeline] setCustomBuildProperty
00:00:05.798 [Pipeline] echo
00:00:05.799 Cleanup processes
00:00:05.803 [Pipeline] sh
00:00:06.079 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.079 1652898 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.089 [Pipeline] sh
00:00:06.370 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.370 ++ grep -v 'sudo pgrep'
00:00:06.370 ++ awk '{print $1}'
00:00:06.370 + sudo kill -9
00:00:06.370 + true
00:00:06.383 [Pipeline] cleanWs
00:00:06.391 [WS-CLEANUP] Deleting project workspace...
00:00:06.391 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.397 [WS-CLEANUP] done
00:00:06.401 [Pipeline] setCustomBuildProperty
00:00:06.414 [Pipeline] sh
00:00:06.694 + sudo git config --global --replace-all safe.directory '*'
00:00:06.767 [Pipeline] nodesByLabel
00:00:06.769 Could not find any nodes with 'sorcerer' label
00:00:06.774 [Pipeline] retry
00:00:06.776 [Pipeline] {
00:00:06.800 [Pipeline] checkout
00:00:06.807 The recommended git tool is: git
00:00:06.817 using credential 00000000-0000-0000-0000-000000000002
00:00:06.821 Cloning the remote Git repository
00:00:06.824 Honoring refspec on initial clone
00:00:06.828 Cloning repository https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:06.829 > git init /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp # timeout=10
00:00:06.835 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:06.835 > git --version # timeout=10
00:00:06.838 > git --version # 'git version 2.43.0'
00:00:06.838 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:06.839 Setting http proxy: proxy-dmz.intel.com:911
00:00:06.840 > git fetch --tags --force --progress -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=10
00:00:27.134 Avoid second fetch
00:00:27.148 Checking out Revision 3fbc5c0ceee15b3cc82c7e28355dfd4637aa6338 (FETCH_HEAD)
00:00:27.257 Commit message: "perf/upload_to_db: update columns after changes in get_results.sh"
00:00:27.263 [Pipeline] }
00:00:27.283 [Pipeline] // retry
00:00:27.295 [Pipeline] nodesByLabel
00:00:27.296 Could not find any nodes with 'sorcerer' label
00:00:27.302 [Pipeline] retry
00:00:27.304 [Pipeline] {
00:00:27.323 [Pipeline] checkout
00:00:27.329 The recommended git tool is: NONE
00:00:27.340 using credential 00000000-0000-0000-0000-000000000002
00:00:27.345 Cloning the remote Git repository
00:00:27.349 Honoring refspec on initial clone
00:00:27.353 Cloning repository https://review.spdk.io/gerrit/a/spdk/spdk
00:00:27.353 > git init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk # timeout=10
00:00:27.359 Using reference repository: /var/ci_repos/spdk_multi
00:00:27.359 Fetching upstream changes from https://review.spdk.io/gerrit/a/spdk/spdk
00:00:27.359 > git --version # timeout=10
00:00:27.362 > git --version # 'git version 2.43.0'
00:00:27.362 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:27.363 Setting http proxy: proxy-dmz.intel.com:911
00:00:27.363 > git fetch --tags --force --progress -- https://review.spdk.io/gerrit/a/spdk/spdk refs/heads/master +refs/heads/master:refs/remotes/origin/master # timeout=10
00:00:27.122 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:27.125 > git config --add remote.origin.fetch refs/heads/master # timeout=10
00:00:27.138 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:27.145 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:27.152 > git config core.sparsecheckout # timeout=10
00:00:27.155 > git checkout -f 3fbc5c0ceee15b3cc82c7e28355dfd4637aa6338 # timeout=10
00:01:09.958 Avoid second fetch
00:01:09.972 Checking out Revision 3f2c8979187809f9b3b0766ead4b91dc70fd73c6 (FETCH_HEAD)
00:01:10.209 Commit message: "event: switch reactors to poll mode before stopping"
00:01:10.216 First time build. Skipping changelog.
00:01:09.941 > git config remote.origin.url https://review.spdk.io/gerrit/a/spdk/spdk # timeout=10
00:01:09.945 > git config --add remote.origin.fetch refs/heads/master # timeout=10
00:01:09.948 > git config --add remote.origin.fetch +refs/heads/master:refs/remotes/origin/master # timeout=10
00:01:09.962 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:01:09.969 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:01:09.976 > git config core.sparsecheckout # timeout=10
00:01:09.979 > git checkout -f 3f2c8979187809f9b3b0766ead4b91dc70fd73c6 # timeout=10
00:01:10.213 > git rev-list --no-walk 035bc63a4689085c93519be4cd2f72387a5b8c7e # timeout=10
00:01:10.223 > git remote # timeout=10
00:01:10.226 > git submodule init # timeout=10
00:01:10.272 > git submodule sync # timeout=10
00:01:10.315 > git config --get remote.origin.url # timeout=10
00:01:10.322 > git submodule init # timeout=10
00:01:10.367 > git config -f .gitmodules --get-regexp ^submodule\.(.+)\.url # timeout=10
00:01:10.370 > git config --get submodule.dpdk.url # timeout=10
00:01:10.373 > git remote # timeout=10
00:01:10.376 > git config --get remote.origin.url # timeout=10
00:01:10.379 > git config -f .gitmodules --get submodule.dpdk.path # timeout=10
00:01:10.382 > git config --get submodule.intel-ipsec-mb.url # timeout=10
00:01:10.385 > git remote # timeout=10
00:01:10.388 > git config --get remote.origin.url # timeout=10
00:01:10.390 > git config -f .gitmodules --get submodule.intel-ipsec-mb.path # timeout=10
00:01:10.393 > git config --get submodule.isa-l.url # timeout=10
00:01:10.396 > git remote # timeout=10
00:01:10.399 > git config --get remote.origin.url # timeout=10
00:01:10.401 > git config -f .gitmodules --get submodule.isa-l.path # timeout=10
00:01:10.404 > git config --get submodule.ocf.url # timeout=10
00:01:10.406 > git remote # timeout=10
00:01:10.409 > git config --get remote.origin.url # timeout=10
00:01:10.412 > git config -f .gitmodules --get submodule.ocf.path # timeout=10
00:01:10.416 > git config --get submodule.libvfio-user.url # timeout=10
00:01:10.418 > git remote # timeout=10
00:01:10.421 > git config --get remote.origin.url # timeout=10
00:01:10.423 > git config -f .gitmodules --get submodule.libvfio-user.path # timeout=10
00:01:10.425 > git config --get submodule.xnvme.url # timeout=10
00:01:10.428 > git remote # timeout=10
00:01:10.430 > git config --get remote.origin.url # timeout=10
00:01:10.433 > git config -f .gitmodules --get submodule.xnvme.path # timeout=10
00:01:10.436 > git config --get submodule.isa-l-crypto.url # timeout=10
00:01:10.438 > git remote # timeout=10
00:01:10.441 > git config --get remote.origin.url # timeout=10
00:01:10.443 > git config -f .gitmodules --get submodule.isa-l-crypto.path # timeout=10
00:01:10.447 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:01:10.447 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:01:10.447 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:01:10.447 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:01:10.447 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:01:10.447 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:01:10.447 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:01:10.448 Setting http proxy: proxy-dmz.intel.com:911
00:01:10.448 Setting http proxy: proxy-dmz.intel.com:911
00:01:10.448 Setting http proxy: proxy-dmz.intel.com:911
00:01:10.448 Setting http proxy: proxy-dmz.intel.com:911
00:01:10.448 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi dpdk # timeout=10
00:01:10.448 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l # timeout=10
00:01:10.448 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi xnvme # timeout=10
00:01:10.448 Setting http proxy: proxy-dmz.intel.com:911
00:01:10.448 Setting http proxy: proxy-dmz.intel.com:911
00:01:10.448 Setting http proxy: proxy-dmz.intel.com:911
00:01:10.448 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi intel-ipsec-mb # timeout=10
00:01:10.448 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l-crypto # timeout=10
00:01:10.448 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi libvfio-user # timeout=10
00:01:10.448 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi ocf # timeout=10
00:01:21.805 [Pipeline] }
00:01:21.822 [Pipeline] // retry
00:01:21.829 [Pipeline] sh
00:01:22.110 + git -C spdk log --oneline -n5
00:01:22.110 3f2c8979187 event: switch reactors to poll mode before stopping
00:01:22.110 443e1ea3147 setup.sh: emit command line to /dev/kmsg on Linux
00:01:22.110 a1264177cd2 pkgdep/git: Adjust ICE driver to kernel >= 6.8.x
00:01:22.110 af95268b18e pkgdep/git: Adjust QAT driver to kernel >= 6.8.x
00:01:22.110 5e75b9137ab scripts/pkgdep: Simplify mdl installation
00:01:22.129 [Pipeline] withCredentials
00:01:22.140 > git --version # timeout=10
00:01:22.152 > git --version # 'git version 2.39.2'
00:01:22.170 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:01:22.173 [Pipeline] {
00:01:22.183 [Pipeline] retry
00:01:22.185 [Pipeline] {
00:01:22.205 [Pipeline] sh
00:01:22.492 + git ls-remote http://dpdk.org/git/dpdk main
00:01:25.043 [Pipeline] }
00:01:25.068 [Pipeline] // retry
00:01:25.075 [Pipeline] }
00:01:25.099 [Pipeline] // withCredentials
00:01:25.108 [Pipeline] nodesByLabel
00:01:25.110 Could not find any nodes with 'sorcerer' label
00:01:25.116 [Pipeline] retry
00:01:25.118 [Pipeline] {
00:01:25.141 [Pipeline] checkout
00:01:25.149 The recommended git tool is: NONE
00:01:25.161 using credential 00000000-0000-0000-0000-000000000004
00:01:25.167 Cloning the remote Git repository
00:01:25.170 Honoring refspec on initial clone
00:01:25.175 Cloning repository http://dpdk.org/git/dpdk
00:01:25.175 > git init /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk # timeout=10
00:01:25.180 Using reference repository: /var/ci_repos/dpdk.git
00:01:25.181 Fetching upstream changes from http://dpdk.org/git/dpdk
00:01:25.181 > git --version # timeout=10
00:01:25.183 > git --version # 'git version 2.43.0'
00:01:25.183 using GIT_ASKPASS to set credentials SPDKCI GITHUB TOKEN
00:01:25.184 Setting http proxy: proxy-dmz.intel.com:911
00:01:25.184 > git fetch --tags --force --progress -- http://dpdk.org/git/dpdk main # timeout=10
00:01:57.451 Avoid second fetch
00:01:57.466 Checking out Revision 7e06c0de1952d3109a5b0c4779d7e7d8059c9d78 (FETCH_HEAD)
00:01:57.437 > git config remote.origin.url http://dpdk.org/git/dpdk # timeout=10
00:01:57.442 > git config --add remote.origin.fetch main # timeout=10
00:01:57.455 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:01:57.463 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:01:57.470 > git config core.sparsecheckout # timeout=10
00:01:57.473 > git checkout -f 7e06c0de1952d3109a5b0c4779d7e7d8059c9d78 # timeout=10
00:01:58.358 Commit message: "examples: move alignment attribute on types for MSVC"
00:01:58.361 > git rev-list --no-walk 7e06c0de1952d3109a5b0c4779d7e7d8059c9d78 # timeout=10
00:01:58.394 [Pipeline] }
00:01:58.414 [Pipeline] // retry
00:01:58.422 [Pipeline] sh
00:01:58.705 + git -C dpdk log --oneline -n5
00:01:58.705 7e06c0de19 examples: move alignment attribute on types for MSVC
00:01:58.705 27595cd830 drivers: move alignment attribute on types for MSVC
00:01:58.705 0efea35a2b app: move alignment attribute on types for MSVC
00:01:58.705 e2e546ab5b version: 24.07-rc0
00:01:58.705 a9778aad62 version: 24.03.0
00:01:58.718 [Pipeline] }
00:01:58.735 [Pipeline] // stage
00:01:58.744 [Pipeline] stage
00:01:58.746 [Pipeline] { (Prepare)
00:01:58.766 [Pipeline] writeFile
00:01:58.779 [Pipeline] sh
00:01:59.058 + logger -p user.info -t JENKINS-CI
00:01:59.072 [Pipeline] sh
00:01:59.356 + logger -p user.info -t JENKINS-CI
00:01:59.370 [Pipeline] sh
00:01:59.655 + cat autorun-spdk.conf
00:01:59.656 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:59.656 SPDK_TEST_NVMF=1
00:01:59.656 SPDK_TEST_NVME_CLI=1
00:01:59.656 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:59.656 SPDK_TEST_NVMF_NICS=e810
00:01:59.656 SPDK_TEST_VFIOUSER=1
00:01:59.656 SPDK_RUN_UBSAN=1
00:01:59.656 NET_TYPE=phy
00:01:59.656 SPDK_TEST_NATIVE_DPDK=main
00:01:59.656 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:59.664 RUN_NIGHTLY=1
00:01:59.669 [Pipeline] readFile
00:01:59.695 [Pipeline] withEnv
00:01:59.697 [Pipeline] {
00:01:59.711 [Pipeline] sh
00:01:59.997 + set -ex
00:01:59.997 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:59.997 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:59.997 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:59.997 ++ SPDK_TEST_NVMF=1
00:01:59.997 ++ SPDK_TEST_NVME_CLI=1
00:01:59.997 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:59.997 ++ SPDK_TEST_NVMF_NICS=e810
00:01:59.997 ++ SPDK_TEST_VFIOUSER=1
00:01:59.997 ++ SPDK_RUN_UBSAN=1
00:01:59.997 ++ NET_TYPE=phy
00:01:59.997 ++ SPDK_TEST_NATIVE_DPDK=main
00:01:59.997 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:59.997 ++ RUN_NIGHTLY=1
00:01:59.997 + case $SPDK_TEST_NVMF_NICS in
00:01:59.997 + DRIVERS=ice
00:01:59.997 + [[ tcp == \r\d\m\a ]]
00:01:59.997 + [[ -n ice ]]
00:01:59.997 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:59.997 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:59.997 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:59.997 rmmod: ERROR: Module irdma is not currently loaded
00:01:59.997 rmmod: ERROR: Module i40iw is not currently loaded
00:01:59.997 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:59.997 + true
00:01:59.997 + for D in $DRIVERS
00:01:59.997 + sudo modprobe ice
00:01:59.997 + exit 0
00:02:00.007 [Pipeline] }
00:02:00.025 [Pipeline] // withEnv
00:02:00.031 [Pipeline] }
00:02:00.047 [Pipeline] // stage
00:02:00.055 [Pipeline] catchError
00:02:00.057 [Pipeline] {
00:02:00.071 [Pipeline] timeout
00:02:00.072 Timeout set to expire in 40 min
00:02:00.073 [Pipeline] {
00:02:00.089 [Pipeline] stage
00:02:00.091 [Pipeline] { (Tests)
00:02:00.106 [Pipeline] sh
00:02:00.391 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:00.392 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:00.392 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:00.392 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:02:00.392 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:00.392 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:00.392 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:02:00.392 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:00.392 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:00.392 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:00.392 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:00.392 + source /etc/os-release
00:02:00.392 ++ NAME='Fedora Linux'
00:02:00.392 ++ VERSION='38 (Cloud Edition)'
00:02:00.392 ++ ID=fedora
00:02:00.392 ++ VERSION_ID=38
00:02:00.392 ++ VERSION_CODENAME=
00:02:00.392 ++ PLATFORM_ID=platform:f38
00:02:00.392 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:02:00.392 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:00.392 ++ LOGO=fedora-logo-icon
00:02:00.392 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:02:00.392 ++ HOME_URL=https://fedoraproject.org/
00:02:00.392 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:02:00.392 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:00.392 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:00.392 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:00.392 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:02:00.392 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:00.392 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:02:00.392 ++ SUPPORT_END=2024-05-14
00:02:00.392 ++ VARIANT='Cloud Edition'
00:02:00.392 ++ VARIANT_ID=cloud
00:02:00.392 + uname -a
00:02:00.392 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:02:00.392 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:02:01.770 Hugepages
00:02:01.770 node hugesize free / total
00:02:01.770 node0 1048576kB 0 / 0
00:02:01.770 node0 2048kB 0 / 0
00:02:01.770 node1 1048576kB 0 / 0
00:02:01.770 node1 2048kB 0 / 0
00:02:01.770
00:02:01.770 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:01.770 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:02:01.770 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:02:01.770 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:02:01.770 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:02:01.770 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:02:01.770 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:02:01.770 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:02:01.770 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:02:01.770 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:02:01.770 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:02:01.770 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:02:01.770 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:02:01.770 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:02:01.771 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:02:01.771 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:02:01.771 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:02:01.771 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:02:01.771 + rm -f /tmp/spdk-ld-path
00:02:01.771 + source autorun-spdk.conf
00:02:01.771 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:01.771 ++ SPDK_TEST_NVMF=1
00:02:01.771 ++ SPDK_TEST_NVME_CLI=1
00:02:01.771 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:01.771 ++ SPDK_TEST_NVMF_NICS=e810
00:02:01.771 ++ SPDK_TEST_VFIOUSER=1
00:02:01.771 ++ SPDK_RUN_UBSAN=1
00:02:01.771 ++ NET_TYPE=phy
00:02:01.771 ++ SPDK_TEST_NATIVE_DPDK=main
00:02:01.771 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:01.771 ++ RUN_NIGHTLY=1
00:02:01.771 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:01.771 + [[ -n '' ]]
00:02:01.771 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:01.771 + for M in /var/spdk/build-*-manifest.txt
00:02:01.771 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:01.771 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:01.771 + for M in /var/spdk/build-*-manifest.txt
00:02:01.771 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:01.771 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:01.771 ++ uname
00:02:01.771 + [[ Linux == \L\i\n\u\x ]]
00:02:01.771 + sudo dmesg -T
00:02:01.771 + sudo dmesg --clear
00:02:01.771 + dmesg_pid=1655089
00:02:01.771 + [[ Fedora Linux == FreeBSD ]]
00:02:01.771 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:01.771 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:01.771 + sudo dmesg -Tw
00:02:01.771 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:01.771 + [[ -x /usr/src/fio-static/fio ]]
00:02:01.771 + export FIO_BIN=/usr/src/fio-static/fio
00:02:01.771 + FIO_BIN=/usr/src/fio-static/fio
00:02:01.771 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:01.771 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:01.771 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:01.771 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:01.771 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:01.771 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:01.771 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:01.771 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:01.771 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:01.771 Test configuration:
00:02:01.771 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:01.771 SPDK_TEST_NVMF=1
00:02:01.771 SPDK_TEST_NVME_CLI=1
00:02:01.771 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:01.771 SPDK_TEST_NVMF_NICS=e810
00:02:01.771 SPDK_TEST_VFIOUSER=1
00:02:01.771 SPDK_RUN_UBSAN=1
00:02:01.771 NET_TYPE=phy
00:02:01.771 SPDK_TEST_NATIVE_DPDK=main
00:02:01.771 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:01.771 RUN_NIGHTLY=1 04:57:38 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:02:01.771 04:57:38 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:01.771 04:57:38 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:01.771 04:57:38 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:01.771 04:57:38 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:01.771 04:57:38 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:01.771 04:57:38 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:01.771 04:57:38 -- paths/export.sh@5 -- $ export PATH
00:02:01.771 04:57:38 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:01.771 04:57:38 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:02:01.771 04:57:38 -- common/autobuild_common.sh@435 -- $ date +%s
00:02:01.771 04:57:38 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713927458.XXXXXX
00:02:01.771 04:57:38 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713927458.3s3nS5
00:02:01.771 04:57:38 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:02:01.771 04:57:38 -- common/autobuild_common.sh@441 -- $ '[' -n main ']'
00:02:01.771 04:57:38 -- common/autobuild_common.sh@442 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:01.771 04:57:38 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
00:02:01.771 04:57:38 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:02:01.771 04:57:38 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:02:01.771 04:57:38 -- common/autobuild_common.sh@451 -- $ get_config_params
00:02:01.771 04:57:38 -- common/autotest_common.sh@385 -- $ xtrace_disable
00:02:01.771 04:57:38 -- common/autotest_common.sh@10 -- $ set +x
00:02:01.771 04:57:38 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
00:02:01.771 04:57:38 -- common/autobuild_common.sh@453 -- $ start_monitor_resources
00:02:01.771 04:57:38 -- pm/common@17 -- $ local monitor
00:02:01.771 04:57:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:01.771 04:57:38 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1655125
00:02:01.771 04:57:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:01.771 04:57:38 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1655127
00:02:01.771 04:57:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:01.771 04:57:38 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1655129
00:02:01.771 04:57:38 -- pm/common@21 -- $ date +%s
00:02:01.771 04:57:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:01.771 04:57:38 -- pm/common@21 -- $ date +%s
00:02:01.771 04:57:38 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1655131
00:02:01.771 04:57:38 -- pm/common@21 -- $ date +%s
00:02:01.771 04:57:38 -- pm/common@26 -- $ sleep 1
00:02:01.771 04:57:38 -- pm/common@21 -- $ date +%s
00:02:01.771 04:57:38 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713927458
00:02:01.771 04:57:38 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713927458
00:02:01.771 04:57:38 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713927458
00:02:01.771 04:57:38 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713927458
00:02:01.771 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713927458_collect-bmc-pm.bmc.pm.log
00:02:01.771 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713927458_collect-vmstat.pm.log
00:02:01.771 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713927458_collect-cpu-temp.pm.log
00:02:01.771 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713927458_collect-cpu-load.pm.log
00:02:02.710 04:57:39 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT
00:02:02.710 04:57:39 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:02.710 04:57:39 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:02.710 04:57:39 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:02.710 04:57:39 -- spdk/autobuild.sh@16 -- $ date -u
00:02:02.710 Wed Apr 24 02:57:39 AM UTC 2024
00:02:02.710 04:57:39 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:02.710 v24.05-pre-437-g3f2c8979187
00:02:02.710 04:57:39 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:02:02.710 04:57:39 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:02.710 04:57:39 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:02.710 04:57:39 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:02:02.710 04:57:39 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:02:02.710 04:57:39 -- common/autotest_common.sh@10 -- $ set +x
00:02:02.969 ************************************
00:02:02.970 START TEST ubsan
00:02:02.970 ************************************
00:02:02.970 04:57:40 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan'
00:02:02.970 using ubsan
00:02:02.970
00:02:02.970 real 0m0.000s
00:02:02.970 user 0m0.000s
00:02:02.970 sys 0m0.000s
00:02:02.970 04:57:40 -- common/autotest_common.sh@1112 -- $ xtrace_disable
00:02:02.970 04:57:40 -- common/autotest_common.sh@10 -- $ set +x
00:02:02.970 ************************************
00:02:02.970 END TEST ubsan
00:02:02.970 ************************************
00:02:02.970 04:57:40 -- spdk/autobuild.sh@27 -- $ '[' -n main ']'
00:02:02.970 04:57:40 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:02:02.970 04:57:40 -- common/autobuild_common.sh@427 -- $ run_test build_native_dpdk _build_native_dpdk
00:02:02.970 04:57:40 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']'
00:02:02.970 04:57:40 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:02:02.970 04:57:40 -- common/autotest_common.sh@10 -- $ set +x
00:02:02.970 ************************************
00:02:02.970 START TEST build_native_dpdk
00:02:02.970 ************************************
00:02:02.970 04:57:40 -- common/autotest_common.sh@1111 -- $ _build_native_dpdk
00:02:02.970 04:57:40 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:02:02.970 04:57:40 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:02:02.970 04:57:40 -- common/autobuild_common.sh@50 -- $ local compiler_version
00:02:02.970 04:57:40 -- common/autobuild_common.sh@51 -- $ local compiler
00:02:02.970 04:57:40 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:02:02.970 04:57:40 -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:02:02.970 04:57:40 -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:02:02.970 04:57:40 -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:02:02.970 04:57:40 -- common/autobuild_common.sh@61 -- $ CC=gcc
00:02:02.970 04:57:40 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:02:02.970 04:57:40 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:02:02.970 04:57:40 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:02:02.970 04:57:40 -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:02:02.970 04:57:40 -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:02:02.970 04:57:40 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:02.970 04:57:40 -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:02.970 04:57:40 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:02:02.970 04:57:40 -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]]
00:02:02.970 04:57:40 -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:02.970 04:57:40 -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5
00:02:02.970 7e06c0de19 examples: move alignment attribute on types for MSVC
00:02:02.970 27595cd830 drivers: move alignment attribute on types for MSVC
00:02:02.970 0efea35a2b app: move alignment attribute on types for MSVC
00:02:02.970 e2e546ab5b version: 24.07-rc0
00:02:02.970 a9778aad62 version: 24.03.0
00:02:02.970 04:57:40 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:02:02.970 04:57:40 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:02:02.970 04:57:40 -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.07.0-rc0
00:02:02.970 04:57:40 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:02:02.970 04:57:40 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:02:02.970 04:57:40 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:02:02.970 04:57:40 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:02:02.970 04:57:40 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:02:02.970 04:57:40 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:02:02.970 04:57:40 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
00:02:02.970 04:57:40 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
00:02:02.970 04:57:40 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:02:02.970 04:57:40 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:02:02.970 04:57:40 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
00:02:02.970 04:57:40 -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:02:02.970 04:57:40 -- common/autobuild_common.sh@168 -- $ uname -s
00:02:02.970 04:57:40 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
00:02:02.970 04:57:40 -- common/autobuild_common.sh@169 -- $ lt 24.07.0-rc0 21.11.0
00:02:02.970 04:57:40 -- scripts/common.sh@370 -- $ cmp_versions 24.07.0-rc0 '<' 21.11.0
00:02:02.970 04:57:40 -- scripts/common.sh@330 -- $ local ver1 ver1_l
00:02:02.970 04:57:40 -- scripts/common.sh@331 -- $ local ver2 ver2_l
00:02:02.970 04:57:40 -- scripts/common.sh@333 -- $ IFS=.-:
00:02:02.970 04:57:40 -- scripts/common.sh@333 -- $ read -ra ver1
00:02:02.970 04:57:40 -- scripts/common.sh@334 -- $ IFS=.-:
00:02:02.970 04:57:40 -- scripts/common.sh@334 -- $ read -ra ver2
00:02:02.970 04:57:40 -- scripts/common.sh@335 -- $ local 'op=<'
00:02:02.970 04:57:40 -- scripts/common.sh@337 -- $ ver1_l=4
00:02:02.970 04:57:40 -- scripts/common.sh@338 -- $ ver2_l=3
00:02:02.970 04:57:40 -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v
00:02:02.970 04:57:40 -- scripts/common.sh@341 -- $ case "$op" in
00:02:02.970 04:57:40 -- scripts/common.sh@342 -- $ : 1
00:02:02.970 04:57:40 -- scripts/common.sh@361 -- $ (( v = 0 ))
00:02:02.970 04:57:40 -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:02.970 04:57:40 -- scripts/common.sh@362 -- $ decimal 24
00:02:02.970 04:57:40 -- scripts/common.sh@350 -- $ local d=24
00:02:02.970 04:57:40 -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:02:02.970 04:57:40 -- scripts/common.sh@352 -- $ echo 24
00:02:02.970 04:57:40 -- scripts/common.sh@362 -- $ ver1[v]=24
00:02:02.970 04:57:40 -- scripts/common.sh@363 -- $ decimal 21
00:02:02.970 04:57:40 -- scripts/common.sh@350 -- $ local d=21
00:02:02.970 04:57:40 -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:02:02.970 04:57:40 -- scripts/common.sh@352 -- $ echo 21
00:02:02.970 04:57:40 -- scripts/common.sh@363 -- $ ver2[v]=21
00:02:02.970 04:57:40 -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] ))
00:02:02.970 04:57:40 -- scripts/common.sh@364 -- $ return 1
00:02:02.970 04:57:40 -- common/autobuild_common.sh@173 -- $ patch -p1
00:02:02.970 patching file config/rte_config.h
00:02:02.970 Hunk #1 succeeded at 70 (offset 11 lines).
00:02:02.970 04:57:40 -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false
00:02:02.970 04:57:40 -- common/autobuild_common.sh@178 -- $ uname -s
00:02:02.970 04:57:40 -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']'
00:02:02.970 04:57:40 -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base
00:02:02.970 04:57:40 -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:02:07.183 The Meson build system
00:02:07.183 Version: 1.3.1
00:02:07.183 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:02:07.183 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp
00:02:07.183 Build type: native build
00:02:07.183 Program cat found: YES (/usr/bin/cat)
00:02:07.183 Project name: DPDK
00:02:07.183 Project version: 24.07.0-rc0
00:02:07.183 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:02:07.183 C linker for the host machine: gcc ld.bfd 2.39-16
00:02:07.183 Host machine cpu family: x86_64
00:02:07.183 Host machine cpu: x86_64
00:02:07.183 Message: ## Building in Developer Mode ##
00:02:07.183 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:07.183 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh)
00:02:07.183 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh)
00:02:07.183 Program python3 found: YES (/usr/bin/python3)
00:02:07.183 Program cat found: YES (/usr/bin/cat)
00:02:07.183 config/meson.build:120: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:02:07.183 Compiler for C supports arguments -march=native: YES
00:02:07.183 Checking for size of "void *" : 8
00:02:07.183 Checking for size of "void *" : 8 (cached)
00:02:07.183 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:02:07.183 Library m found: YES
00:02:07.183 Library numa found: YES
00:02:07.183 Has header "numaif.h" : YES
00:02:07.183 Library fdt found: NO
00:02:07.183 Library execinfo found: NO
00:02:07.183 Has header "execinfo.h" : YES
00:02:07.183 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:02:07.183 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:07.183 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:07.183 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:07.184 Run-time dependency openssl found: YES 3.0.9
00:02:07.184 Run-time dependency libpcap found: YES 1.10.4
00:02:07.184 Has header "pcap.h" with dependency libpcap: YES
00:02:07.184 Compiler for C supports arguments -Wcast-qual: YES
00:02:07.184 Compiler for C supports arguments -Wdeprecated: YES
00:02:07.184 Compiler for C supports arguments -Wformat: YES
00:02:07.184 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:07.184 Compiler for C supports arguments -Wformat-security: NO
00:02:07.184 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:07.184 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:07.184 Compiler for C supports arguments -Wnested-externs: YES
00:02:07.184 Compiler for C supports arguments -Wold-style-definition: YES
00:02:07.184 Compiler for C supports arguments -Wpointer-arith: YES
00:02:07.184 Compiler for C supports arguments -Wsign-compare: YES
00:02:07.184 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:07.184 Compiler for C supports arguments -Wundef: YES
00:02:07.184 Compiler for C supports arguments -Wwrite-strings: YES
00:02:07.184 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:07.184 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:07.184 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:07.184 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:07.184 Program objdump found: YES (/usr/bin/objdump)
00:02:07.184 Compiler for C supports arguments -mavx512f: YES
00:02:07.184 Checking if "AVX512 checking" compiles: YES
00:02:07.184 Fetching value of define "__SSE4_2__" : 1
00:02:07.184 Fetching value of define "__AES__" : 1
00:02:07.184 Fetching value of define "__AVX__" : 1
00:02:07.184 Fetching value of define "__AVX2__" : (undefined)
00:02:07.184 Fetching value of define "__AVX512BW__" : (undefined)
00:02:07.184 Fetching value of define "__AVX512CD__" : (undefined)
00:02:07.184 Fetching value of define "__AVX512DQ__" : (undefined)
00:02:07.184 Fetching value of define "__AVX512F__" : (undefined)
00:02:07.184 Fetching value of define "__AVX512VL__" : (undefined)
00:02:07.184 Fetching value of define "__PCLMUL__" : 1
00:02:07.184 Fetching value of define "__RDRND__" : 1
00:02:07.184 Fetching value of define "__RDSEED__" : (undefined)
00:02:07.184 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:07.184 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:07.184 Message: lib/log: Defining dependency "log"
00:02:07.184 Message: lib/kvargs: Defining dependency "kvargs"
00:02:07.184 Message: lib/argparse: Defining dependency "argparse"
00:02:07.184 Message: lib/telemetry: Defining dependency "telemetry"
00:02:07.184 Checking for function "getentropy" : NO
00:02:07.184 Message: lib/eal: Defining dependency "eal"
00:02:07.184 Message: lib/ring: Defining dependency "ring"
00:02:07.184 Message: lib/rcu: Defining dependency "rcu"
00:02:07.184 Message: lib/mempool: Defining dependency "mempool"
00:02:07.184 Message: lib/mbuf: Defining dependency "mbuf"
00:02:07.184 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:07.184 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:02:07.184 Compiler for C supports arguments -mpclmul: YES
00:02:07.184 Compiler for C supports arguments -maes: YES
00:02:07.184 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:07.184 Compiler for C supports arguments -mavx512bw: YES
00:02:07.184 Compiler for C supports arguments -mavx512dq: YES
00:02:07.184 Compiler for C supports arguments -mavx512vl: YES
00:02:07.184 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:07.184 Compiler for C supports arguments -mavx2: YES
00:02:07.184 Compiler for C supports arguments -mavx: YES
00:02:07.184 Message: lib/net: Defining dependency "net"
00:02:07.184 Message: lib/meter: Defining dependency "meter"
00:02:07.184 Message: lib/ethdev: Defining dependency "ethdev"
00:02:07.184 Message: lib/pci: Defining dependency "pci"
00:02:07.184 Message: lib/cmdline: Defining dependency "cmdline"
00:02:07.184 Message: lib/metrics: Defining dependency "metrics"
00:02:07.184 Message: lib/hash: Defining dependency "hash"
00:02:07.184 Message: lib/timer: Defining dependency "timer"
00:02:07.184 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:02:07.184 Fetching value of define "__AVX512VL__" : (undefined) (cached)
00:02:07.184 Fetching value of define "__AVX512CD__" : (undefined) (cached)
00:02:07.184 Fetching value of define "__AVX512BW__" : (undefined) (cached)
00:02:07.184 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES
00:02:07.184 Message: lib/acl: Defining dependency "acl"
00:02:07.184 Message: lib/bbdev: Defining dependency "bbdev"
00:02:07.184 Message: lib/bitratestats: Defining dependency "bitratestats"
00:02:07.184 Run-time dependency libelf found: YES 0.190
00:02:07.184 Message: lib/bpf: Defining dependency "bpf"
00:02:07.184 Message: lib/cfgfile: Defining dependency "cfgfile"
00:02:07.184 Message: lib/compressdev: Defining dependency "compressdev"
00:02:07.184 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:07.184 Message: lib/distributor: Defining dependency "distributor"
00:02:07.184 Message: lib/dmadev: Defining dependency "dmadev"
00:02:07.184 Message: lib/efd: Defining dependency "efd"
00:02:07.184 Message: lib/eventdev: Defining dependency "eventdev"
00:02:07.184 Message: lib/dispatcher: Defining dependency "dispatcher"
00:02:07.184 Message: lib/gpudev: Defining dependency "gpudev"
00:02:07.184 Message: lib/gro: Defining dependency "gro"
00:02:07.184 Message: lib/gso: Defining dependency "gso"
00:02:07.184 Message: lib/ip_frag: Defining dependency "ip_frag"
00:02:07.184 Message: lib/jobstats: Defining dependency "jobstats"
00:02:07.184 Message: lib/latencystats: Defining dependency "latencystats"
00:02:07.184 Message: lib/lpm: Defining dependency "lpm"
00:02:07.184 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:02:07.184 Fetching value of define "__AVX512DQ__" : (undefined) (cached)
00:02:07.184 Fetching value of define "__AVX512IFMA__" : (undefined)
00:02:07.184 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES
00:02:07.184 Message: lib/member: Defining dependency "member"
00:02:07.184 Message: lib/pcapng: Defining dependency "pcapng"
00:02:07.184 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:07.184 Message: lib/power: Defining dependency "power"
00:02:07.184 Message: lib/rawdev: Defining dependency "rawdev"
00:02:07.184 Message: lib/regexdev: Defining dependency "regexdev"
00:02:07.184 Message: lib/mldev: Defining dependency "mldev"
00:02:07.184 Message: lib/rib: Defining dependency "rib"
00:02:07.184 Message: lib/reorder: Defining dependency "reorder"
00:02:07.184 Message: lib/sched: Defining dependency "sched"
00:02:07.184 Message: lib/security: Defining dependency "security"
00:02:07.184 Message: lib/stack: Defining dependency "stack"
00:02:07.184 Has header "linux/userfaultfd.h" : YES
00:02:07.184 Has header "linux/vduse.h" : YES
00:02:07.184 Message: lib/vhost: Defining dependency "vhost"
00:02:07.184 Message: lib/ipsec: Defining dependency "ipsec"
00:02:07.184 Message: lib/pdcp: Defining dependency "pdcp"
00:02:07.184 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:02:07.184 Fetching value of define "__AVX512DQ__" : (undefined) (cached)
00:02:07.184 Compiler for C supports arguments -mavx512f -mavx512dq: YES
00:02:07.184 Compiler for C supports arguments -mavx512bw: YES (cached)
00:02:07.184 Message: lib/fib: Defining dependency "fib"
00:02:07.184 Message: lib/port: Defining dependency "port"
00:02:07.184 Message: lib/pdump: Defining dependency "pdump"
00:02:07.184 Message: lib/table: Defining dependency "table"
00:02:07.184 Message: lib/pipeline: Defining dependency "pipeline"
00:02:07.184 Message: lib/graph: Defining dependency "graph"
00:02:07.184 Message: lib/node: Defining dependency "node"
00:02:07.184 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:08.575 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:08.575 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:08.575 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:08.575 Compiler for C supports arguments -Wno-sign-compare: YES
00:02:08.575 Compiler for C supports arguments -Wno-unused-value: YES
00:02:08.575 Compiler for C supports arguments -Wno-format: YES
00:02:08.575 Compiler for C supports arguments -Wno-format-security: YES
00:02:08.575 Compiler for C supports arguments -Wno-format-nonliteral: YES
00:02:08.575 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:08.575 Compiler for C supports arguments -Wno-unused-but-set-variable: YES
00:02:08.575 Compiler for C supports arguments -Wno-unused-parameter: YES
00:02:08.575 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:02:08.575 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:08.575 Compiler for C supports arguments -mavx512bw: YES (cached)
00:02:08.575 Compiler for C supports arguments -march=skylake-avx512: YES
00:02:08.575 Message: drivers/net/i40e: Defining dependency "net_i40e"
00:02:08.575 Has header "sys/epoll.h" : YES
00:02:08.575 Program doxygen found: YES (/usr/bin/doxygen)
00:02:08.575 Configuring doxy-api-html.conf using configuration
00:02:08.575 Configuring doxy-api-man.conf using configuration
00:02:08.575 Program mandb found: YES (/usr/bin/mandb)
00:02:08.575 Program sphinx-build found: NO
00:02:08.575 Configuring rte_build_config.h using configuration
00:02:08.575 Message:
00:02:08.575 =================
00:02:08.575 Applications Enabled
00:02:08.575 =================
00:02:08.575
00:02:08.575 apps:
00:02:08.575 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf,
00:02:08.575 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline,
00:02:08.575 test-pmd, test-regex, test-sad, test-security-perf,
00:02:08.575
00:02:08.575 Message:
00:02:08.575 =================
00:02:08.575 Libraries Enabled
00:02:08.575 =================
00:02:08.575
00:02:08.575 libs:
00:02:08.575 log, kvargs, argparse, telemetry, eal, ring, rcu, mempool,
00:02:08.575 mbuf, net, meter, ethdev, pci, cmdline, metrics, hash,
00:02:08.575 timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev,
00:02:08.575 distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, gso,
00:02:08.575 ip_frag, jobstats, latencystats, lpm, member, pcapng, power, rawdev,
00:02:08.575 regexdev, mldev, rib, reorder, sched, security, stack, vhost,
00:02:08.575 ipsec, pdcp, fib, port, pdump, table, pipeline, graph,
00:02:08.575 node,
00:02:08.575
00:02:08.575 Message:
00:02:08.575 ===============
00:02:08.575 Drivers Enabled
00:02:08.575 ===============
00:02:08.575
00:02:08.575 common:
00:02:08.575
00:02:08.575 bus:
00:02:08.575 pci, vdev,
00:02:08.575 mempool:
00:02:08.575 ring,
00:02:08.575 dma:
00:02:08.575
00:02:08.575 net:
00:02:08.575 i40e,
00:02:08.575 raw:
00:02:08.575
00:02:08.575 crypto:
00:02:08.575
00:02:08.575 compress:
00:02:08.575
00:02:08.575 regex:
00:02:08.575
00:02:08.575 ml:
00:02:08.575
00:02:08.575 vdpa:
00:02:08.575
00:02:08.575 event:
00:02:08.575
00:02:08.575 baseband:
00:02:08.575
00:02:08.575 gpu:
00:02:08.575
00:02:08.575
00:02:08.575 Message:
00:02:08.575 =================
00:02:08.575 Content Skipped
00:02:08.575 =================
00:02:08.575
00:02:08.575 apps:
00:02:08.575
00:02:08.575 libs:
00:02:08.575
00:02:08.575 drivers:
00:02:08.575 common/cpt: not in enabled drivers build config
00:02:08.575 common/dpaax: not in enabled drivers build config
00:02:08.575 common/iavf: not in enabled drivers build config
00:02:08.575 common/idpf: not in enabled drivers build config
00:02:08.575 common/ionic: not in enabled drivers build config
00:02:08.575 common/mvep: not in enabled drivers build config
00:02:08.575 common/octeontx: not in enabled drivers build config
00:02:08.575 bus/auxiliary: not in enabled drivers build config
00:02:08.575 bus/cdx: not in enabled drivers build config
00:02:08.575 bus/dpaa: not in enabled drivers build config
00:02:08.575 bus/fslmc: not in enabled drivers build config
00:02:08.575 bus/ifpga: not in enabled drivers build config
00:02:08.575 bus/platform: not in enabled drivers build config
00:02:08.575 bus/uacce: not in enabled drivers build config
00:02:08.575 bus/vmbus: not in enabled drivers build config
00:02:08.575 common/cnxk: not in enabled drivers build config
00:02:08.575 common/mlx5: not in enabled drivers build config
00:02:08.575 common/nfp: not in enabled drivers build config
00:02:08.575 common/nitrox: not in enabled drivers build config
00:02:08.575 common/qat: not in enabled drivers build config
00:02:08.575 common/sfc_efx: not in enabled drivers build config
00:02:08.575 mempool/bucket: not in enabled drivers build config
00:02:08.575 mempool/cnxk: not in enabled drivers build config
00:02:08.575 mempool/dpaa: not in enabled drivers build config
00:02:08.575 mempool/dpaa2: not in enabled drivers build config
00:02:08.575 mempool/octeontx: not in enabled drivers build config
00:02:08.575 mempool/stack: not in enabled drivers build config
00:02:08.575 dma/cnxk: not in enabled drivers build config
00:02:08.575 dma/dpaa: not in enabled drivers build config
00:02:08.575 dma/dpaa2: not in enabled drivers build config
00:02:08.575 dma/hisilicon: not in enabled drivers build config
00:02:08.575 dma/idxd: not in enabled drivers build config
00:02:08.575 dma/ioat: not in enabled drivers build config
00:02:08.575 dma/skeleton: not in enabled drivers build config
00:02:08.575 net/af_packet: not in enabled drivers build config
00:02:08.575 net/af_xdp: not in enabled drivers build config
00:02:08.575 net/ark: not in enabled drivers build config
00:02:08.575 net/atlantic: not in enabled drivers build config
00:02:08.575 net/avp: not in enabled drivers build config
00:02:08.575 net/axgbe: not in enabled drivers build config
00:02:08.575 net/bnx2x: not in enabled drivers build config
00:02:08.575 net/bnxt: not in enabled drivers build config
00:02:08.575 net/bonding: not in enabled drivers build config
00:02:08.575 net/cnxk: not in enabled drivers build config
00:02:08.575 net/cpfl: not in enabled drivers build config
00:02:08.575 net/cxgbe: not in enabled drivers build config
00:02:08.576 net/dpaa: not in enabled drivers build config
00:02:08.576 net/dpaa2: not in enabled drivers build config
00:02:08.576 net/e1000: not in enabled drivers build config
00:02:08.576 net/ena: not in enabled drivers build config
00:02:08.576 net/enetc: not in enabled drivers build config
00:02:08.576 net/enetfec: not in enabled drivers build config
00:02:08.576 net/enic: not in enabled drivers build config
00:02:08.576 net/failsafe: not in enabled drivers build config
00:02:08.576 net/fm10k: not in enabled drivers build config
00:02:08.576 net/gve: not in enabled drivers build config
00:02:08.576 net/hinic: not in enabled drivers build config
00:02:08.576 net/hns3: not in enabled drivers build config
00:02:08.576 net/iavf: not in enabled drivers build config
00:02:08.576 net/ice: not in enabled drivers build config
00:02:08.576 net/idpf: not in enabled drivers build config
00:02:08.576 net/igc: not in enabled drivers build config
00:02:08.576 net/ionic: not in enabled drivers build config
00:02:08.576 net/ipn3ke: not in enabled drivers build config
00:02:08.576 net/ixgbe: not in enabled drivers build config
00:02:08.576 net/mana: not in enabled drivers build config
00:02:08.576 net/memif: not in enabled drivers build config
00:02:08.576 net/mlx4: not in enabled drivers build config
00:02:08.576 net/mlx5: not in enabled drivers build config
00:02:08.576 net/mvneta: not in enabled drivers build config
00:02:08.576 net/mvpp2: not in enabled drivers build config
00:02:08.576 net/netvsc: not in enabled drivers build config
00:02:08.576 net/nfb: not in enabled drivers build config
00:02:08.576 net/nfp: not in enabled drivers build config
00:02:08.576 net/ngbe: not in enabled drivers build config
00:02:08.576 net/null: not in enabled drivers build config
00:02:08.576 net/octeontx: not in enabled drivers build config
00:02:08.576 net/octeon_ep: not in enabled drivers build config
00:02:08.576 net/pcap: not in enabled drivers build config
00:02:08.576 net/pfe: not in enabled drivers build config
00:02:08.576 net/qede: not in enabled drivers build config
00:02:08.576 net/ring: not in enabled drivers build config
00:02:08.576 net/sfc: not in enabled drivers build config
00:02:08.576 net/softnic: not in enabled drivers build config
00:02:08.576 net/tap: not in enabled drivers build config
00:02:08.576 net/thunderx: not in enabled drivers build config
00:02:08.576 net/txgbe: not in enabled drivers build config
00:02:08.576 net/vdev_netvsc: not in enabled drivers build config
00:02:08.576 net/vhost: not in enabled drivers build config
00:02:08.576 net/virtio: not in enabled drivers build config
00:02:08.576 net/vmxnet3: not in enabled drivers build config
00:02:08.576 raw/cnxk_bphy: not in enabled drivers build config
00:02:08.576 raw/cnxk_gpio: not in enabled drivers build config
00:02:08.576 raw/dpaa2_cmdif: not in enabled drivers build config
00:02:08.576 raw/ifpga: not in enabled drivers build config
00:02:08.576 raw/ntb: not in enabled drivers build config
00:02:08.576 raw/skeleton: not in enabled drivers build config
00:02:08.576 crypto/armv8: not in enabled drivers build config
00:02:08.576 crypto/bcmfs: not in enabled drivers build config
00:02:08.576 crypto/caam_jr: not in enabled drivers build config
00:02:08.576 crypto/ccp: not in enabled drivers build config
00:02:08.576 crypto/cnxk: not in enabled drivers build config
00:02:08.576 crypto/dpaa_sec: not in enabled drivers build config
00:02:08.576 crypto/dpaa2_sec: not in enabled drivers build config
00:02:08.576 crypto/ipsec_mb: not in enabled drivers build config
00:02:08.576 crypto/mlx5: not in enabled drivers build config
00:02:08.576 crypto/mvsam: not in enabled drivers build config
00:02:08.576 crypto/nitrox: not in enabled drivers build config
00:02:08.576 crypto/null: not in enabled drivers build config
00:02:08.576 crypto/octeontx: not in enabled drivers build config
00:02:08.576 crypto/openssl: not in enabled drivers build config
00:02:08.576 crypto/scheduler: not in enabled drivers build config
00:02:08.576 crypto/uadk: not in enabled drivers build config
00:02:08.576 crypto/virtio: not in enabled drivers build config
00:02:08.576 compress/isal: not in enabled drivers build config
00:02:08.576 compress/mlx5: not in enabled drivers build config
00:02:08.576 compress/nitrox: not in enabled drivers build config
00:02:08.576 compress/octeontx: not in enabled drivers build config
00:02:08.576 compress/zlib: not in enabled drivers build config
00:02:08.576 regex/mlx5: not in enabled drivers build config
00:02:08.576 regex/cn9k: not in enabled drivers build config
00:02:08.576 ml/cnxk: not in enabled drivers build config
00:02:08.576 vdpa/ifc: not in enabled drivers build config
00:02:08.576 vdpa/mlx5: not in enabled drivers build config
00:02:08.576 vdpa/nfp: not in enabled drivers build config
00:02:08.576 vdpa/sfc: not in enabled drivers build config
00:02:08.576 event/cnxk: not in enabled drivers build config
00:02:08.576 event/dlb2: not in enabled drivers build config
00:02:08.576 event/dpaa: not in enabled drivers build config
00:02:08.576 event/dpaa2: not in enabled drivers build config
00:02:08.576 event/dsw: not in enabled drivers build config
00:02:08.576 event/opdl: not in enabled drivers build config
00:02:08.576 event/skeleton: not in enabled drivers build config
00:02:08.576 event/sw: not in enabled drivers build config
00:02:08.576 event/octeontx: not in enabled drivers build config
00:02:08.576 baseband/acc: not in enabled drivers build config
00:02:08.576 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:02:08.576 baseband/fpga_lte_fec: not in enabled drivers build config
00:02:08.576 baseband/la12xx: not in enabled drivers build config
00:02:08.576 baseband/null: not in enabled drivers build config
00:02:08.576 baseband/turbo_sw: not in enabled drivers build config
00:02:08.576 gpu/cuda: not in enabled drivers build config
00:02:08.576
00:02:08.576
00:02:08.576 Build targets in project: 224
00:02:08.576
00:02:08.576 DPDK 24.07.0-rc0
00:02:08.576
00:02:08.576 User defined options
00:02:08.576 libdir : lib
00:02:08.576 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:08.576 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:02:08.576 c_link_args :
00:02:08.576 enable_docs : false
00:02:08.576 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:02:08.576 enable_kmods : false
00:02:08.576 machine : native
00:02:08.576 tests : false
00:02:08.576
00:02:08.576 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:08.576 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:02:08.576 04:57:45 -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48
00:02:08.576 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp'
00:02:08.576 [1/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:08.576 [2/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:08.576 [3/722] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:08.576 [4/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:08.576 [5/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:08.576 [6/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:08.576 [7/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:08.576 [8/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:08.576 [9/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:08.576 [10/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:08.576 [11/722] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:08.576 [12/722] Linking static target lib/librte_kvargs.a
00:02:08.835 [13/722] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:08.836 [14/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:08.836 [15/722] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:08.836 [16/722] Linking static target lib/librte_log.a
00:02:09.096 [17/722] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o
00:02:09.096 [18/722] Linking static target lib/librte_argparse.a
00:02:09.096 [19/722] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.360 [20/722] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.625 [21/722] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.625 [22/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:09.625 [23/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:09.625 [24/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:09.625 [25/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:09.625 [26/722] Linking target lib/librte_log.so.24.2
00:02:09.625 [27/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:09.625 [28/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:09.625 [29/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:09.625 [30/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:09.625 [31/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:09.625 [32/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:09.625 [33/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:09.625 [34/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:09.625 [35/722] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:09.625 [36/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:09.625 [37/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:09.625 [38/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:09.625 [39/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:09.625 [40/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:09.625 [41/722] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:09.625 [42/722] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:09.625 [43/722] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:09.885 [44/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:09.885 [45/722] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:09.885 [46/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:09.885 [47/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:09.885 [48/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:09.885 [49/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:09.885 [50/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:09.885 [51/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:09.885 [52/722] Generating symbol file lib/librte_log.so.24.2.p/librte_log.so.24.2.symbols
00:02:09.885 [53/722] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:09.885 [54/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:09.885 [55/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:09.885 [56/722] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:09.885 [57/722] Linking target lib/librte_kvargs.so.24.2
00:02:09.885 [58/722] Linking target lib/librte_argparse.so.24.2
00:02:09.885 [59/722] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:09.885 [60/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:09.885 [61/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:09.885 [62/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:10.145 [63/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:10.145 [64/722] Generating symbol file lib/librte_kvargs.so.24.2.p/librte_kvargs.so.24.2.symbols
00:02:10.145 [65/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:10.145 [66/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:10.406 [67/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:10.406 [68/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:10.406 [69/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:10.406 [70/722] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:10.406 [71/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:10.406 [72/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:10.669 [73/722] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:10.669 [74/722] Linking static target lib/librte_pci.a
00:02:10.669 [75/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:10.669 [76/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:10.669 [77/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:10.669 [78/722] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:10.669 [79/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:10.933 [80/722] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:10.933 [81/722] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:10.933 [82/722] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:10.933 [83/722] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:10.933 [84/722] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:10.933 [85/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:10.933 [86/722] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:10.933 [87/722] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:10.933 [88/722] Generating lib/pci.sym_chk with a
custom command (wrapped by meson to capture output) 00:02:10.933 [89/722] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:10.933 [90/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:10.933 [91/722] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:10.933 [92/722] Linking static target lib/librte_ring.a 00:02:10.933 [93/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:10.933 [94/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:10.933 [95/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:10.933 [96/722] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:10.933 [97/722] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:10.933 [98/722] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:10.933 [99/722] Linking static target lib/librte_meter.a 00:02:11.194 [100/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:11.194 [101/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:11.194 [102/722] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:11.194 [103/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:11.194 [104/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:11.194 [105/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:11.194 [106/722] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:11.194 [107/722] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:11.194 [108/722] Linking static target lib/librte_telemetry.a 00:02:11.194 [109/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:11.194 [110/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:11.194 [111/722] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:11.194 [112/722] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:11.455 [113/722] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:11.455 [114/722] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:11.455 [115/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:11.455 [116/722] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.455 [117/722] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.455 [118/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:11.455 [119/722] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:11.455 [120/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:11.719 [121/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:11.719 [122/722] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:11.719 [123/722] Linking static target lib/librte_net.a 00:02:11.719 [124/722] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:11.719 [125/722] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:11.719 [126/722] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:11.719 [127/722] Linking static target lib/librte_mempool.a 00:02:11.719 [128/722] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:11.982 [129/722] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.982 [130/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:11.982 [131/722] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:11.982 [132/722] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:11.982 [133/722] Linking target lib/librte_telemetry.so.24.2 00:02:11.982 [134/722] Linking static target 
lib/librte_eal.a 00:02:11.982 [135/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:11.982 [136/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:11.982 [137/722] Linking static target lib/librte_cmdline.a 00:02:11.982 [138/722] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.244 [139/722] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:12.244 [140/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:12.244 [141/722] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:12.244 [142/722] Linking static target lib/librte_cfgfile.a 00:02:12.244 [143/722] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:12.244 [144/722] Generating symbol file lib/librte_telemetry.so.24.2.p/librte_telemetry.so.24.2.symbols 00:02:12.244 [145/722] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:12.244 [146/722] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:12.244 [147/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:12.244 [148/722] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:12.244 [149/722] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:12.244 [150/722] Linking static target lib/librte_metrics.a 00:02:12.503 [151/722] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:12.503 [152/722] Linking static target lib/librte_rcu.a 00:02:12.503 [153/722] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:12.503 [154/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:12.503 [155/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:12.503 [156/722] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:12.503 [157/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:12.503 [158/722] Linking 
static target lib/librte_bitratestats.a 00:02:12.802 [159/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:12.802 [160/722] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.802 [161/722] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.802 [162/722] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:12.802 [163/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:12.802 [164/722] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:12.802 [165/722] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:12.802 [166/722] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:12.802 [167/722] Linking static target lib/librte_timer.a 00:02:13.067 [168/722] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.067 [169/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:13.067 [170/722] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.067 [171/722] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.067 [172/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:13.068 [173/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:13.332 [174/722] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:13.332 [175/722] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:13.332 [176/722] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:13.332 [177/722] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:13.332 [178/722] Linking static target lib/librte_bbdev.a 00:02:13.332 [179/722] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:13.332 [180/722] 
Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.332 [181/722] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:13.593 [182/722] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.593 [183/722] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:13.593 [184/722] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:13.593 [185/722] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:13.593 [186/722] Linking static target lib/librte_compressdev.a 00:02:13.593 [187/722] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:13.593 [188/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:13.854 [189/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:13.854 [190/722] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:14.115 [191/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:14.115 [192/722] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:14.115 [193/722] Linking static target lib/librte_distributor.a 00:02:14.115 [194/722] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:14.115 [195/722] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:14.115 [196/722] Linking static target lib/librte_dmadev.a 00:02:14.379 [197/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:14.379 [198/722] Linking static target lib/librte_bpf.a 00:02:14.379 [199/722] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:14.379 [200/722] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:14.379 [201/722] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.379 [202/722] Compiling C object 
lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:14.379 [203/722] Linking static target lib/librte_dispatcher.a 00:02:14.379 [204/722] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.379 [205/722] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:14.379 [206/722] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:14.641 [207/722] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.641 [208/722] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:14.641 [209/722] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:14.641 [210/722] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:14.641 [211/722] Linking static target lib/librte_gpudev.a 00:02:14.641 [212/722] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:14.641 [213/722] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:14.641 [214/722] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:14.641 [215/722] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:14.641 [216/722] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:14.641 [217/722] Linking static target lib/librte_gro.a 00:02:14.641 [218/722] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:14.641 [219/722] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:14.641 [220/722] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:14.641 [221/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:14.641 [222/722] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.907 [223/722] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:14.907 [224/722] Linking static target lib/librte_jobstats.a 00:02:14.907 [225/722] Generating lib/dmadev.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:14.907 [226/722] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:14.907 [227/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:15.169 [228/722] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.169 [229/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:15.169 [230/722] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.169 [231/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:15.169 [232/722] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:15.433 [233/722] Linking static target lib/librte_latencystats.a 00:02:15.433 [234/722] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.433 [235/722] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:15.433 [236/722] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:15.433 [237/722] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:15.433 [238/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:15.433 [239/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:15.433 [240/722] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:15.433 [241/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:15.433 [242/722] Linking static target lib/librte_ip_frag.a 00:02:15.698 [243/722] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:15.698 [244/722] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:15.698 [245/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:15.698 [246/722] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson 
to capture output) 00:02:15.698 [247/722] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:15.964 [248/722] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:15.964 [249/722] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:15.964 [250/722] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:15.964 [251/722] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.964 [252/722] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.964 [253/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:15.964 [254/722] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:16.224 [255/722] Linking static target lib/librte_gso.a 00:02:16.224 [256/722] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:16.224 [257/722] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:16.224 [258/722] Linking static target lib/librte_regexdev.a 00:02:16.224 [259/722] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:16.490 [260/722] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:16.490 [261/722] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:16.490 [262/722] Linking static target lib/librte_rawdev.a 00:02:16.490 [263/722] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:16.490 [264/722] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:16.490 [265/722] Linking static target lib/librte_efd.a 00:02:16.490 [266/722] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.490 [267/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:16.490 [268/722] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:16.490 [269/722] Compiling C object 
lib/librte_sched.a.p/sched_rte_red.c.o 00:02:16.754 [270/722] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:16.754 [271/722] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:16.754 [272/722] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:16.754 [273/722] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:16.754 [274/722] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:16.754 [275/722] Linking static target lib/acl/libavx2_tmp.a 00:02:16.754 [276/722] Linking static target lib/librte_pcapng.a 00:02:16.754 [277/722] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:16.754 [278/722] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:16.754 [279/722] Linking static target lib/librte_stack.a 00:02:16.754 [280/722] Linking static target lib/librte_mldev.a 00:02:16.754 [281/722] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:16.754 [282/722] Linking static target lib/librte_lpm.a 00:02:16.754 [283/722] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.018 [284/722] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:17.018 [285/722] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:17.018 [286/722] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:17.018 [287/722] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:17.018 [288/722] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.018 [289/722] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:17.018 [290/722] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:17.018 [291/722] Linking static target lib/librte_hash.a 00:02:17.018 [292/722] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.018 
[293/722] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:17.018 [294/722] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.297 [295/722] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:02:17.297 [296/722] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:17.297 [297/722] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:17.297 [298/722] Linking static target lib/acl/libavx512_tmp.a 00:02:17.297 [299/722] Linking static target lib/librte_acl.a 00:02:17.297 [300/722] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:17.297 [301/722] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:17.297 [302/722] Linking static target lib/librte_reorder.a 00:02:17.297 [303/722] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:17.297 [304/722] Linking static target lib/librte_security.a 00:02:17.297 [305/722] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:17.297 [306/722] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.569 [307/722] Linking static target lib/librte_power.a 00:02:17.569 [308/722] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.569 [309/722] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:17.569 [310/722] Linking static target lib/librte_mbuf.a 00:02:17.569 [311/722] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:17.833 [312/722] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:17.833 [313/722] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:17.833 [314/722] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:17.833 [315/722] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:17.833 [316/722] Linking static target lib/librte_rib.a 00:02:17.833 [317/722] Generating lib/acl.sym_chk with 
a custom command (wrapped by meson to capture output) 00:02:17.833 [318/722] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:17.833 [319/722] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:17.833 [320/722] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.833 [321/722] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:17.833 [322/722] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:18.097 [323/722] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.097 [324/722] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:18.097 [325/722] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:18.097 [326/722] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.369 [327/722] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:18.369 [328/722] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:18.369 [329/722] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:18.369 [330/722] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:18.369 [331/722] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:18.369 [332/722] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:18.369 [333/722] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:18.369 [334/722] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.369 [335/722] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.632 [336/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:18.632 [337/722] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.632 [338/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:18.632 [339/722] 
Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:02:18.632 [340/722] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:18.894 [341/722] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:19.153 [342/722] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:19.153 [343/722] Linking static target lib/librte_member.a 00:02:19.153 [344/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:19.153 [345/722] Linking static target lib/librte_eventdev.a 00:02:19.153 [346/722] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.153 [347/722] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:19.153 [348/722] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:19.414 [349/722] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:19.414 [350/722] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:19.414 [351/722] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:19.414 [352/722] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:19.414 [353/722] Linking static target lib/librte_cryptodev.a 00:02:19.414 [354/722] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:19.414 [355/722] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:19.414 [356/722] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:19.414 [357/722] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:19.414 [358/722] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:19.414 [359/722] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:19.414 [360/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:19.414 [361/722] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.414 [362/722] 
Linking static target lib/librte_ethdev.a 00:02:19.414 [363/722] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:19.835 [364/722] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:19.835 [365/722] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:19.835 [366/722] Linking static target lib/librte_sched.a 00:02:19.835 [367/722] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:19.835 [368/722] Linking static target lib/librte_fib.a 00:02:19.835 [369/722] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:19.835 [370/722] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:19.835 [371/722] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:19.835 [372/722] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:19.835 [373/722] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:19.835 [374/722] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:19.835 [375/722] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:20.093 [376/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:20.093 [377/722] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:20.093 [378/722] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:20.355 [379/722] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.355 [380/722] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.355 [381/722] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:20.355 [382/722] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:20.616 [383/722] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:20.616 [384/722] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:20.616 [385/722] Compiling C object 
lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:20.616 [386/722] Linking static target lib/librte_pdump.a 00:02:20.616 [387/722] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:20.616 [388/722] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:20.616 [389/722] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:20.883 [390/722] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:20.883 [391/722] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:20.883 [392/722] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:20.883 [393/722] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:20.883 [394/722] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:20.883 [395/722] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:20.883 [396/722] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:20.883 [397/722] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:20.883 [398/722] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:20.883 [399/722] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.143 [400/722] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:21.143 [401/722] Linking static target lib/librte_ipsec.a 00:02:21.143 [402/722] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:21.143 [403/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:21.143 [404/722] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:21.143 [405/722] Linking static target lib/librte_table.a 00:02:21.404 [406/722] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.404 [407/722] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:21.404 [408/722] Compiling C object 
lib/librte_node.a.p/node_kernel_tx.c.o 00:02:21.404 [409/722] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:21.674 [410/722] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.942 [411/722] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:21.943 [412/722] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:21.943 [413/722] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:21.943 [414/722] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:21.943 [415/722] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:22.218 [416/722] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:22.218 [417/722] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:22.218 [418/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:22.218 [419/722] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:22.218 [420/722] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:22.218 [421/722] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:22.218 [422/722] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:22.218 [423/722] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.481 [424/722] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.481 [425/722] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output) 00:02:22.481 [426/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:22.481 [427/722] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:22.481 [428/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:22.481 [429/722] Linking static target 
drivers/libtmp_rte_bus_pci.a 00:02:22.744 [430/722] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:22.744 [431/722] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:22.744 [432/722] Linking static target drivers/librte_bus_vdev.a 00:02:22.744 [433/722] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:22.744 [434/722] Linking static target lib/librte_port.a 00:02:22.744 [435/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:22.744 [436/722] Compiling C object drivers/librte_bus_vdev.so.24.2.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:23.009 [437/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:23.009 [438/722] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:23.009 [439/722] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:23.009 [440/722] Linking static target lib/librte_graph.a 00:02:23.009 [441/722] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:23.009 [442/722] Linking static target drivers/librte_bus_pci.a 00:02:23.009 [443/722] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.009 [444/722] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:23.009 [445/722] Compiling C object drivers/librte_bus_pci.so.24.2.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:23.271 [446/722] Linking target lib/librte_eal.so.24.2 00:02:23.271 [447/722] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.271 [448/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:23.271 [449/722] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:23.531 [450/722] Generating symbol file lib/librte_eal.so.24.2.p/librte_eal.so.24.2.symbols 00:02:23.532 [451/722] Compiling C object 
drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:23.532 [452/722] Linking target lib/librte_ring.so.24.2 00:02:23.532 [453/722] Linking target lib/librte_meter.so.24.2 00:02:23.532 [454/722] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.532 [455/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:23.800 [456/722] Linking target lib/librte_timer.so.24.2 00:02:23.800 [457/722] Linking target lib/librte_pci.so.24.2 00:02:23.800 [458/722] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:23.800 [459/722] Generating symbol file lib/librte_ring.so.24.2.p/librte_ring.so.24.2.symbols 00:02:23.800 [460/722] Generating symbol file lib/librte_meter.so.24.2.p/librte_meter.so.24.2.symbols 00:02:23.800 [461/722] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.800 [462/722] Linking target lib/librte_acl.so.24.2 00:02:23.800 [463/722] Linking target lib/librte_rcu.so.24.2 00:02:23.800 [464/722] Linking target lib/librte_mempool.so.24.2 00:02:23.800 [465/722] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:23.800 [466/722] Linking target lib/librte_cfgfile.so.24.2 00:02:24.060 [467/722] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:24.060 [468/722] Linking target lib/librte_dmadev.so.24.2 00:02:24.060 [469/722] Generating symbol file lib/librte_timer.so.24.2.p/librte_timer.so.24.2.symbols 00:02:24.060 [470/722] Generating symbol file lib/librte_pci.so.24.2.p/librte_pci.so.24.2.symbols 00:02:24.060 [471/722] Linking target lib/librte_jobstats.so.24.2 00:02:24.060 [472/722] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.060 [473/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:24.060 [474/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:24.060 [475/722] 
Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:24.060 [476/722] Linking target lib/librte_rawdev.so.24.2 00:02:24.060 [477/722] Linking target lib/librte_stack.so.24.2 00:02:24.060 [478/722] Linking target drivers/librte_bus_pci.so.24.2 00:02:24.060 [479/722] Linking target drivers/librte_bus_vdev.so.24.2 00:02:24.060 [480/722] Generating symbol file lib/librte_rcu.so.24.2.p/librte_rcu.so.24.2.symbols 00:02:24.060 [481/722] Generating symbol file lib/librte_mempool.so.24.2.p/librte_mempool.so.24.2.symbols 00:02:24.060 [482/722] Generating symbol file lib/librte_acl.so.24.2.p/librte_acl.so.24.2.symbols 00:02:24.060 [483/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:24.334 [484/722] Linking target lib/librte_mbuf.so.24.2 00:02:24.334 [485/722] Linking target lib/librte_rib.so.24.2 00:02:24.334 [486/722] Generating symbol file lib/librte_dmadev.so.24.2.p/librte_dmadev.so.24.2.symbols 00:02:24.334 [487/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:24.334 [488/722] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:24.334 [489/722] Generating symbol file drivers/librte_bus_vdev.so.24.2.p/librte_bus_vdev.so.24.2.symbols 00:02:24.334 [490/722] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:24.334 [491/722] Generating symbol file drivers/librte_bus_pci.so.24.2.p/librte_bus_pci.so.24.2.symbols 00:02:24.334 [492/722] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:24.334 [493/722] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:24.334 [494/722] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:24.334 [495/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:24.334 [496/722] Generating symbol file lib/librte_mbuf.so.24.2.p/librte_mbuf.so.24.2.symbols 00:02:24.334 [497/722] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 
00:02:24.334 [498/722] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:24.334 [499/722] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:24.334 [500/722] Generating symbol file lib/librte_rib.so.24.2.p/librte_rib.so.24.2.symbols 00:02:24.334 [501/722] Compiling C object drivers/librte_mempool_ring.so.24.2.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:24.593 [502/722] Linking target lib/librte_bbdev.so.24.2 00:02:24.593 [503/722] Linking target lib/librte_compressdev.so.24.2 00:02:24.593 [504/722] Linking target lib/librte_net.so.24.2 00:02:24.593 [505/722] Linking target lib/librte_distributor.so.24.2 00:02:24.593 [506/722] Linking target lib/librte_cryptodev.so.24.2 00:02:24.593 [507/722] Linking target lib/librte_gpudev.so.24.2 00:02:24.593 [508/722] Linking target lib/librte_regexdev.so.24.2 00:02:24.593 [509/722] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:24.593 [510/722] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:24.593 [511/722] Linking target lib/librte_mldev.so.24.2 00:02:24.593 [512/722] Linking target lib/librte_reorder.so.24.2 00:02:24.593 [513/722] Linking static target drivers/librte_mempool_ring.a 00:02:24.593 [514/722] Linking target lib/librte_sched.so.24.2 00:02:24.593 [515/722] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:24.593 [516/722] Linking target drivers/librte_mempool_ring.so.24.2 00:02:24.593 [517/722] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o 00:02:24.593 [518/722] Linking target lib/librte_fib.so.24.2 00:02:24.593 [519/722] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:24.593 [520/722] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:24.855 [521/722] Generating symbol file lib/librte_cryptodev.so.24.2.p/librte_cryptodev.so.24.2.symbols 00:02:24.855 [522/722] Generating symbol file lib/librte_net.so.24.2.p/librte_net.so.24.2.symbols 00:02:24.855 [523/722] Generating symbol file 
lib/librte_sched.so.24.2.p/librte_sched.so.24.2.symbols 00:02:24.855 [524/722] Generating symbol file lib/librte_reorder.so.24.2.p/librte_reorder.so.24.2.symbols 00:02:24.855 [525/722] Linking target lib/librte_cmdline.so.24.2 00:02:24.855 [526/722] Linking target lib/librte_security.so.24.2 00:02:24.855 [527/722] Linking target lib/librte_hash.so.24.2 00:02:25.118 [528/722] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:25.118 [529/722] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:25.118 [530/722] Generating symbol file lib/librte_security.so.24.2.p/librte_security.so.24.2.symbols 00:02:25.118 [531/722] Generating symbol file lib/librte_hash.so.24.2.p/librte_hash.so.24.2.symbols 00:02:25.118 [532/722] Linking target lib/librte_efd.so.24.2 00:02:25.118 [533/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:25.118 [534/722] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:25.381 [535/722] Linking target lib/librte_member.so.24.2 00:02:25.381 [536/722] Linking target lib/librte_lpm.so.24.2 00:02:25.381 [537/722] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:25.381 [538/722] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:25.381 [539/722] Linking target lib/librte_ipsec.so.24.2 00:02:25.381 [540/722] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:25.381 [541/722] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:25.381 [542/722] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:25.381 [543/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:25.644 [544/722] Generating symbol file lib/librte_lpm.so.24.2.p/librte_lpm.so.24.2.symbols 00:02:25.644 [545/722] Generating symbol file lib/librte_ipsec.so.24.2.p/librte_ipsec.so.24.2.symbols 00:02:25.644 [546/722] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:25.644 [547/722] 
Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:25.644 [548/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:25.644 [549/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:25.644 [550/722] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:25.905 [551/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:25.905 [552/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:25.905 [553/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:25.905 [554/722] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:26.168 [555/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:26.168 [556/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:26.168 [557/722] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:26.433 [558/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:26.433 [559/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:26.433 [560/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:26.433 [561/722] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:26.696 [562/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:26.696 [563/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:26.696 [564/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:26.696 [565/722] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:26.696 [566/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:26.696 [567/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 
00:02:26.956 [568/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:26.956 [569/722] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:26.956 [570/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:27.219 [571/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:27.482 [572/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:27.482 [573/722] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:27.482 [574/722] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:27.482 [575/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:27.760 [576/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:27.760 [577/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:27.760 [578/722] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:27.760 [579/722] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:28.024 [580/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:28.024 [581/722] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.024 [582/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:28.024 [583/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:28.024 [584/722] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:28.024 [585/722] Linking target lib/librte_ethdev.so.24.2 00:02:28.286 [586/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:28.286 [587/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:28.286 [588/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 
00:02:28.286 [589/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:28.286 [590/722] Generating symbol file lib/librte_ethdev.so.24.2.p/librte_ethdev.so.24.2.symbols 00:02:28.548 [591/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:28.548 [592/722] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:28.548 [593/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:28.548 [594/722] Linking target lib/librte_metrics.so.24.2 00:02:28.548 [595/722] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:28.548 [596/722] Linking target lib/librte_bpf.so.24.2 00:02:28.548 [597/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:28.548 [598/722] Linking target lib/librte_gro.so.24.2 00:02:28.548 [599/722] Linking target lib/librte_eventdev.so.24.2 00:02:28.548 [600/722] Linking target lib/librte_gso.so.24.2 00:02:28.548 [601/722] Linking target lib/librte_pcapng.so.24.2 00:02:28.548 [602/722] Linking target lib/librte_ip_frag.so.24.2 00:02:28.548 [603/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:28.548 [604/722] Linking static target lib/librte_pdcp.a 00:02:28.548 [605/722] Linking target lib/librte_power.so.24.2 00:02:28.811 [606/722] Generating symbol file lib/librte_metrics.so.24.2.p/librte_metrics.so.24.2.symbols 00:02:28.811 [607/722] Generating symbol file lib/librte_bpf.so.24.2.p/librte_bpf.so.24.2.symbols 00:02:28.811 [608/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:28.811 [609/722] Linking target lib/librte_bitratestats.so.24.2 00:02:28.812 [610/722] Generating symbol file lib/librte_eventdev.so.24.2.p/librte_eventdev.so.24.2.symbols 00:02:28.812 [611/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:28.812 [612/722] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:28.812 [613/722] Linking target lib/librte_latencystats.so.24.2 00:02:28.812 [614/722] Generating symbol file lib/librte_pcapng.so.24.2.p/librte_pcapng.so.24.2.symbols 00:02:28.812 [615/722] Generating symbol file lib/librte_ip_frag.so.24.2.p/librte_ip_frag.so.24.2.symbols 00:02:28.812 [616/722] Linking target lib/librte_dispatcher.so.24.2 00:02:28.812 [617/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:28.812 [618/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:29.076 [619/722] Linking target lib/librte_pdump.so.24.2 00:02:29.076 [620/722] Linking target lib/librte_graph.so.24.2 00:02:29.076 [621/722] Linking target lib/librte_port.so.24.2 00:02:29.076 [622/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:29.076 [623/722] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:29.076 [624/722] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.076 [625/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:29.338 [626/722] Generating symbol file lib/librte_graph.so.24.2.p/librte_graph.so.24.2.symbols 00:02:29.338 [627/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:29.338 [628/722] Generating symbol file lib/librte_port.so.24.2.p/librte_port.so.24.2.symbols 00:02:29.338 [629/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:29.338 [630/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:29.338 [631/722] Linking target lib/librte_pdcp.so.24.2 00:02:29.338 [632/722] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:29.338 [633/722] Linking target lib/librte_table.so.24.2 00:02:29.338 [634/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 
00:02:29.338 [635/722] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:29.599 [636/722] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:29.599 [637/722] Generating symbol file lib/librte_table.so.24.2.p/librte_table.so.24.2.symbols 00:02:29.599 [638/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:29.599 [639/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:29.860 [640/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:29.860 [641/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:30.119 [642/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:30.119 [643/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:30.119 [644/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:30.379 [645/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:30.379 [646/722] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:30.379 [647/722] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:30.379 [648/722] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:30.379 [649/722] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:30.638 [650/722] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:30.638 [651/722] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:30.638 [652/722] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:30.638 [653/722] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:30.897 [654/722] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:30.897 [655/722] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:30.897 [656/722] Compiling C object 
app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:30.897 [657/722] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:31.157 [658/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:31.157 [659/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:31.157 [660/722] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:31.157 [661/722] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:31.157 [662/722] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:31.416 [663/722] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:31.416 [664/722] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:31.416 [665/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:31.416 [666/722] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:31.416 [667/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:31.416 [668/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:31.675 [669/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:31.675 [670/722] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o 00:02:31.675 [671/722] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:31.933 [672/722] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:31.933 [673/722] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:31.933 [674/722] Compiling C object drivers/librte_net_i40e.so.24.2.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:31.933 [675/722] Linking static target drivers/librte_net_i40e.a 00:02:32.192 [676/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:32.451 
[677/722] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:32.451 [678/722] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.451 [679/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:32.710 [680/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:32.710 [681/722] Linking target drivers/librte_net_i40e.so.24.2 00:02:32.710 [682/722] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:32.968 [683/722] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:32.969 [684/722] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:32.969 [685/722] Linking static target lib/librte_node.a 00:02:33.227 [686/722] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.486 [687/722] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:33.486 [688/722] Linking target lib/librte_node.so.24.2 00:02:34.862 [689/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:35.120 [690/722] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:35.120 [691/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:36.494 [692/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:36.752 [693/722] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:44.897 [694/722] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:16.968 [695/722] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:16.968 [696/722] Linking static target lib/librte_vhost.a 00:03:16.968 [697/722] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.968 [698/722] Linking target lib/librte_vhost.so.24.2 00:03:26.946 [699/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:26.946 [700/722] 
Linking static target lib/librte_pipeline.a 00:03:27.883 [701/722] Linking target app/dpdk-dumpcap 00:03:27.883 [702/722] Linking target app/dpdk-test-gpudev 00:03:27.883 [703/722] Linking target app/dpdk-test-fib 00:03:27.883 [704/722] Linking target app/dpdk-test-bbdev 00:03:27.883 [705/722] Linking target app/dpdk-test-dma-perf 00:03:27.883 [706/722] Linking target app/dpdk-test-pipeline 00:03:27.883 [707/722] Linking target app/dpdk-test-acl 00:03:27.883 [708/722] Linking target app/dpdk-test-crypto-perf 00:03:27.883 [709/722] Linking target app/dpdk-pdump 00:03:27.883 [710/722] Linking target app/dpdk-test-flow-perf 00:03:27.883 [711/722] Linking target app/dpdk-test-cmdline 00:03:27.883 [712/722] Linking target app/dpdk-test-sad 00:03:27.883 [713/722] Linking target app/dpdk-test-regex 00:03:27.883 [714/722] Linking target app/dpdk-test-security-perf 00:03:27.883 [715/722] Linking target app/dpdk-proc-info 00:03:27.883 [716/722] Linking target app/dpdk-test-mldev 00:03:27.883 [717/722] Linking target app/dpdk-graph 00:03:27.883 [718/722] Linking target app/dpdk-test-eventdev 00:03:27.883 [719/722] Linking target app/dpdk-test-compress-perf 00:03:27.883 [720/722] Linking target app/dpdk-testpmd 00:03:29.787 [721/722] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.787 [722/722] Linking target lib/librte_pipeline.so.24.2 00:03:29.787 04:59:06 -- common/autobuild_common.sh@187 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:03:29.787 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:03:29.787 [0/1] Installing files. 
00:03:30.048 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:30.048 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:03:30.048 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:30.048 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:30.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:30.049 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.049 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.049 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:30.049 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.049 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.050 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.050 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.050 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.050 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.050 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.050 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.050 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.050 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.050 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.050 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.050 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.050 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.050 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.050 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.050 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.050 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.050 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.050 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.050 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.050 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.050 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.050 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.050 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.050 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.050 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:30.050 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:30.050 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:30.050 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:30.050 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:30.050 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:30.050 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:30.050 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:30.050 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:30.050 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:30.050 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:30.050 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:30.050 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:30.050 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:30.050 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:30.050 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:30.050 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:30.050 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:30.050 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:30.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:30.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:30.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:30.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:30.313 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:30.314 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:30.314 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.314 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:30.314 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:30.314 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:30.314 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:30.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:30.315 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.315 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.315 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.315 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.315 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:30.315 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.316 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 
00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:30.316 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:03:30.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb
00:03:30.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb
00:03:30.317 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_log.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_kvargs.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_argparse.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_argparse.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_telemetry.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_eal.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_rcu.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_mempool.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_mbuf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_net.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_meter.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_ethdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_cmdline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_metrics.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_hash.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_timer.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_acl.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_bbdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_bitratestats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_bpf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_cfgfile.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_compressdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_cryptodev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_distributor.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_dmadev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_efd.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_eventdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_dispatcher.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_gpudev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_gro.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_gso.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_ip_frag.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_jobstats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_latencystats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_lpm.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_member.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_pcapng.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_power.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.317 Installing lib/librte_rawdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.890 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.890 Installing lib/librte_regexdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.890 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.890 Installing lib/librte_mldev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.890 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.890 Installing lib/librte_rib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.890 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.890 Installing lib/librte_reorder.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.890 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.890 Installing lib/librte_sched.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.890 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.890 Installing lib/librte_security.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.891 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.891 Installing lib/librte_stack.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.891 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.891 Installing lib/librte_vhost.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.891 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.891 Installing lib/librte_ipsec.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.891 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.891 Installing lib/librte_pdcp.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.891 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.891 Installing lib/librte_fib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.891 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.891 Installing lib/librte_port.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.891 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.891 Installing lib/librte_pdump.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.891 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.891 Installing lib/librte_table.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.891 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.891 Installing lib/librte_pipeline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.891 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.891 Installing lib/librte_graph.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.891 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.891 Installing lib/librte_node.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.891 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.891 Installing drivers/librte_bus_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2
00:03:30.891 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.891 Installing drivers/librte_bus_vdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2
00:03:30.891 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.891 Installing drivers/librte_mempool_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2
00:03:30.891 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:30.891 Installing drivers/librte_net_i40e.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2
00:03:30.891 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:30.891 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:30.891 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:30.891 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:30.891 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:30.891 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:30.891 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:30.891 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:30.891 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:30.891 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:30.891 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:30.891 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:30.891 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:30.891 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:30.891 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:30.891 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:30.891 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:30.891 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:30.891 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:30.891 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:30.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/argparse/rte_argparse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:30.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:30.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:30.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:30.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:30.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:30.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:30.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:30.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:30.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:30.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:30.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:30.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.892 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.893 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.893 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:30.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.156 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.156 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:31.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:31.156 Installing symlink pointing to librte_log.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 
00:03:31.156 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:03:31.156 Installing symlink pointing to librte_kvargs.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:03:31.156 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:03:31.156 Installing symlink pointing to librte_argparse.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_argparse.so.24 00:03:31.156 Installing symlink pointing to librte_argparse.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_argparse.so 00:03:31.156 Installing symlink pointing to librte_telemetry.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:03:31.156 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:03:31.156 Installing symlink pointing to librte_eal.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:03:31.156 Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:03:31.156 Installing symlink pointing to librte_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:03:31.156 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:03:31.156 Installing symlink pointing to librte_rcu.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:03:31.156 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:03:31.156 Installing symlink pointing to librte_mempool.so.24.2 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:03:31.156 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:03:31.156 Installing symlink pointing to librte_mbuf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:03:31.156 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:03:31.156 Installing symlink pointing to librte_net.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:03:31.156 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:03:31.156 Installing symlink pointing to librte_meter.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:03:31.156 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:03:31.156 Installing symlink pointing to librte_ethdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:03:31.156 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:03:31.156 Installing symlink pointing to librte_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:03:31.156 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:03:31.156 Installing symlink pointing to librte_cmdline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:03:31.156 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:03:31.156 Installing symlink pointing to 
librte_metrics.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:03:31.156 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:03:31.156 Installing symlink pointing to librte_hash.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:03:31.156 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:03:31.156 Installing symlink pointing to librte_timer.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:03:31.157 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:03:31.157 Installing symlink pointing to librte_acl.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:03:31.157 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:03:31.157 Installing symlink pointing to librte_bbdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:03:31.157 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:03:31.157 Installing symlink pointing to librte_bitratestats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:03:31.157 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:03:31.157 Installing symlink pointing to librte_bpf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:03:31.157 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 
00:03:31.157 Installing symlink pointing to librte_cfgfile.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:03:31.157 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:03:31.157 Installing symlink pointing to librte_compressdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:03:31.157 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:03:31.157 Installing symlink pointing to librte_cryptodev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:03:31.157 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:03:31.157 Installing symlink pointing to librte_distributor.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:03:31.157 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:03:31.157 Installing symlink pointing to librte_dmadev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:03:31.157 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:03:31.157 Installing symlink pointing to librte_efd.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:03:31.157 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:03:31.157 Installing symlink pointing to librte_eventdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:03:31.157 Installing symlink pointing to 
librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:03:31.157 Installing symlink pointing to librte_dispatcher.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:03:31.157 Installing symlink pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:03:31.157 Installing symlink pointing to librte_gpudev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:03:31.157 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:03:31.157 Installing symlink pointing to librte_gro.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:03:31.157 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:03:31.157 Installing symlink pointing to librte_gso.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:03:31.157 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:03:31.157 Installing symlink pointing to librte_ip_frag.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:03:31.157 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:03:31.157 './librte_bus_pci.so' -> 'dpdk/pmds-24.2/librte_bus_pci.so' 00:03:31.157 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24' 00:03:31.157 './librte_bus_pci.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24.2' 00:03:31.157 './librte_bus_vdev.so' -> 'dpdk/pmds-24.2/librte_bus_vdev.so' 00:03:31.157 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24' 00:03:31.157 './librte_bus_vdev.so.24.2' -> 
'dpdk/pmds-24.2/librte_bus_vdev.so.24.2' 00:03:31.157 './librte_mempool_ring.so' -> 'dpdk/pmds-24.2/librte_mempool_ring.so' 00:03:31.157 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24' 00:03:31.157 './librte_mempool_ring.so.24.2' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24.2' 00:03:31.157 './librte_net_i40e.so' -> 'dpdk/pmds-24.2/librte_net_i40e.so' 00:03:31.157 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24' 00:03:31.157 './librte_net_i40e.so.24.2' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24.2' 00:03:31.157 Installing symlink pointing to librte_jobstats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:03:31.157 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:03:31.157 Installing symlink pointing to librte_latencystats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:03:31.157 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:03:31.157 Installing symlink pointing to librte_lpm.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:03:31.157 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:03:31.157 Installing symlink pointing to librte_member.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:03:31.157 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:03:31.157 Installing symlink pointing to librte_pcapng.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:03:31.157 Installing symlink pointing to librte_pcapng.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:03:31.157 Installing symlink pointing to librte_power.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:03:31.157 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:03:31.157 Installing symlink pointing to librte_rawdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:03:31.157 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:03:31.157 Installing symlink pointing to librte_regexdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:03:31.157 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:03:31.157 Installing symlink pointing to librte_mldev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:03:31.157 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:03:31.157 Installing symlink pointing to librte_rib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:03:31.157 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:03:31.157 Installing symlink pointing to librte_reorder.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:03:31.157 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:03:31.157 Installing symlink pointing to librte_sched.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:03:31.157 Installing 
symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:03:31.157 Installing symlink pointing to librte_security.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:03:31.157 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:03:31.157 Installing symlink pointing to librte_stack.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:03:31.157 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:03:31.157 Installing symlink pointing to librte_vhost.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:03:31.157 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:03:31.157 Installing symlink pointing to librte_ipsec.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:03:31.157 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:03:31.157 Installing symlink pointing to librte_pdcp.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:03:31.157 Installing symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:03:31.157 Installing symlink pointing to librte_fib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:03:31.157 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:03:31.157 Installing symlink pointing to librte_port.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 
00:03:31.157 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:03:31.157 Installing symlink pointing to librte_pdump.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:03:31.157 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:03:31.157 Installing symlink pointing to librte_table.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:03:31.157 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:03:31.157 Installing symlink pointing to librte_pipeline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:03:31.157 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:03:31.157 Installing symlink pointing to librte_graph.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:03:31.157 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:03:31.157 Installing symlink pointing to librte_node.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:03:31.157 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:03:31.157 Installing symlink pointing to librte_bus_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24 00:03:31.157 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:03:31.157 Installing symlink pointing to librte_bus_vdev.so.24.2 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24 00:03:31.158 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:03:31.158 Installing symlink pointing to librte_mempool_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24 00:03:31.158 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:03:31.158 Installing symlink pointing to librte_net_i40e.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24 00:03:31.158 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:03:31.158 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.2' 00:03:31.158 04:59:08 -- common/autobuild_common.sh@189 -- $ uname -s 00:03:31.158 04:59:08 -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:31.158 04:59:08 -- common/autobuild_common.sh@200 -- $ cat 00:03:31.158 04:59:08 -- common/autobuild_common.sh@205 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:31.158 00:03:31.158 real 1m28.129s 00:03:31.158 user 18m18.444s 00:03:31.158 sys 2m9.537s 00:03:31.158 04:59:08 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:03:31.158 04:59:08 -- common/autotest_common.sh@10 -- $ set +x 00:03:31.158 ************************************ 00:03:31.158 END TEST build_native_dpdk 00:03:31.158 ************************************ 00:03:31.158 04:59:08 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:31.158 04:59:08 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:31.158 04:59:08 -- 
spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:31.158 04:59:08 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:31.158 04:59:08 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:31.158 04:59:08 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:31.158 04:59:08 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:31.158 04:59:08 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:03:31.158 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:03:31.418 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.418 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.418 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:03:31.678 Using 'verbs' RDMA provider 00:03:42.230 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:50.341 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:50.341 Creating mk/config.mk...done. 00:03:50.341 Creating mk/cc.flags.mk...done. 00:03:50.341 Type 'make' to build. 
00:03:50.341 04:59:27 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:03:50.341 04:59:27 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:03:50.341 04:59:27 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:03:50.341 04:59:27 -- common/autotest_common.sh@10 -- $ set +x 00:03:50.341 ************************************ 00:03:50.341 START TEST make 00:03:50.341 ************************************ 00:03:50.341 04:59:27 -- common/autotest_common.sh@1111 -- $ make -j48 00:03:50.600 make[1]: Nothing to be done for 'all'. 00:03:52.557 The Meson build system 00:03:52.557 Version: 1.3.1 00:03:52.557 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:52.557 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:52.557 Build type: native build 00:03:52.557 Project name: libvfio-user 00:03:52.557 Project version: 0.0.1 00:03:52.557 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:52.557 C linker for the host machine: gcc ld.bfd 2.39-16 00:03:52.557 Host machine cpu family: x86_64 00:03:52.557 Host machine cpu: x86_64 00:03:52.557 Run-time dependency threads found: YES 00:03:52.557 Library dl found: YES 00:03:52.557 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:52.558 Run-time dependency json-c found: YES 0.17 00:03:52.558 Run-time dependency cmocka found: YES 1.1.7 00:03:52.558 Program pytest-3 found: NO 00:03:52.558 Program flake8 found: NO 00:03:52.558 Program misspell-fixer found: NO 00:03:52.558 Program restructuredtext-lint found: NO 00:03:52.558 Program valgrind found: YES (/usr/bin/valgrind) 00:03:52.558 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:52.558 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:52.558 Compiler for C supports arguments -Wwrite-strings: YES 00:03:52.558 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses 
feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:52.558 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:52.558 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:52.558 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:52.558 Build targets in project: 8 00:03:52.558 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:52.558 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:52.558 00:03:52.558 libvfio-user 0.0.1 00:03:52.558 00:03:52.558 User defined options 00:03:52.558 buildtype : debug 00:03:52.558 default_library: shared 00:03:52.558 libdir : /usr/local/lib 00:03:52.558 00:03:52.558 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:53.141 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:53.141 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:53.141 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:53.141 [3/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:53.141 [4/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:53.404 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:53.404 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:53.404 [7/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:53.404 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:53.404 [9/37] Compiling C object samples/null.p/null.c.o 00:03:53.404 [10/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:53.404 [11/37] Compiling C object 
samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:53.404 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:53.404 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:53.404 [14/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:53.404 [15/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:53.404 [16/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:53.404 [17/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:53.404 [18/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:53.404 [19/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:53.404 [20/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:53.404 [21/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:53.404 [22/37] Compiling C object samples/server.p/server.c.o 00:03:53.404 [23/37] Compiling C object samples/client.p/client.c.o 00:03:53.404 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:53.404 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:53.404 [26/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:53.404 [27/37] Linking target samples/client 00:03:53.668 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:53.669 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:53.669 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:03:53.669 [31/37] Linking target test/unit_tests 00:03:53.931 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:53.931 [33/37] Linking target samples/null 00:03:53.931 [34/37] Linking target samples/gpio-pci-idio-16 00:03:53.931 [35/37] Linking target samples/server 00:03:53.931 [36/37] Linking target samples/shadow_ioeventfd_server 00:03:53.931 [37/37] Linking target samples/lspci 00:03:53.931 INFO: autodetecting backend as ninja 00:03:53.931 INFO: calculating backend command to run: 
/usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:53.931 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:54.878 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:54.878 ninja: no work to do. 00:04:07.075 CC lib/ut/ut.o 00:04:07.075 CC lib/ut_mock/mock.o 00:04:07.075 CC lib/log/log.o 00:04:07.075 CC lib/log/log_flags.o 00:04:07.075 CC lib/log/log_deprecated.o 00:04:07.075 LIB libspdk_ut_mock.a 00:04:07.075 SO libspdk_ut_mock.so.6.0 00:04:07.075 LIB libspdk_log.a 00:04:07.075 LIB libspdk_ut.a 00:04:07.075 SO libspdk_ut.so.2.0 00:04:07.075 SO libspdk_log.so.7.0 00:04:07.075 SYMLINK libspdk_ut_mock.so 00:04:07.075 SYMLINK libspdk_ut.so 00:04:07.075 SYMLINK libspdk_log.so 00:04:07.075 CC lib/ioat/ioat.o 00:04:07.075 CC lib/dma/dma.o 00:04:07.075 CXX lib/trace_parser/trace.o 00:04:07.075 CC lib/util/base64.o 00:04:07.075 CC lib/util/bit_array.o 00:04:07.075 CC lib/util/cpuset.o 00:04:07.075 CC lib/util/crc16.o 00:04:07.075 CC lib/util/crc32.o 00:04:07.075 CC lib/util/crc32c.o 00:04:07.075 CC lib/util/crc32_ieee.o 00:04:07.075 CC lib/util/crc64.o 00:04:07.075 CC lib/util/dif.o 00:04:07.075 CC lib/util/fd.o 00:04:07.075 CC lib/util/file.o 00:04:07.075 CC lib/util/hexlify.o 00:04:07.075 CC lib/util/iov.o 00:04:07.075 CC lib/util/math.o 00:04:07.075 CC lib/util/pipe.o 00:04:07.075 CC lib/util/strerror_tls.o 00:04:07.075 CC lib/util/string.o 00:04:07.075 CC lib/util/uuid.o 00:04:07.075 CC lib/util/fd_group.o 00:04:07.075 CC lib/util/xor.o 00:04:07.075 CC lib/util/zipf.o 00:04:07.075 CC lib/vfio_user/host/vfio_user_pci.o 00:04:07.075 CC lib/vfio_user/host/vfio_user.o 00:04:07.075 LIB libspdk_dma.a 00:04:07.075 SO libspdk_dma.so.4.0 00:04:07.075 SYMLINK libspdk_dma.so 00:04:07.075 LIB libspdk_ioat.a 00:04:07.075 SO 
libspdk_ioat.so.7.0 00:04:07.075 LIB libspdk_vfio_user.a 00:04:07.075 SYMLINK libspdk_ioat.so 00:04:07.076 SO libspdk_vfio_user.so.5.0 00:04:07.076 SYMLINK libspdk_vfio_user.so 00:04:07.076 LIB libspdk_util.a 00:04:07.076 SO libspdk_util.so.9.0 00:04:07.076 SYMLINK libspdk_util.so 00:04:07.076 CC lib/env_dpdk/env.o 00:04:07.076 CC lib/idxd/idxd.o 00:04:07.076 CC lib/vmd/vmd.o 00:04:07.076 CC lib/rdma/common.o 00:04:07.076 CC lib/conf/conf.o 00:04:07.076 CC lib/json/json_parse.o 00:04:07.076 CC lib/env_dpdk/memory.o 00:04:07.076 CC lib/idxd/idxd_user.o 00:04:07.076 CC lib/vmd/led.o 00:04:07.076 CC lib/json/json_util.o 00:04:07.076 CC lib/env_dpdk/pci.o 00:04:07.076 CC lib/rdma/rdma_verbs.o 00:04:07.076 CC lib/json/json_write.o 00:04:07.076 CC lib/env_dpdk/init.o 00:04:07.076 CC lib/env_dpdk/threads.o 00:04:07.076 CC lib/env_dpdk/pci_ioat.o 00:04:07.076 CC lib/env_dpdk/pci_virtio.o 00:04:07.076 CC lib/env_dpdk/pci_vmd.o 00:04:07.076 CC lib/env_dpdk/pci_idxd.o 00:04:07.076 CC lib/env_dpdk/pci_event.o 00:04:07.076 CC lib/env_dpdk/sigbus_handler.o 00:04:07.076 CC lib/env_dpdk/pci_dpdk.o 00:04:07.076 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:07.076 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:07.076 LIB libspdk_trace_parser.a 00:04:07.334 SO libspdk_trace_parser.so.5.0 00:04:07.334 SYMLINK libspdk_trace_parser.so 00:04:07.334 LIB libspdk_conf.a 00:04:07.334 SO libspdk_conf.so.6.0 00:04:07.334 LIB libspdk_rdma.a 00:04:07.593 LIB libspdk_json.a 00:04:07.593 SYMLINK libspdk_conf.so 00:04:07.593 SO libspdk_rdma.so.6.0 00:04:07.593 SO libspdk_json.so.6.0 00:04:07.593 SYMLINK libspdk_rdma.so 00:04:07.593 SYMLINK libspdk_json.so 00:04:07.593 CC lib/jsonrpc/jsonrpc_server.o 00:04:07.593 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:07.593 CC lib/jsonrpc/jsonrpc_client.o 00:04:07.593 LIB libspdk_idxd.a 00:04:07.593 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:07.851 SO libspdk_idxd.so.12.0 00:04:07.851 SYMLINK libspdk_idxd.so 00:04:07.851 LIB libspdk_vmd.a 00:04:07.851 SO libspdk_vmd.so.6.0 
00:04:07.851 SYMLINK libspdk_vmd.so 00:04:08.109 LIB libspdk_jsonrpc.a 00:04:08.109 SO libspdk_jsonrpc.so.6.0 00:04:08.109 SYMLINK libspdk_jsonrpc.so 00:04:08.367 CC lib/rpc/rpc.o 00:04:08.367 LIB libspdk_rpc.a 00:04:08.625 SO libspdk_rpc.so.6.0 00:04:08.625 SYMLINK libspdk_rpc.so 00:04:08.625 CC lib/notify/notify.o 00:04:08.625 CC lib/trace/trace.o 00:04:08.625 CC lib/notify/notify_rpc.o 00:04:08.625 CC lib/trace/trace_flags.o 00:04:08.625 CC lib/keyring/keyring.o 00:04:08.625 CC lib/trace/trace_rpc.o 00:04:08.625 CC lib/keyring/keyring_rpc.o 00:04:08.884 LIB libspdk_notify.a 00:04:08.884 SO libspdk_notify.so.6.0 00:04:08.884 LIB libspdk_trace.a 00:04:08.884 LIB libspdk_keyring.a 00:04:08.884 SYMLINK libspdk_notify.so 00:04:08.884 SO libspdk_keyring.so.1.0 00:04:08.884 SO libspdk_trace.so.10.0 00:04:09.143 SYMLINK libspdk_keyring.so 00:04:09.143 SYMLINK libspdk_trace.so 00:04:09.143 LIB libspdk_env_dpdk.a 00:04:09.143 CC lib/sock/sock.o 00:04:09.143 CC lib/sock/sock_rpc.o 00:04:09.143 CC lib/thread/thread.o 00:04:09.143 CC lib/thread/iobuf.o 00:04:09.143 SO libspdk_env_dpdk.so.14.0 00:04:09.401 SYMLINK libspdk_env_dpdk.so 00:04:09.660 LIB libspdk_sock.a 00:04:09.660 SO libspdk_sock.so.9.0 00:04:09.660 SYMLINK libspdk_sock.so 00:04:09.918 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:09.918 CC lib/nvme/nvme_ctrlr.o 00:04:09.918 CC lib/nvme/nvme_fabric.o 00:04:09.918 CC lib/nvme/nvme_ns_cmd.o 00:04:09.918 CC lib/nvme/nvme_ns.o 00:04:09.918 CC lib/nvme/nvme_pcie_common.o 00:04:09.918 CC lib/nvme/nvme_pcie.o 00:04:09.918 CC lib/nvme/nvme_qpair.o 00:04:09.918 CC lib/nvme/nvme.o 00:04:09.918 CC lib/nvme/nvme_quirks.o 00:04:09.918 CC lib/nvme/nvme_transport.o 00:04:09.918 CC lib/nvme/nvme_discovery.o 00:04:09.918 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:09.918 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:09.918 CC lib/nvme/nvme_tcp.o 00:04:09.918 CC lib/nvme/nvme_opal.o 00:04:09.918 CC lib/nvme/nvme_io_msg.o 00:04:09.919 CC lib/nvme/nvme_poll_group.o 00:04:09.919 CC lib/nvme/nvme_zns.o 
00:04:09.919 CC lib/nvme/nvme_stubs.o 00:04:09.919 CC lib/nvme/nvme_auth.o 00:04:09.919 CC lib/nvme/nvme_cuse.o 00:04:09.919 CC lib/nvme/nvme_vfio_user.o 00:04:09.919 CC lib/nvme/nvme_rdma.o 00:04:10.855 LIB libspdk_thread.a 00:04:10.855 SO libspdk_thread.so.10.0 00:04:10.855 SYMLINK libspdk_thread.so 00:04:11.113 CC lib/vfu_tgt/tgt_endpoint.o 00:04:11.113 CC lib/blob/blobstore.o 00:04:11.113 CC lib/init/json_config.o 00:04:11.113 CC lib/accel/accel.o 00:04:11.113 CC lib/virtio/virtio.o 00:04:11.113 CC lib/vfu_tgt/tgt_rpc.o 00:04:11.113 CC lib/blob/request.o 00:04:11.113 CC lib/init/subsystem.o 00:04:11.113 CC lib/virtio/virtio_vhost_user.o 00:04:11.113 CC lib/accel/accel_rpc.o 00:04:11.113 CC lib/init/subsystem_rpc.o 00:04:11.113 CC lib/blob/zeroes.o 00:04:11.113 CC lib/virtio/virtio_vfio_user.o 00:04:11.113 CC lib/accel/accel_sw.o 00:04:11.113 CC lib/blob/blob_bs_dev.o 00:04:11.113 CC lib/virtio/virtio_pci.o 00:04:11.113 CC lib/init/rpc.o 00:04:11.372 LIB libspdk_init.a 00:04:11.372 SO libspdk_init.so.5.0 00:04:11.372 LIB libspdk_virtio.a 00:04:11.372 SYMLINK libspdk_init.so 00:04:11.372 SO libspdk_virtio.so.7.0 00:04:11.372 LIB libspdk_vfu_tgt.a 00:04:11.372 SO libspdk_vfu_tgt.so.3.0 00:04:11.372 SYMLINK libspdk_virtio.so 00:04:11.372 SYMLINK libspdk_vfu_tgt.so 00:04:11.638 CC lib/event/app.o 00:04:11.638 CC lib/event/reactor.o 00:04:11.638 CC lib/event/log_rpc.o 00:04:11.638 CC lib/event/app_rpc.o 00:04:11.638 CC lib/event/scheduler_static.o 00:04:11.904 LIB libspdk_event.a 00:04:11.904 SO libspdk_event.so.13.0 00:04:12.163 SYMLINK libspdk_event.so 00:04:12.163 LIB libspdk_accel.a 00:04:12.163 SO libspdk_accel.so.15.0 00:04:12.163 SYMLINK libspdk_accel.so 00:04:12.163 LIB libspdk_nvme.a 00:04:12.420 SO libspdk_nvme.so.13.0 00:04:12.420 CC lib/bdev/bdev.o 00:04:12.420 CC lib/bdev/bdev_rpc.o 00:04:12.420 CC lib/bdev/bdev_zone.o 00:04:12.420 CC lib/bdev/part.o 00:04:12.420 CC lib/bdev/scsi_nvme.o 00:04:12.677 SYMLINK libspdk_nvme.so 00:04:14.050 LIB libspdk_blob.a 
00:04:14.050 SO libspdk_blob.so.11.0 00:04:14.050 SYMLINK libspdk_blob.so 00:04:14.050 CC lib/lvol/lvol.o 00:04:14.050 CC lib/blobfs/blobfs.o 00:04:14.050 CC lib/blobfs/tree.o 00:04:14.984 LIB libspdk_bdev.a 00:04:14.984 SO libspdk_bdev.so.15.0 00:04:14.984 LIB libspdk_blobfs.a 00:04:14.984 SO libspdk_blobfs.so.10.0 00:04:14.984 LIB libspdk_lvol.a 00:04:14.984 SYMLINK libspdk_bdev.so 00:04:14.984 SYMLINK libspdk_blobfs.so 00:04:14.984 SO libspdk_lvol.so.10.0 00:04:14.984 SYMLINK libspdk_lvol.so 00:04:15.253 CC lib/nvmf/ctrlr.o 00:04:15.253 CC lib/scsi/dev.o 00:04:15.253 CC lib/nbd/nbd.o 00:04:15.253 CC lib/ublk/ublk.o 00:04:15.253 CC lib/nvmf/ctrlr_discovery.o 00:04:15.253 CC lib/scsi/lun.o 00:04:15.253 CC lib/ftl/ftl_core.o 00:04:15.253 CC lib/nbd/nbd_rpc.o 00:04:15.253 CC lib/ublk/ublk_rpc.o 00:04:15.253 CC lib/nvmf/ctrlr_bdev.o 00:04:15.253 CC lib/ftl/ftl_init.o 00:04:15.253 CC lib/scsi/port.o 00:04:15.253 CC lib/nvmf/subsystem.o 00:04:15.253 CC lib/ftl/ftl_layout.o 00:04:15.253 CC lib/scsi/scsi.o 00:04:15.253 CC lib/nvmf/nvmf.o 00:04:15.253 CC lib/ftl/ftl_debug.o 00:04:15.253 CC lib/scsi/scsi_bdev.o 00:04:15.253 CC lib/ftl/ftl_io.o 00:04:15.253 CC lib/nvmf/nvmf_rpc.o 00:04:15.253 CC lib/scsi/scsi_rpc.o 00:04:15.253 CC lib/ftl/ftl_sb.o 00:04:15.253 CC lib/scsi/scsi_pr.o 00:04:15.253 CC lib/nvmf/transport.o 00:04:15.253 CC lib/nvmf/tcp.o 00:04:15.253 CC lib/ftl/ftl_l2p.o 00:04:15.253 CC lib/scsi/task.o 00:04:15.253 CC lib/nvmf/vfio_user.o 00:04:15.253 CC lib/ftl/ftl_l2p_flat.o 00:04:15.253 CC lib/nvmf/rdma.o 00:04:15.253 CC lib/ftl/ftl_nv_cache.o 00:04:15.253 CC lib/ftl/ftl_band.o 00:04:15.253 CC lib/ftl/ftl_band_ops.o 00:04:15.253 CC lib/ftl/ftl_writer.o 00:04:15.253 CC lib/ftl/ftl_rq.o 00:04:15.253 CC lib/ftl/ftl_reloc.o 00:04:15.253 CC lib/ftl/ftl_l2p_cache.o 00:04:15.253 CC lib/ftl/mngt/ftl_mngt.o 00:04:15.253 CC lib/ftl/ftl_p2l.o 00:04:15.253 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:15.253 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:15.253 CC 
lib/ftl/mngt/ftl_mngt_startup.o 00:04:15.253 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:15.253 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:15.253 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:15.253 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:15.253 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:15.253 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:15.512 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:15.512 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:15.512 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:15.512 CC lib/ftl/utils/ftl_conf.o 00:04:15.512 CC lib/ftl/utils/ftl_md.o 00:04:15.512 CC lib/ftl/utils/ftl_mempool.o 00:04:15.512 CC lib/ftl/utils/ftl_bitmap.o 00:04:15.512 CC lib/ftl/utils/ftl_property.o 00:04:15.512 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:15.512 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:15.512 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:15.771 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:15.771 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:15.771 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:15.771 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:15.771 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:15.771 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:15.771 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:15.771 CC lib/ftl/base/ftl_base_dev.o 00:04:15.771 CC lib/ftl/ftl_trace.o 00:04:15.771 CC lib/ftl/base/ftl_base_bdev.o 00:04:16.029 LIB libspdk_nbd.a 00:04:16.029 SO libspdk_nbd.so.7.0 00:04:16.029 SYMLINK libspdk_nbd.so 00:04:16.029 LIB libspdk_scsi.a 00:04:16.029 SO libspdk_scsi.so.9.0 00:04:16.287 SYMLINK libspdk_scsi.so 00:04:16.287 LIB libspdk_ublk.a 00:04:16.287 SO libspdk_ublk.so.3.0 00:04:16.287 SYMLINK libspdk_ublk.so 00:04:16.287 CC lib/iscsi/conn.o 00:04:16.287 CC lib/vhost/vhost.o 00:04:16.287 CC lib/vhost/vhost_rpc.o 00:04:16.287 CC lib/iscsi/init_grp.o 00:04:16.287 CC lib/iscsi/iscsi.o 00:04:16.287 CC lib/vhost/vhost_scsi.o 00:04:16.287 CC lib/iscsi/md5.o 00:04:16.287 CC lib/vhost/vhost_blk.o 00:04:16.287 CC lib/vhost/rte_vhost_user.o 00:04:16.287 CC lib/iscsi/param.o 00:04:16.287 CC lib/iscsi/portal_grp.o 00:04:16.287 CC 
lib/iscsi/tgt_node.o 00:04:16.287 CC lib/iscsi/iscsi_subsystem.o 00:04:16.287 CC lib/iscsi/iscsi_rpc.o 00:04:16.288 CC lib/iscsi/task.o 00:04:16.546 LIB libspdk_ftl.a 00:04:16.546 SO libspdk_ftl.so.9.0 00:04:17.112 SYMLINK libspdk_ftl.so 00:04:17.679 LIB libspdk_vhost.a 00:04:17.679 SO libspdk_vhost.so.8.0 00:04:17.679 SYMLINK libspdk_vhost.so 00:04:17.679 LIB libspdk_nvmf.a 00:04:17.938 SO libspdk_nvmf.so.18.0 00:04:17.938 LIB libspdk_iscsi.a 00:04:17.938 SO libspdk_iscsi.so.8.0 00:04:17.938 SYMLINK libspdk_nvmf.so 00:04:17.938 SYMLINK libspdk_iscsi.so 00:04:18.196 CC module/vfu_device/vfu_virtio.o 00:04:18.196 CC module/env_dpdk/env_dpdk_rpc.o 00:04:18.196 CC module/vfu_device/vfu_virtio_blk.o 00:04:18.196 CC module/vfu_device/vfu_virtio_scsi.o 00:04:18.196 CC module/vfu_device/vfu_virtio_rpc.o 00:04:18.454 CC module/blob/bdev/blob_bdev.o 00:04:18.454 CC module/keyring/file/keyring.o 00:04:18.454 CC module/keyring/file/keyring_rpc.o 00:04:18.454 CC module/accel/error/accel_error.o 00:04:18.454 CC module/accel/iaa/accel_iaa.o 00:04:18.454 CC module/accel/iaa/accel_iaa_rpc.o 00:04:18.454 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:18.454 CC module/accel/error/accel_error_rpc.o 00:04:18.454 CC module/accel/dsa/accel_dsa.o 00:04:18.454 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:18.454 CC module/accel/ioat/accel_ioat.o 00:04:18.454 CC module/accel/ioat/accel_ioat_rpc.o 00:04:18.454 CC module/scheduler/gscheduler/gscheduler.o 00:04:18.454 CC module/accel/dsa/accel_dsa_rpc.o 00:04:18.454 CC module/sock/posix/posix.o 00:04:18.454 LIB libspdk_env_dpdk_rpc.a 00:04:18.454 SO libspdk_env_dpdk_rpc.so.6.0 00:04:18.454 SYMLINK libspdk_env_dpdk_rpc.so 00:04:18.454 LIB libspdk_keyring_file.a 00:04:18.454 LIB libspdk_scheduler_dpdk_governor.a 00:04:18.454 LIB libspdk_scheduler_gscheduler.a 00:04:18.713 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:18.713 SO libspdk_scheduler_gscheduler.so.4.0 00:04:18.713 SO libspdk_keyring_file.so.1.0 00:04:18.713 LIB 
libspdk_accel_error.a 00:04:18.713 LIB libspdk_accel_ioat.a 00:04:18.713 LIB libspdk_scheduler_dynamic.a 00:04:18.713 LIB libspdk_accel_iaa.a 00:04:18.713 SO libspdk_accel_error.so.2.0 00:04:18.713 SO libspdk_scheduler_dynamic.so.4.0 00:04:18.713 SO libspdk_accel_ioat.so.6.0 00:04:18.713 SYMLINK libspdk_scheduler_gscheduler.so 00:04:18.713 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:18.713 SYMLINK libspdk_keyring_file.so 00:04:18.713 SO libspdk_accel_iaa.so.3.0 00:04:18.713 LIB libspdk_accel_dsa.a 00:04:18.713 LIB libspdk_blob_bdev.a 00:04:18.713 SYMLINK libspdk_scheduler_dynamic.so 00:04:18.713 SO libspdk_accel_dsa.so.5.0 00:04:18.713 SYMLINK libspdk_accel_error.so 00:04:18.713 SYMLINK libspdk_accel_ioat.so 00:04:18.713 SO libspdk_blob_bdev.so.11.0 00:04:18.713 SYMLINK libspdk_accel_iaa.so 00:04:18.713 SYMLINK libspdk_accel_dsa.so 00:04:18.713 SYMLINK libspdk_blob_bdev.so 00:04:18.972 LIB libspdk_vfu_device.a 00:04:18.972 SO libspdk_vfu_device.so.3.0 00:04:18.972 CC module/bdev/passthru/vbdev_passthru.o 00:04:18.972 CC module/bdev/malloc/bdev_malloc.o 00:04:18.972 CC module/bdev/delay/vbdev_delay.o 00:04:18.972 CC module/blobfs/bdev/blobfs_bdev.o 00:04:18.972 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:18.972 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:18.972 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:18.972 CC module/bdev/lvol/vbdev_lvol.o 00:04:18.972 CC module/bdev/aio/bdev_aio.o 00:04:18.972 CC module/bdev/gpt/gpt.o 00:04:18.972 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:18.972 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:18.972 CC module/bdev/null/bdev_null.o 00:04:18.972 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:18.972 CC module/bdev/split/vbdev_split.o 00:04:18.972 CC module/bdev/error/vbdev_error.o 00:04:18.972 CC module/bdev/error/vbdev_error_rpc.o 00:04:18.972 CC module/bdev/gpt/vbdev_gpt.o 00:04:18.972 CC module/bdev/aio/bdev_aio_rpc.o 00:04:18.972 CC module/bdev/null/bdev_null_rpc.o 00:04:18.972 CC module/bdev/raid/bdev_raid.o 
00:04:18.972 CC module/bdev/split/vbdev_split_rpc.o 00:04:18.972 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:18.972 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:18.972 CC module/bdev/raid/bdev_raid_rpc.o 00:04:18.972 CC module/bdev/iscsi/bdev_iscsi.o 00:04:18.972 CC module/bdev/ftl/bdev_ftl.o 00:04:18.972 CC module/bdev/raid/bdev_raid_sb.o 00:04:18.972 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:18.973 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:18.973 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:18.973 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:18.973 CC module/bdev/raid/raid0.o 00:04:18.973 CC module/bdev/raid/raid1.o 00:04:18.973 CC module/bdev/nvme/bdev_nvme.o 00:04:18.973 CC module/bdev/raid/concat.o 00:04:18.973 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:18.973 CC module/bdev/nvme/nvme_rpc.o 00:04:18.973 CC module/bdev/nvme/bdev_mdns_client.o 00:04:18.973 CC module/bdev/nvme/vbdev_opal.o 00:04:18.973 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:18.973 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:19.235 SYMLINK libspdk_vfu_device.so 00:04:19.236 LIB libspdk_sock_posix.a 00:04:19.236 SO libspdk_sock_posix.so.6.0 00:04:19.496 LIB libspdk_blobfs_bdev.a 00:04:19.496 SYMLINK libspdk_sock_posix.so 00:04:19.496 SO libspdk_blobfs_bdev.so.6.0 00:04:19.496 LIB libspdk_bdev_split.a 00:04:19.496 SYMLINK libspdk_blobfs_bdev.so 00:04:19.496 LIB libspdk_bdev_ftl.a 00:04:19.496 SO libspdk_bdev_split.so.6.0 00:04:19.496 SO libspdk_bdev_ftl.so.6.0 00:04:19.496 LIB libspdk_bdev_null.a 00:04:19.496 LIB libspdk_bdev_gpt.a 00:04:19.496 LIB libspdk_bdev_error.a 00:04:19.496 SO libspdk_bdev_gpt.so.6.0 00:04:19.496 SO libspdk_bdev_null.so.6.0 00:04:19.496 SYMLINK libspdk_bdev_split.so 00:04:19.496 SYMLINK libspdk_bdev_ftl.so 00:04:19.496 LIB libspdk_bdev_passthru.a 00:04:19.496 LIB libspdk_bdev_delay.a 00:04:19.496 SO libspdk_bdev_error.so.6.0 00:04:19.496 LIB libspdk_bdev_aio.a 00:04:19.496 SO libspdk_bdev_passthru.so.6.0 00:04:19.496 LIB libspdk_bdev_iscsi.a 00:04:19.496 
SO libspdk_bdev_delay.so.6.0 00:04:19.496 SYMLINK libspdk_bdev_null.so 00:04:19.496 SYMLINK libspdk_bdev_gpt.so 00:04:19.496 LIB libspdk_bdev_zone_block.a 00:04:19.496 SO libspdk_bdev_aio.so.6.0 00:04:19.496 LIB libspdk_bdev_malloc.a 00:04:19.496 SO libspdk_bdev_iscsi.so.6.0 00:04:19.755 SYMLINK libspdk_bdev_error.so 00:04:19.755 SO libspdk_bdev_zone_block.so.6.0 00:04:19.755 SO libspdk_bdev_malloc.so.6.0 00:04:19.755 SYMLINK libspdk_bdev_passthru.so 00:04:19.755 SYMLINK libspdk_bdev_delay.so 00:04:19.755 SYMLINK libspdk_bdev_aio.so 00:04:19.755 SYMLINK libspdk_bdev_iscsi.so 00:04:19.755 SYMLINK libspdk_bdev_zone_block.so 00:04:19.755 SYMLINK libspdk_bdev_malloc.so 00:04:19.755 LIB libspdk_bdev_lvol.a 00:04:19.755 LIB libspdk_bdev_virtio.a 00:04:19.755 SO libspdk_bdev_lvol.so.6.0 00:04:19.755 SO libspdk_bdev_virtio.so.6.0 00:04:19.755 SYMLINK libspdk_bdev_virtio.so 00:04:19.755 SYMLINK libspdk_bdev_lvol.so 00:04:20.321 LIB libspdk_bdev_raid.a 00:04:20.321 SO libspdk_bdev_raid.so.6.0 00:04:20.321 SYMLINK libspdk_bdev_raid.so 00:04:21.256 LIB libspdk_bdev_nvme.a 00:04:21.514 SO libspdk_bdev_nvme.so.7.0 00:04:21.514 SYMLINK libspdk_bdev_nvme.so 00:04:21.773 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:21.773 CC module/event/subsystems/sock/sock.o 00:04:21.773 CC module/event/subsystems/vmd/vmd.o 00:04:21.773 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:21.773 CC module/event/subsystems/iobuf/iobuf.o 00:04:21.773 CC module/event/subsystems/keyring/keyring.o 00:04:21.773 CC module/event/subsystems/scheduler/scheduler.o 00:04:21.773 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:21.773 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:22.031 LIB libspdk_event_sock.a 00:04:22.031 LIB libspdk_event_keyring.a 00:04:22.031 LIB libspdk_event_scheduler.a 00:04:22.031 LIB libspdk_event_vfu_tgt.a 00:04:22.031 LIB libspdk_event_vhost_blk.a 00:04:22.031 LIB libspdk_event_vmd.a 00:04:22.031 LIB libspdk_event_iobuf.a 00:04:22.031 SO libspdk_event_sock.so.5.0 
00:04:22.032 SO libspdk_event_keyring.so.1.0 00:04:22.032 SO libspdk_event_vfu_tgt.so.3.0 00:04:22.032 SO libspdk_event_scheduler.so.4.0 00:04:22.032 SO libspdk_event_vhost_blk.so.3.0 00:04:22.032 SO libspdk_event_vmd.so.6.0 00:04:22.032 SO libspdk_event_iobuf.so.3.0 00:04:22.032 SYMLINK libspdk_event_keyring.so 00:04:22.032 SYMLINK libspdk_event_sock.so 00:04:22.032 SYMLINK libspdk_event_vfu_tgt.so 00:04:22.032 SYMLINK libspdk_event_scheduler.so 00:04:22.032 SYMLINK libspdk_event_vhost_blk.so 00:04:22.032 SYMLINK libspdk_event_vmd.so 00:04:22.032 SYMLINK libspdk_event_iobuf.so 00:04:22.291 CC module/event/subsystems/accel/accel.o 00:04:22.291 LIB libspdk_event_accel.a 00:04:22.550 SO libspdk_event_accel.so.6.0 00:04:22.550 SYMLINK libspdk_event_accel.so 00:04:22.550 CC module/event/subsystems/bdev/bdev.o 00:04:22.811 LIB libspdk_event_bdev.a 00:04:22.811 SO libspdk_event_bdev.so.6.0 00:04:22.811 SYMLINK libspdk_event_bdev.so 00:04:23.069 CC module/event/subsystems/ublk/ublk.o 00:04:23.069 CC module/event/subsystems/scsi/scsi.o 00:04:23.069 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:23.069 CC module/event/subsystems/nbd/nbd.o 00:04:23.069 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:23.328 LIB libspdk_event_nbd.a 00:04:23.328 LIB libspdk_event_ublk.a 00:04:23.328 SO libspdk_event_nbd.so.6.0 00:04:23.328 LIB libspdk_event_scsi.a 00:04:23.328 SO libspdk_event_ublk.so.3.0 00:04:23.328 SO libspdk_event_scsi.so.6.0 00:04:23.328 SYMLINK libspdk_event_nbd.so 00:04:23.328 SYMLINK libspdk_event_ublk.so 00:04:23.328 LIB libspdk_event_nvmf.a 00:04:23.328 SYMLINK libspdk_event_scsi.so 00:04:23.328 SO libspdk_event_nvmf.so.6.0 00:04:23.328 SYMLINK libspdk_event_nvmf.so 00:04:23.587 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:23.587 CC module/event/subsystems/iscsi/iscsi.o 00:04:23.587 LIB libspdk_event_vhost_scsi.a 00:04:23.587 SO libspdk_event_vhost_scsi.so.3.0 00:04:23.587 LIB libspdk_event_iscsi.a 00:04:23.587 SO libspdk_event_iscsi.so.6.0 00:04:23.587 
SYMLINK libspdk_event_vhost_scsi.so 00:04:23.848 SYMLINK libspdk_event_iscsi.so 00:04:23.848 SO libspdk.so.6.0 00:04:23.848 SYMLINK libspdk.so 00:04:24.116 CC app/trace_record/trace_record.o 00:04:24.116 CXX app/trace/trace.o 00:04:24.116 CC app/spdk_lspci/spdk_lspci.o 00:04:24.116 CC app/spdk_nvme_perf/perf.o 00:04:24.116 CC test/rpc_client/rpc_client_test.o 00:04:24.116 TEST_HEADER include/spdk/accel.h 00:04:24.116 CC app/spdk_top/spdk_top.o 00:04:24.116 CC app/spdk_nvme_identify/identify.o 00:04:24.116 TEST_HEADER include/spdk/accel_module.h 00:04:24.116 CC app/spdk_nvme_discover/discovery_aer.o 00:04:24.116 TEST_HEADER include/spdk/assert.h 00:04:24.116 TEST_HEADER include/spdk/barrier.h 00:04:24.116 TEST_HEADER include/spdk/base64.h 00:04:24.116 TEST_HEADER include/spdk/bdev.h 00:04:24.116 TEST_HEADER include/spdk/bdev_module.h 00:04:24.116 TEST_HEADER include/spdk/bdev_zone.h 00:04:24.116 TEST_HEADER include/spdk/bit_array.h 00:04:24.116 TEST_HEADER include/spdk/bit_pool.h 00:04:24.116 TEST_HEADER include/spdk/blob_bdev.h 00:04:24.116 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:24.116 TEST_HEADER include/spdk/blobfs.h 00:04:24.116 TEST_HEADER include/spdk/blob.h 00:04:24.116 TEST_HEADER include/spdk/conf.h 00:04:24.116 TEST_HEADER include/spdk/config.h 00:04:24.116 CC app/spdk_dd/spdk_dd.o 00:04:24.116 TEST_HEADER include/spdk/cpuset.h 00:04:24.116 TEST_HEADER include/spdk/crc16.h 00:04:24.116 TEST_HEADER include/spdk/crc32.h 00:04:24.116 TEST_HEADER include/spdk/crc64.h 00:04:24.116 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:24.116 TEST_HEADER include/spdk/dif.h 00:04:24.116 TEST_HEADER include/spdk/dma.h 00:04:24.116 TEST_HEADER include/spdk/endian.h 00:04:24.116 CC app/iscsi_tgt/iscsi_tgt.o 00:04:24.116 CC app/nvmf_tgt/nvmf_main.o 00:04:24.116 TEST_HEADER include/spdk/env_dpdk.h 00:04:24.116 CC app/vhost/vhost.o 00:04:24.116 TEST_HEADER include/spdk/env.h 00:04:24.116 TEST_HEADER include/spdk/event.h 00:04:24.116 TEST_HEADER include/spdk/fd_group.h 
00:04:24.116 TEST_HEADER include/spdk/fd.h 00:04:24.116 TEST_HEADER include/spdk/file.h 00:04:24.116 TEST_HEADER include/spdk/ftl.h 00:04:24.116 TEST_HEADER include/spdk/gpt_spec.h 00:04:24.116 TEST_HEADER include/spdk/hexlify.h 00:04:24.116 TEST_HEADER include/spdk/histogram_data.h 00:04:24.116 TEST_HEADER include/spdk/idxd.h 00:04:24.116 TEST_HEADER include/spdk/idxd_spec.h 00:04:24.116 TEST_HEADER include/spdk/init.h 00:04:24.116 CC app/spdk_tgt/spdk_tgt.o 00:04:24.116 CC examples/ioat/perf/perf.o 00:04:24.116 TEST_HEADER include/spdk/ioat.h 00:04:24.116 CC test/env/vtophys/vtophys.o 00:04:24.116 CC examples/sock/hello_world/hello_sock.o 00:04:24.116 TEST_HEADER include/spdk/ioat_spec.h 00:04:24.116 CC test/thread/poller_perf/poller_perf.o 00:04:24.116 CC test/app/jsoncat/jsoncat.o 00:04:24.116 CC test/event/event_perf/event_perf.o 00:04:24.116 CC examples/idxd/perf/perf.o 00:04:24.116 CC test/app/stub/stub.o 00:04:24.116 TEST_HEADER include/spdk/iscsi_spec.h 00:04:24.116 CC test/app/histogram_perf/histogram_perf.o 00:04:24.116 CC examples/vmd/led/led.o 00:04:24.116 TEST_HEADER include/spdk/json.h 00:04:24.116 CC app/fio/nvme/fio_plugin.o 00:04:24.116 TEST_HEADER include/spdk/jsonrpc.h 00:04:24.116 CC examples/accel/perf/accel_perf.o 00:04:24.116 TEST_HEADER include/spdk/keyring.h 00:04:24.116 TEST_HEADER include/spdk/keyring_module.h 00:04:24.117 TEST_HEADER include/spdk/likely.h 00:04:24.117 CC examples/vmd/lsvmd/lsvmd.o 00:04:24.117 CC test/event/reactor/reactor.o 00:04:24.117 TEST_HEADER include/spdk/log.h 00:04:24.117 CC examples/ioat/verify/verify.o 00:04:24.117 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:24.117 TEST_HEADER include/spdk/lvol.h 00:04:24.117 CC test/nvme/aer/aer.o 00:04:24.377 TEST_HEADER include/spdk/memory.h 00:04:24.377 CC examples/nvme/hello_world/hello_world.o 00:04:24.377 TEST_HEADER include/spdk/mmio.h 00:04:24.377 CC examples/util/zipf/zipf.o 00:04:24.377 TEST_HEADER include/spdk/nbd.h 00:04:24.377 TEST_HEADER 
include/spdk/notify.h 00:04:24.377 TEST_HEADER include/spdk/nvme.h 00:04:24.377 TEST_HEADER include/spdk/nvme_intel.h 00:04:24.377 CC examples/blob/cli/blobcli.o 00:04:24.377 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:24.377 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:24.377 TEST_HEADER include/spdk/nvme_spec.h 00:04:24.377 TEST_HEADER include/spdk/nvme_zns.h 00:04:24.377 CC examples/blob/hello_world/hello_blob.o 00:04:24.377 CC examples/thread/thread/thread_ex.o 00:04:24.377 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:24.377 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:24.377 CC test/app/bdev_svc/bdev_svc.o 00:04:24.377 CC test/blobfs/mkfs/mkfs.o 00:04:24.377 CC test/accel/dif/dif.o 00:04:24.377 TEST_HEADER include/spdk/nvmf.h 00:04:24.377 CC examples/bdev/hello_world/hello_bdev.o 00:04:24.377 CC test/dma/test_dma/test_dma.o 00:04:24.377 TEST_HEADER include/spdk/nvmf_spec.h 00:04:24.377 CC examples/nvmf/nvmf/nvmf.o 00:04:24.377 CC test/bdev/bdevio/bdevio.o 00:04:24.377 TEST_HEADER include/spdk/nvmf_transport.h 00:04:24.377 TEST_HEADER include/spdk/opal.h 00:04:24.377 TEST_HEADER include/spdk/opal_spec.h 00:04:24.377 CC examples/bdev/bdevperf/bdevperf.o 00:04:24.377 TEST_HEADER include/spdk/pci_ids.h 00:04:24.377 TEST_HEADER include/spdk/pipe.h 00:04:24.377 TEST_HEADER include/spdk/queue.h 00:04:24.377 TEST_HEADER include/spdk/reduce.h 00:04:24.377 TEST_HEADER include/spdk/rpc.h 00:04:24.377 TEST_HEADER include/spdk/scheduler.h 00:04:24.377 TEST_HEADER include/spdk/scsi.h 00:04:24.377 TEST_HEADER include/spdk/scsi_spec.h 00:04:24.377 TEST_HEADER include/spdk/sock.h 00:04:24.377 CC test/env/mem_callbacks/mem_callbacks.o 00:04:24.377 TEST_HEADER include/spdk/stdinc.h 00:04:24.377 TEST_HEADER include/spdk/string.h 00:04:24.377 TEST_HEADER include/spdk/thread.h 00:04:24.377 TEST_HEADER include/spdk/trace.h 00:04:24.377 LINK spdk_lspci 00:04:24.377 TEST_HEADER include/spdk/trace_parser.h 00:04:24.377 TEST_HEADER include/spdk/tree.h 00:04:24.377 CC 
test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:24.377 TEST_HEADER include/spdk/ublk.h 00:04:24.377 TEST_HEADER include/spdk/util.h 00:04:24.377 TEST_HEADER include/spdk/uuid.h 00:04:24.377 TEST_HEADER include/spdk/version.h 00:04:24.377 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:24.377 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:24.377 CC test/lvol/esnap/esnap.o 00:04:24.377 TEST_HEADER include/spdk/vhost.h 00:04:24.377 TEST_HEADER include/spdk/vmd.h 00:04:24.377 TEST_HEADER include/spdk/xor.h 00:04:24.377 TEST_HEADER include/spdk/zipf.h 00:04:24.377 CXX test/cpp_headers/accel.o 00:04:24.377 LINK rpc_client_test 00:04:24.640 LINK spdk_nvme_discover 00:04:24.640 LINK lsvmd 00:04:24.640 LINK jsoncat 00:04:24.640 LINK vtophys 00:04:24.640 LINK event_perf 00:04:24.640 LINK poller_perf 00:04:24.640 LINK led 00:04:24.640 LINK interrupt_tgt 00:04:24.640 LINK histogram_perf 00:04:24.640 LINK reactor 00:04:24.640 LINK nvmf_tgt 00:04:24.640 LINK vhost 00:04:24.640 LINK spdk_trace_record 00:04:24.640 LINK zipf 00:04:24.640 LINK stub 00:04:24.640 LINK iscsi_tgt 00:04:24.640 LINK env_dpdk_post_init 00:04:24.640 LINK spdk_tgt 00:04:24.640 LINK ioat_perf 00:04:24.640 LINK verify 00:04:24.640 LINK bdev_svc 00:04:24.640 LINK hello_sock 00:04:24.640 LINK hello_world 00:04:24.640 LINK mkfs 00:04:24.640 LINK hello_blob 00:04:24.897 LINK hello_bdev 00:04:24.897 CXX test/cpp_headers/accel_module.o 00:04:24.897 LINK aer 00:04:24.897 CXX test/cpp_headers/assert.o 00:04:24.897 LINK thread 00:04:24.897 CC examples/nvme/reconnect/reconnect.o 00:04:24.897 LINK spdk_dd 00:04:24.897 LINK idxd_perf 00:04:24.897 CXX test/cpp_headers/barrier.o 00:04:24.897 LINK nvmf 00:04:24.897 LINK spdk_trace 00:04:24.897 CC test/env/memory/memory_ut.o 00:04:24.897 CXX test/cpp_headers/base64.o 00:04:24.897 LINK dif 00:04:24.897 CC test/event/reactor_perf/reactor_perf.o 00:04:24.897 CC test/nvme/reset/reset.o 00:04:25.158 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:25.158 CC app/fio/bdev/fio_plugin.o 
00:04:25.158 LINK test_dma 00:04:25.158 CC test/event/app_repeat/app_repeat.o 00:04:25.158 LINK bdevio 00:04:25.158 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:25.158 CC test/nvme/e2edp/nvme_dp.o 00:04:25.158 CC test/nvme/sgl/sgl.o 00:04:25.158 CC test/env/pci/pci_ut.o 00:04:25.158 CC test/event/scheduler/scheduler.o 00:04:25.158 CC test/nvme/overhead/overhead.o 00:04:25.158 LINK accel_perf 00:04:25.158 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:25.158 CC test/nvme/err_injection/err_injection.o 00:04:25.158 CC test/nvme/startup/startup.o 00:04:25.158 CC test/nvme/reserve/reserve.o 00:04:25.158 CC examples/nvme/arbitration/arbitration.o 00:04:25.158 LINK nvme_fuzz 00:04:25.158 CXX test/cpp_headers/bdev.o 00:04:25.158 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:25.158 CC test/nvme/simple_copy/simple_copy.o 00:04:25.158 LINK blobcli 00:04:25.158 CC test/nvme/connect_stress/connect_stress.o 00:04:25.158 CXX test/cpp_headers/bdev_module.o 00:04:25.158 CC examples/nvme/hotplug/hotplug.o 00:04:25.416 LINK spdk_nvme 00:04:25.416 CXX test/cpp_headers/bdev_zone.o 00:04:25.416 CC test/nvme/boot_partition/boot_partition.o 00:04:25.416 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:25.416 CXX test/cpp_headers/bit_array.o 00:04:25.416 CC test/nvme/compliance/nvme_compliance.o 00:04:25.416 CC test/nvme/fused_ordering/fused_ordering.o 00:04:25.416 CXX test/cpp_headers/bit_pool.o 00:04:25.416 LINK reactor_perf 00:04:25.416 LINK app_repeat 00:04:25.416 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:25.416 CC examples/nvme/abort/abort.o 00:04:25.416 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:25.416 CXX test/cpp_headers/blob_bdev.o 00:04:25.416 CXX test/cpp_headers/blobfs_bdev.o 00:04:25.416 CXX test/cpp_headers/blobfs.o 00:04:25.416 CXX test/cpp_headers/blob.o 00:04:25.416 CXX test/cpp_headers/conf.o 00:04:25.416 LINK reconnect 00:04:25.416 LINK startup 00:04:25.416 CXX test/cpp_headers/config.o 00:04:25.416 LINK reset 00:04:25.416 CC test/nvme/fdp/fdp.o 
00:04:25.416 LINK err_injection 00:04:25.680 LINK mem_callbacks 00:04:25.680 CXX test/cpp_headers/cpuset.o 00:04:25.680 CC test/nvme/cuse/cuse.o 00:04:25.680 LINK scheduler 00:04:25.680 LINK reserve 00:04:25.680 LINK sgl 00:04:25.680 LINK nvme_dp 00:04:25.680 LINK connect_stress 00:04:25.680 CXX test/cpp_headers/crc16.o 00:04:25.680 CXX test/cpp_headers/crc32.o 00:04:25.680 CXX test/cpp_headers/crc64.o 00:04:25.680 LINK boot_partition 00:04:25.680 LINK overhead 00:04:25.680 LINK simple_copy 00:04:25.680 CXX test/cpp_headers/dif.o 00:04:25.680 LINK spdk_nvme_perf 00:04:25.680 CXX test/cpp_headers/dma.o 00:04:25.680 CXX test/cpp_headers/endian.o 00:04:25.680 LINK spdk_nvme_identify 00:04:25.680 LINK spdk_top 00:04:25.680 LINK cmb_copy 00:04:25.680 CXX test/cpp_headers/env_dpdk.o 00:04:25.680 CXX test/cpp_headers/env.o 00:04:25.680 LINK hotplug 00:04:25.680 LINK fused_ordering 00:04:25.680 LINK pmr_persistence 00:04:25.680 LINK doorbell_aers 00:04:25.680 CXX test/cpp_headers/event.o 00:04:25.680 CXX test/cpp_headers/fd_group.o 00:04:25.680 LINK bdevperf 00:04:25.940 LINK arbitration 00:04:25.940 CXX test/cpp_headers/fd.o 00:04:25.940 CXX test/cpp_headers/file.o 00:04:25.940 CXX test/cpp_headers/ftl.o 00:04:25.940 CXX test/cpp_headers/gpt_spec.o 00:04:25.940 LINK pci_ut 00:04:25.940 CXX test/cpp_headers/hexlify.o 00:04:25.940 CXX test/cpp_headers/histogram_data.o 00:04:25.940 CXX test/cpp_headers/idxd.o 00:04:25.940 CXX test/cpp_headers/idxd_spec.o 00:04:25.940 CXX test/cpp_headers/init.o 00:04:25.940 CXX test/cpp_headers/ioat.o 00:04:25.940 CXX test/cpp_headers/ioat_spec.o 00:04:25.940 CXX test/cpp_headers/iscsi_spec.o 00:04:25.940 CXX test/cpp_headers/json.o 00:04:25.940 CXX test/cpp_headers/jsonrpc.o 00:04:25.940 CXX test/cpp_headers/keyring.o 00:04:25.940 CXX test/cpp_headers/keyring_module.o 00:04:25.940 LINK nvme_compliance 00:04:25.940 LINK spdk_bdev 00:04:25.940 CXX test/cpp_headers/likely.o 00:04:25.940 CXX test/cpp_headers/log.o 00:04:25.940 LINK nvme_manage 
00:04:25.940 CXX test/cpp_headers/lvol.o 00:04:25.940 CXX test/cpp_headers/memory.o 00:04:25.940 LINK vhost_fuzz 00:04:25.940 CXX test/cpp_headers/mmio.o 00:04:25.940 CXX test/cpp_headers/nbd.o 00:04:25.940 CXX test/cpp_headers/notify.o 00:04:25.940 CXX test/cpp_headers/nvme.o 00:04:26.201 CXX test/cpp_headers/nvme_intel.o 00:04:26.201 CXX test/cpp_headers/nvme_ocssd.o 00:04:26.201 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:26.201 CXX test/cpp_headers/nvme_spec.o 00:04:26.201 CXX test/cpp_headers/nvme_zns.o 00:04:26.201 CXX test/cpp_headers/nvmf_cmd.o 00:04:26.201 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:26.201 LINK abort 00:04:26.201 CXX test/cpp_headers/nvmf.o 00:04:26.201 CXX test/cpp_headers/nvmf_spec.o 00:04:26.201 CXX test/cpp_headers/nvmf_transport.o 00:04:26.201 CXX test/cpp_headers/opal.o 00:04:26.201 CXX test/cpp_headers/opal_spec.o 00:04:26.201 LINK fdp 00:04:26.201 CXX test/cpp_headers/pci_ids.o 00:04:26.201 CXX test/cpp_headers/pipe.o 00:04:26.201 CXX test/cpp_headers/queue.o 00:04:26.201 CXX test/cpp_headers/reduce.o 00:04:26.201 CXX test/cpp_headers/rpc.o 00:04:26.201 CXX test/cpp_headers/scheduler.o 00:04:26.201 CXX test/cpp_headers/scsi.o 00:04:26.201 CXX test/cpp_headers/scsi_spec.o 00:04:26.201 CXX test/cpp_headers/sock.o 00:04:26.201 CXX test/cpp_headers/stdinc.o 00:04:26.201 CXX test/cpp_headers/string.o 00:04:26.201 CXX test/cpp_headers/thread.o 00:04:26.201 CXX test/cpp_headers/trace.o 00:04:26.201 CXX test/cpp_headers/trace_parser.o 00:04:26.201 CXX test/cpp_headers/tree.o 00:04:26.460 CXX test/cpp_headers/ublk.o 00:04:26.460 CXX test/cpp_headers/util.o 00:04:26.460 CXX test/cpp_headers/uuid.o 00:04:26.460 CXX test/cpp_headers/version.o 00:04:26.460 CXX test/cpp_headers/vfio_user_pci.o 00:04:26.460 CXX test/cpp_headers/vfio_user_spec.o 00:04:26.460 CXX test/cpp_headers/vhost.o 00:04:26.460 CXX test/cpp_headers/vmd.o 00:04:26.460 CXX test/cpp_headers/xor.o 00:04:26.460 CXX test/cpp_headers/zipf.o 00:04:26.718 LINK memory_ut 00:04:27.283 
LINK cuse 00:04:27.541 LINK iscsi_fuzz 00:04:30.066 LINK esnap 00:04:30.066 00:04:30.066 real 0m39.692s 00:04:30.066 user 7m25.686s 00:04:30.066 sys 1m47.156s 00:04:30.066 05:00:07 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:04:30.066 05:00:07 -- common/autotest_common.sh@10 -- $ set +x 00:04:30.066 ************************************ 00:04:30.066 END TEST make 00:04:30.066 ************************************ 00:04:30.066 05:00:07 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:30.066 05:00:07 -- pm/common@30 -- $ signal_monitor_resources TERM 00:04:30.066 05:00:07 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:04:30.066 05:00:07 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:30.066 05:00:07 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:30.066 05:00:07 -- pm/common@45 -- $ pid=1655141 00:04:30.066 05:00:07 -- pm/common@52 -- $ sudo kill -TERM 1655141 00:04:30.324 05:00:07 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:30.324 05:00:07 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:30.324 05:00:07 -- pm/common@45 -- $ pid=1655140 00:04:30.324 05:00:07 -- pm/common@52 -- $ sudo kill -TERM 1655140 00:04:30.324 05:00:07 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:30.324 05:00:07 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:30.324 05:00:07 -- pm/common@45 -- $ pid=1655138 00:04:30.324 05:00:07 -- pm/common@52 -- $ sudo kill -TERM 1655138 00:04:30.324 05:00:07 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:30.324 05:00:07 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:30.324 05:00:07 -- pm/common@45 -- $ pid=1655139 00:04:30.324 05:00:07 -- pm/common@52 
-- $ sudo kill -TERM 1655139 00:04:30.324 05:00:07 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:30.324 05:00:07 -- nvmf/common.sh@7 -- # uname -s 00:04:30.324 05:00:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:30.324 05:00:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:30.324 05:00:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:30.324 05:00:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:30.324 05:00:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:30.324 05:00:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:30.324 05:00:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:30.324 05:00:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:30.324 05:00:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:30.324 05:00:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:30.324 05:00:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:30.324 05:00:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:30.324 05:00:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:30.324 05:00:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:30.324 05:00:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:30.324 05:00:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:30.324 05:00:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:30.324 05:00:07 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:30.324 05:00:07 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:30.324 05:00:07 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:30.324 05:00:07 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.324 05:00:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.324 05:00:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.324 05:00:07 -- paths/export.sh@5 -- # export PATH 00:04:30.324 05:00:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.324 05:00:07 -- nvmf/common.sh@47 -- # : 0 00:04:30.324 05:00:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:30.325 05:00:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:30.325 05:00:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:30.325 05:00:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:30.325 05:00:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:30.325 05:00:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:30.325 05:00:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:30.325 05:00:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:30.325 05:00:07 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:30.325 05:00:07 -- spdk/autotest.sh@32 -- # 
uname -s 00:04:30.325 05:00:07 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:30.325 05:00:07 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:30.325 05:00:07 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:30.325 05:00:07 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:30.325 05:00:07 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:30.325 05:00:07 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:30.325 05:00:07 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:30.325 05:00:07 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:30.325 05:00:07 -- spdk/autotest.sh@48 -- # udevadm_pid=1731812 00:04:30.325 05:00:07 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:30.325 05:00:07 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:30.325 05:00:07 -- pm/common@17 -- # local monitor 00:04:30.325 05:00:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:30.325 05:00:07 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=1731814 00:04:30.325 05:00:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:30.325 05:00:07 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=1731816 00:04:30.325 05:00:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:30.325 05:00:07 -- pm/common@21 -- # date +%s 00:04:30.325 05:00:07 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=1731819 00:04:30.325 05:00:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:30.325 05:00:07 -- pm/common@21 -- # date +%s 00:04:30.325 05:00:07 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=1731823 00:04:30.325 05:00:07 -- pm/common@26 -- # sleep 1 00:04:30.325 05:00:07 -- pm/common@21 -- # date +%s 00:04:30.325 
05:00:07 -- pm/common@21 -- # date +%s 00:04:30.325 05:00:07 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713927607 00:04:30.325 05:00:07 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713927607 00:04:30.325 05:00:07 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713927607 00:04:30.325 05:00:07 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713927607 00:04:30.325 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713927607_collect-vmstat.pm.log 00:04:30.325 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713927607_collect-bmc-pm.bmc.pm.log 00:04:30.325 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713927607_collect-cpu-load.pm.log 00:04:30.325 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713927607_collect-cpu-temp.pm.log 00:04:31.263 05:00:08 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:31.263 05:00:08 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:31.263 05:00:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:31.263 05:00:08 -- common/autotest_common.sh@10 -- # set +x 00:04:31.263 05:00:08 -- spdk/autotest.sh@59 -- # create_test_list 00:04:31.263 05:00:08 
-- common/autotest_common.sh@734 -- # xtrace_disable 00:04:31.263 05:00:08 -- common/autotest_common.sh@10 -- # set +x 00:04:31.521 05:00:08 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:31.521 05:00:08 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:31.521 05:00:08 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:31.521 05:00:08 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:31.521 05:00:08 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:31.521 05:00:08 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:31.521 05:00:08 -- common/autotest_common.sh@1441 -- # uname 00:04:31.521 05:00:08 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:04:31.521 05:00:08 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:31.521 05:00:08 -- common/autotest_common.sh@1461 -- # uname 00:04:31.521 05:00:08 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:04:31.521 05:00:08 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:31.521 05:00:08 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:31.521 05:00:08 -- spdk/autotest.sh@72 -- # hash lcov 00:04:31.521 05:00:08 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:31.521 05:00:08 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:31.521 --rc lcov_branch_coverage=1 00:04:31.521 --rc lcov_function_coverage=1 00:04:31.521 --rc genhtml_branch_coverage=1 00:04:31.521 --rc genhtml_function_coverage=1 00:04:31.521 --rc genhtml_legend=1 00:04:31.521 --rc geninfo_all_blocks=1 00:04:31.521 ' 00:04:31.521 05:00:08 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:31.521 --rc lcov_branch_coverage=1 00:04:31.521 --rc lcov_function_coverage=1 00:04:31.521 --rc genhtml_branch_coverage=1 00:04:31.521 --rc genhtml_function_coverage=1 00:04:31.521 --rc 
genhtml_legend=1 00:04:31.521 --rc geninfo_all_blocks=1 00:04:31.521 ' 00:04:31.521 05:00:08 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:31.521 --rc lcov_branch_coverage=1 00:04:31.521 --rc lcov_function_coverage=1 00:04:31.521 --rc genhtml_branch_coverage=1 00:04:31.521 --rc genhtml_function_coverage=1 00:04:31.521 --rc genhtml_legend=1 00:04:31.521 --rc geninfo_all_blocks=1 00:04:31.521 --no-external' 00:04:31.521 05:00:08 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:31.521 --rc lcov_branch_coverage=1 00:04:31.521 --rc lcov_function_coverage=1 00:04:31.521 --rc genhtml_branch_coverage=1 00:04:31.521 --rc genhtml_function_coverage=1 00:04:31.521 --rc genhtml_legend=1 00:04:31.521 --rc geninfo_all_blocks=1 00:04:31.521 --no-external' 00:04:31.521 05:00:08 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:31.521 lcov: LCOV version 1.14 00:04:31.521 05:00:08 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:41.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:41.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:41.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:41.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:41.517 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:41.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:46.775 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:46.775 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no 
functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:05:01.648 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:01.648 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:05:01.648 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:01.648 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno
00:05:01.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found
00:05:01.649 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno
00:05:01.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found
00:05:01.649 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno
00:05:01.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found
00:05:01.649 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno
00:05:01.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found
00:05:01.649 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno
00:05:01.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found
00:05:01.649 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno
00:05:01.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found
00:05:01.649 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno
00:05:01.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found
00:05:01.649 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno
00:05:01.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found
00:05:01.649 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno
00:05:01.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found
00:05:01.649 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno
00:05:01.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found
00:05:01.649 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno
00:05:01.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found
00:05:01.649 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno
00:05:01.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found
00:05:01.649 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno
00:05:01.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found
00:05:01.649 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno
00:05:01.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found
00:05:01.649 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno
00:05:01.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found
00:05:01.649 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno
00:05:01.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found
00:05:01.649 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno
00:05:01.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found
00:05:01.649 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno
00:05:01.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found
00:05:01.649 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno
00:05:01.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found
00:05:01.649 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno
00:05:01.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found
00:05:01.649 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno
00:05:01.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found
00:05:01.649 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno
00:05:01.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found
00:05:01.649 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno
00:05:01.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found
00:05:01.649 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno
00:05:01.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found
00:05:01.649 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno
00:05:01.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found
00:05:01.649 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno
00:05:01.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found
00:05:01.649 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno
00:05:01.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found
00:05:01.649 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno
00:05:01.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found
00:05:01.649 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno
00:05:01.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found
00:05:01.649 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno
00:05:01.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found
00:05:01.649 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno
00:05:01.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found
00:05:01.649 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno
00:05:01.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found
00:05:01.649 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno
00:05:01.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found
00:05:01.649 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno
00:05:01.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found
00:05:01.649 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno
00:05:01.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found
00:05:01.649 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno
00:05:01.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found
00:05:01.649 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno
00:05:04.177 05:00:40 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup
00:05:04.177 05:00:40 -- common/autotest_common.sh@710 -- # xtrace_disable
00:05:04.177 05:00:40 -- common/autotest_common.sh@10 -- # set +x
00:05:04.177 05:00:40 -- spdk/autotest.sh@91 -- # rm -f
00:05:04.177 05:00:40 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:05:05.111 0000:88:00.0 (8086 0a54): Already using the nvme driver
00:05:05.111 0000:00:04.7 (8086 0e27): Already using the ioatdma driver
00:05:05.111 0000:00:04.6 (8086 0e26): Already using the ioatdma driver
00:05:05.111 0000:00:04.5 (8086 0e25): Already using the ioatdma driver
00:05:05.111 0000:00:04.4 (8086 0e24): Already using the ioatdma driver
00:05:05.111 0000:00:04.3 (8086 0e23): Already using the ioatdma driver
00:05:05.111 0000:00:04.2 (8086 0e22): Already using the ioatdma driver
00:05:05.111 0000:00:04.1 (8086 0e21): Already using the ioatdma driver
00:05:05.111 0000:00:04.0 (8086 0e20): Already using the ioatdma driver
00:05:05.111 0000:80:04.7 (8086 0e27): Already using the ioatdma driver
00:05:05.111 0000:80:04.6 (8086 0e26): Already using the ioatdma driver
00:05:05.111 0000:80:04.5 (8086 0e25): Already using the ioatdma driver
00:05:05.111 0000:80:04.4 (8086 0e24): Already using the ioatdma driver
00:05:05.111 0000:80:04.3 (8086 0e23): Already using the ioatdma driver
00:05:05.111 0000:80:04.2 (8086 0e22): Already using the ioatdma driver
00:05:05.111 0000:80:04.1 (8086 0e21): Already using the ioatdma driver
00:05:05.111 0000:80:04.0 (8086 0e20): Already using the ioatdma driver
00:05:05.370 05:00:42 -- spdk/autotest.sh@96 -- # get_zoned_devs
00:05:05.370 05:00:42 -- common/autotest_common.sh@1655 -- # zoned_devs=()
00:05:05.370 05:00:42 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs
00:05:05.370 05:00:42 -- common/autotest_common.sh@1656 -- # local nvme bdf
00:05:05.370 05:00:42 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:05:05.370 05:00:42 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1
00:05:05.370 05:00:42 -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:05:05.370 05:00:42 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:05:05.370 05:00:42 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:05:05.370 05:00:42 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
00:05:05.370 05:00:42 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:05:05.370 05:00:42 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:05:05.370 05:00:42 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:05:05.370 05:00:42 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:05:05.370 05:00:42 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 No valid GPT data, bailing
00:05:05.370 05:00:42 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:05:05.370 05:00:42 -- scripts/common.sh@391 -- # pt=
00:05:05.370 05:00:42 -- scripts/common.sh@392 -- # return 1
00:05:05.370 05:00:42 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:05:05.370 1+0 records in
00:05:05.370 1+0 records out
00:05:05.370 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00243199 s, 431 MB/s
00:05:05.370 05:00:42 -- spdk/autotest.sh@118 -- # sync
00:05:05.370 05:00:42 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:05:05.370 05:00:42 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:05:05.370 05:00:42 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:05:07.269 05:00:44 -- spdk/autotest.sh@124 -- # uname -s
00:05:07.269 05:00:44 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:05:07.269 05:00:44 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:05:07.269 05:00:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:07.269 05:00:44 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:07.269 05:00:44 -- common/autotest_common.sh@10 -- # set +x
00:05:07.269 ************************************
00:05:07.269 START TEST setup.sh ************************************
00:05:07.269 05:00:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:05:07.269 * Looking for test storage...
00:05:07.269 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:05:07.269 05:00:44 -- setup/test-setup.sh@10 -- # uname -s
00:05:07.269 05:00:44 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:05:07.269 05:00:44 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:05:07.269 05:00:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:07.269 05:00:44 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:07.269 05:00:44 -- common/autotest_common.sh@10 -- # set +x
00:05:07.269 ************************************
00:05:07.269 START TEST acl ************************************
00:05:07.269 05:00:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:05:07.269 * Looking for test storage...
00:05:07.269 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:05:07.269 05:00:44 -- setup/acl.sh@10 -- # get_zoned_devs
00:05:07.269 05:00:44 -- common/autotest_common.sh@1655 -- # zoned_devs=()
00:05:07.269 05:00:44 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs
00:05:07.269 05:00:44 -- common/autotest_common.sh@1656 -- # local nvme bdf
00:05:07.269 05:00:44 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:05:07.269 05:00:44 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1
00:05:07.269 05:00:44 -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:05:07.269 05:00:44 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:05:07.269 05:00:44 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:05:07.269 05:00:44 -- setup/acl.sh@12 -- # devs=()
00:05:07.269 05:00:44 -- setup/acl.sh@12 -- # declare -a devs
00:05:07.269 05:00:44 -- setup/acl.sh@13 -- # drivers=()
00:05:07.269 05:00:44 -- setup/acl.sh@13 -- # declare -A drivers
00:05:07.269 05:00:44 -- setup/acl.sh@51 -- # setup reset
00:05:07.269 05:00:44 -- setup/common.sh@9 -- # [[ reset == output ]]
00:05:07.269 05:00:44 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:05:09.172 05:00:45 -- setup/acl.sh@52 -- # collect_setup_devs
00:05:09.172 05:00:45 -- setup/acl.sh@16 -- # local dev driver
00:05:09.172 05:00:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:09.172 05:00:45 -- setup/acl.sh@15 -- # setup output status
00:05:09.172 05:00:45 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:09.172 05:00:45 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:05:10.102 Hugepages
00:05:10.102 node hugesize free / total
00:05:10.102 05:00:47 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:05:10.102 05:00:47 -- setup/acl.sh@19 -- # continue
00:05:10.102 05:00:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:10.102 05:00:47 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:05:10.102 05:00:47 -- setup/acl.sh@19 -- # continue
00:05:10.102 05:00:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:10.102 05:00:47 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:05:10.102 05:00:47 -- setup/acl.sh@19 -- # continue
00:05:10.102 05:00:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:10.102
00:05:10.102 Type BDF Vendor Device NUMA Driver Device Block devices
00:05:10.102 05:00:47 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:05:10.102 05:00:47 -- setup/acl.sh@19 -- # continue
00:05:10.102 05:00:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:10.102 05:00:47 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]]
00:05:10.102 05:00:47 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:10.102 05:00:47 -- setup/acl.sh@20 -- # continue
00:05:10.102 05:00:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:10.102 05:00:47 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]]
00:05:10.102 05:00:47 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:10.102 05:00:47 -- setup/acl.sh@20 -- # continue
00:05:10.102 05:00:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:10.103 05:00:47 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]]
00:05:10.103 05:00:47 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:10.103 05:00:47 -- setup/acl.sh@20 -- # continue
00:05:10.103 05:00:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:10.103 05:00:47 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]]
00:05:10.103 05:00:47 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:10.103 05:00:47 -- setup/acl.sh@20 -- # continue
00:05:10.103 05:00:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:10.103 05:00:47 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]]
00:05:10.103 05:00:47 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:10.103 05:00:47 -- setup/acl.sh@20 -- # continue
00:05:10.103 05:00:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:10.103 05:00:47 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]]
00:05:10.103 05:00:47 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:10.103 05:00:47 -- setup/acl.sh@20 -- # continue
00:05:10.103 05:00:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:10.103 05:00:47 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]]
00:05:10.103 05:00:47 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:10.103 05:00:47 -- setup/acl.sh@20 -- # continue
00:05:10.103 05:00:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:10.103 05:00:47 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]]
00:05:10.103 05:00:47 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:10.103 05:00:47 -- setup/acl.sh@20 -- # continue
00:05:10.103 05:00:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:10.103 05:00:47 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]]
00:05:10.103 05:00:47 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:10.103 05:00:47 -- setup/acl.sh@20 -- # continue
00:05:10.103 05:00:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:10.103 05:00:47 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]]
00:05:10.103 05:00:47 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:10.103 05:00:47 -- setup/acl.sh@20 -- # continue
00:05:10.103 05:00:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:10.103 05:00:47 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]]
00:05:10.103 05:00:47 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:10.103 05:00:47 -- setup/acl.sh@20 -- # continue
00:05:10.103 05:00:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:10.103 05:00:47 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]]
00:05:10.103 05:00:47 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:10.103 05:00:47 -- setup/acl.sh@20 -- # continue
00:05:10.103 05:00:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:10.103 05:00:47 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]]
00:05:10.103 05:00:47 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:10.103 05:00:47 -- setup/acl.sh@20 -- # continue
00:05:10.103 05:00:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:10.103 05:00:47 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]]
00:05:10.103 05:00:47 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:10.103 05:00:47 -- setup/acl.sh@20 -- # continue
00:05:10.103 05:00:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:10.103 05:00:47 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]]
00:05:10.103 05:00:47 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:10.103 05:00:47 -- setup/acl.sh@20 -- # continue
00:05:10.103 05:00:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:10.103 05:00:47 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]]
00:05:10.103 05:00:47 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:10.103 05:00:47 -- setup/acl.sh@20 -- # continue
00:05:10.103 05:00:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:10.103 05:00:47 -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]]
00:05:10.103 05:00:47 -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:05:10.103 05:00:47 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]]
00:05:10.103 05:00:47 -- setup/acl.sh@22 -- # devs+=("$dev")
00:05:10.103 05:00:47 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:05:10.103 05:00:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:10.103 05:00:47 -- setup/acl.sh@24 -- # (( 1 > 0 ))
00:05:10.103 05:00:47 -- setup/acl.sh@54 -- # run_test denied denied
00:05:10.103 05:00:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:10.103 05:00:47 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:10.103 05:00:47 -- common/autotest_common.sh@10 -- # set +x
00:05:10.103 ************************************
00:05:10.103 START TEST denied ************************************
00:05:10.103 05:00:47 -- common/autotest_common.sh@1111 -- # denied
00:05:10.103 05:00:47 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0'
00:05:10.103 05:00:47 -- setup/acl.sh@38 -- # setup output config
00:05:10.103 05:00:47 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0'
00:05:10.103 05:00:47 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:10.103 05:00:47 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:05:12.003 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0
00:05:12.003 05:00:48 -- setup/acl.sh@40 -- # verify 0000:88:00.0
00:05:12.003 05:00:48 -- setup/acl.sh@28 -- # local dev driver
00:05:12.003 05:00:48 -- setup/acl.sh@30 -- # for dev in "$@"
00:05:12.003 05:00:48 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]]
00:05:12.003 05:00:48 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver
00:05:12.003 05:00:48 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:05:12.003 05:00:48 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:05:12.003 05:00:48 -- setup/acl.sh@41 -- # setup reset
00:05:12.003 05:00:48 -- setup/common.sh@9 -- # [[ reset == output ]]
00:05:12.003 05:00:48 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:05:13.903
00:05:13.903 real 0m3.805s
00:05:13.903 user 0m1.067s
00:05:13.903 sys 0m1.827s
05:00:51 -- common/autotest_common.sh@1112 -- # xtrace_disable
05:00:51 -- common/autotest_common.sh@10 -- # set +x
00:05:13.903 ************************************
00:05:13.903 END TEST denied ************************************
00:05:13.903 05:00:51 -- setup/acl.sh@55 -- # run_test allowed allowed
00:05:13.903 05:00:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:13.903 05:00:51 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:13.903 05:00:51 -- common/autotest_common.sh@10 -- # set +x
00:05:14.165 ************************************
00:05:14.166 START TEST allowed ************************************
00:05:14.166 05:00:51 -- common/autotest_common.sh@1111 -- # allowed
00:05:14.166 05:00:51 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0
00:05:14.166 05:00:51 -- setup/acl.sh@45 -- # setup output config
00:05:14.166 05:00:51 -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*'
00:05:14.166 05:00:51 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:14.166 05:00:51 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:05:16.694 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:05:16.694 05:00:53 -- setup/acl.sh@47 -- # verify
00:05:16.694 05:00:53 -- setup/acl.sh@28 -- # local dev driver
00:05:16.694 05:00:53 -- setup/acl.sh@48 -- # setup reset
00:05:16.694 05:00:53 -- setup/common.sh@9 -- # [[ reset == output ]]
00:05:16.694 05:00:53 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:05:18.065
00:05:18.065 real 0m3.771s
00:05:18.066 user 0m0.970s
00:05:18.066 sys 0m1.653s
00:05:18.066 05:00:54 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:18.066 05:00:54 -- common/autotest_common.sh@10 -- # set +x
00:05:18.066 ************************************
00:05:18.066 END TEST allowed ************************************
00:05:18.066
00:05:18.066 real 0m10.505s
00:05:18.066 user 0m3.160s
00:05:18.066 sys 0m5.325s
00:05:18.066 05:00:54 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:18.066 05:00:54 -- common/autotest_common.sh@10 -- # set +x
00:05:18.066 ************************************
00:05:18.066 END TEST acl ************************************
00:05:18.066 05:00:55 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh
00:05:18.066 05:00:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:18.066 05:00:55 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:18.066 05:00:55 -- common/autotest_common.sh@10 -- # set +x
00:05:18.066 ************************************
00:05:18.066 START TEST hugepages ************************************
00:05:18.066 05:00:55 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh
00:05:18.066 * Looking for test storage...
00:05:18.066 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:05:18.066 05:00:55 -- setup/hugepages.sh@10 -- # nodes_sys=()
00:05:18.066 05:00:55 -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:05:18.066 05:00:55 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:05:18.066 05:00:55 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:05:18.066 05:00:55 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:05:18.066 05:00:55 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:05:18.066 05:00:55 -- setup/common.sh@17 -- # local get=Hugepagesize
00:05:18.066 05:00:55 -- setup/common.sh@18 -- # local node=
00:05:18.066 05:00:55 -- setup/common.sh@19 -- # local var val
00:05:18.066 05:00:55 -- setup/common.sh@20 -- # local mem_f mem
00:05:18.066 05:00:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:18.066 05:00:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:18.066 05:00:55 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:18.066 05:00:55 -- setup/common.sh@28 -- # mapfile -t mem
00:05:18.066 05:00:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # IFS=': '
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # read -r var val _
00:05:18.066 05:00:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 37298404 kB' 'MemAvailable: 42397400 kB' 'Buffers: 3108 kB' 'Cached: 16228020 kB' 'SwapCached: 0 kB' 'Active: 12139252 kB' 'Inactive: 4630024 kB' 'Active(anon): 11573404 kB' 'Inactive(anon): 0 kB' 'Active(file): 565848 kB' 'Inactive(file): 4630024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541548 kB' 'Mapped: 198292 kB' 'Shmem: 11035256 kB' 'KReclaimable: 547560 kB' 'Slab: 939276 kB' 'SReclaimable: 547560 kB' 'SUnreclaim: 391716 kB' 'KernelStack: 13152 kB' 'PageTables: 9820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562304 kB' 'Committed_AS: 12755008 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196540 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 18229248 kB' 'DirectMap1G: 49283072 kB'
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # continue
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # IFS=': '
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # read -r var val _
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # continue
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # IFS=': '
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # read -r var val _
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # continue
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # IFS=': '
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # read -r var val _
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # continue
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # IFS=': '
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # read -r var val _
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # continue
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # IFS=': '
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # read -r var val _
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # continue
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # IFS=': '
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # read -r var val _
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # continue
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # IFS=': '
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # read -r var val _
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # continue
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # IFS=': '
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # read -r var val _
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # continue
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # IFS=': '
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # read -r var val _
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # continue
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # IFS=': '
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # read -r var val _
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # continue
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # IFS=': '
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # read -r var val _
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # continue
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # IFS=': '
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # read -r var val _
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # continue
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # IFS=': '
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # read -r var val _
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # continue
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # IFS=': '
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # read -r var val _
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # continue
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # IFS=': '
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # read -r var val _
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # continue
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # IFS=': '
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # read -r var val _
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # continue
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # IFS=': '
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # read -r var val _
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # continue
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # IFS=': '
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # read -r var val _
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # continue
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # IFS=': '
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # read -r var val _
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # continue
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # IFS=': '
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # read -r var val _
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # continue
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # IFS=': '
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # read -r var val _
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # continue
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # IFS=': '
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # read -r var val _
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # continue
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # IFS=': '
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # read -r var val _
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # continue
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # IFS=': '
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # read -r var val _
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # continue
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # IFS=': '
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # read -r var val _
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # continue
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # IFS=': '
00:05:18.066 05:00:55 -- setup/common.sh@31 -- # read -r var val _
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:18.066 05:00:55 -- setup/common.sh@32 -- # continue
00:05:18.067 05:00:55 -- setup/common.sh@31 -- # IFS=': '
00:05:18.067 05:00:55 -- setup/common.sh@31 -- # read -r var val _
00:05:18.067 05:00:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:18.067 05:00:55 -- setup/common.sh@32 -- # continue
00:05:18.067 05:00:55 -- setup/common.sh@31 -- # IFS=': '
00:05:18.067 05:00:55 -- setup/common.sh@31 -- # read -r var val _
00:05:18.067 05:00:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:18.067 05:00:55 -- setup/common.sh@32 -- # continue
00:05:18.067 05:00:55 -- setup/common.sh@31 -- # IFS=': '
00:05:18.067 05:00:55 -- setup/common.sh@31 -- # read -r var val _
00:05:18.067 05:00:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:18.067 05:00:55 -- setup/common.sh@32 -- # continue
00:05:18.067 05:00:55 -- setup/common.sh@31 -- # IFS=': '
00:05:18.067 05:00:55 -- setup/common.sh@31 -- # read -r var val _
00:05:18.067 05:00:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:18.067 05:00:55 -- setup/common.sh@32 -- # continue
00:05:18.067 05:00:55 -- setup/common.sh@31 -- # IFS=': '
00:05:18.067 05:00:55 -- setup/common.sh@31 -- # read -r var val _
00:05:18.067 05:00:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:18.067 05:00:55 -- setup/common.sh@32 -- # continue
00:05:18.067 05:00:55 -- setup/common.sh@31 -- # IFS=': '
00:05:18.067 05:00:55 -- setup/common.sh@31 -- # read -r var val _
00:05:18.067 05:00:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:18.067 05:00:55 -- setup/common.sh@32 -- # continue
00:05:18.067 05:00:55 -- setup/common.sh@31 -- # IFS=': '
00:05:18.067 05:00:55 -- setup/common.sh@31 -- # read -r var val _
00:05:18.067 05:00:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:18.067 05:00:55 -- setup/common.sh@32 -- # continue
00:05:18.067 05:00:55 -- setup/common.sh@31 -- # IFS=': '
00:05:18.067 05:00:55 -- setup/common.sh@31 -- # read -r var val _
00:05:18.067 05:00:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:18.067 05:00:55 -- setup/common.sh@32 -- # continue
00:05:18.067 05:00:55 -- setup/common.sh@31 -- # IFS=': '
00:05:18.067 05:00:55 -- setup/common.sh@31 -- # read -r var val _
00:05:18.067 05:00:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:18.067 05:00:55 -- setup/common.sh@32 -- # continue
00:05:18.067 05:00:55 -- setup/common.sh@31 -- # IFS=': '
00:05:18.067 05:00:55 -- setup/common.sh@31 -- # read -r var val _
00:05:18.067 05:00:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:18.067 05:00:55 -- setup/common.sh@32 -- # continue
00:05:18.067 05:00:55 -- setup/common.sh@31 -- # IFS=': '
00:05:18.067 05:00:55 -- setup/common.sh@31 -- # read -r var val _
00:05:18.067 05:00:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:18.067 05:00:55 -- setup/common.sh@32 -- # continue
00:05:18.067 05:00:55 -- setup/common.sh@31 -- # IFS=': '
00:05:18.067 05:00:55 -- setup/common.sh@31 -- # read -r var val _
00:05:18.067 05:00:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:18.067 05:00:55 -- setup/common.sh@32 -- # continue
00:05:18.067 05:00:55 -- setup/common.sh@31 -- # IFS=': '
00:05:18.067 05:00:55 -- setup/common.sh@31 -- # read -r var val _
00:05:18.067 05:00:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:18.067 05:00:55 -- setup/common.sh@32 -- # continue
00:05:18.067 05:00:55 -- setup/common.sh@31 -- # IFS=': '
00:05:18.067 05:00:55 -- setup/common.sh@31 -- # read -r var val _
00:05:18.067 05:00:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:18.067 05:00:55 -- setup/common.sh@32 -- # continue
00:05:18.067 05:00:55 -- setup/common.sh@31 -- # IFS=': '
00:05:18.067 05:00:55 -- setup/common.sh@31 --
# read -r var val _ 00:05:18.067 05:00:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.067 05:00:55 -- setup/common.sh@32 -- # continue 00:05:18.067 05:00:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.067 05:00:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.067 05:00:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.067 05:00:55 -- setup/common.sh@32 -- # continue 00:05:18.067 05:00:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.067 05:00:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.067 05:00:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.067 05:00:55 -- setup/common.sh@32 -- # continue 00:05:18.067 05:00:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.067 05:00:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.067 05:00:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.067 05:00:55 -- setup/common.sh@32 -- # continue 00:05:18.067 05:00:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.067 05:00:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.067 05:00:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.067 05:00:55 -- setup/common.sh@32 -- # continue 00:05:18.067 05:00:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.067 05:00:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.067 05:00:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.067 05:00:55 -- setup/common.sh@32 -- # continue 00:05:18.067 05:00:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.067 05:00:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.067 05:00:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.067 05:00:55 -- setup/common.sh@32 -- # continue 00:05:18.067 05:00:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.067 05:00:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.067 05:00:55 -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.067 05:00:55 -- setup/common.sh@32 -- # continue 00:05:18.067 05:00:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.067 05:00:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.067 05:00:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.067 05:00:55 -- setup/common.sh@32 -- # continue 00:05:18.067 05:00:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.067 05:00:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.067 05:00:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.067 05:00:55 -- setup/common.sh@32 -- # continue 00:05:18.067 05:00:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.067 05:00:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.067 05:00:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.067 05:00:55 -- setup/common.sh@32 -- # continue 00:05:18.067 05:00:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.067 05:00:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.067 05:00:55 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.067 05:00:55 -- setup/common.sh@33 -- # echo 2048 00:05:18.067 05:00:55 -- setup/common.sh@33 -- # return 0 00:05:18.067 05:00:55 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:18.067 05:00:55 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:18.067 05:00:55 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:18.067 05:00:55 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:18.067 05:00:55 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:18.067 05:00:55 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:18.067 05:00:55 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:18.067 05:00:55 -- setup/hugepages.sh@207 -- # get_nodes 00:05:18.067 05:00:55 -- setup/hugepages.sh@27 
-- # local node 00:05:18.067 05:00:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:18.067 05:00:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:18.067 05:00:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:18.067 05:00:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:18.067 05:00:55 -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:18.067 05:00:55 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:18.067 05:00:55 -- setup/hugepages.sh@208 -- # clear_hp 00:05:18.067 05:00:55 -- setup/hugepages.sh@37 -- # local node hp 00:05:18.067 05:00:55 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:18.067 05:00:55 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:18.067 05:00:55 -- setup/hugepages.sh@41 -- # echo 0 00:05:18.067 05:00:55 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:18.067 05:00:55 -- setup/hugepages.sh@41 -- # echo 0 00:05:18.067 05:00:55 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:18.067 05:00:55 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:18.067 05:00:55 -- setup/hugepages.sh@41 -- # echo 0 00:05:18.067 05:00:55 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:18.067 05:00:55 -- setup/hugepages.sh@41 -- # echo 0 00:05:18.067 05:00:55 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:18.067 05:00:55 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:18.067 05:00:55 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:18.067 05:00:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:18.067 05:00:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.067 05:00:55 -- common/autotest_common.sh@10 -- # set +x 00:05:18.067 
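The `clear_hp` trace above walks every NUMA node and every supported page size, writing 0 to each per-node `nr_hugepages` knob before the test allocates its own pages. A minimal sketch of that pattern, assuming the standard Linux sysfs layout (`/sys/devices/system/node/node*/hugepages/hugepages-*`) — this is an illustration, not the actual SPDK `setup/hugepages.sh`; the configurable root parameter is added here so the sketch can be dry-run outside a real sysfs:

```shell
# Sketch of the clear_hp pattern seen in the trace (hypothetical helper,
# not SPDK's implementation). Releases all reserved huge pages by writing
# 0 to every node's per-size nr_hugepages file.
clear_hp_sketch() {
    local sysfs_root=${1:-/sys/devices/system/node}   # real runs use the default
    local node hp
    for node in "$sysfs_root"/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            # Writing 0 asks the kernel to free this node's pool of this page size
            echo 0 > "$hp/nr_hugepages"
        done
    done
}
```

In the trace this corresponds to the repeated `hugepages.sh@41 -- # echo 0` lines, once per `(node, page size)` pair across the two nodes reported by `get_nodes`.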
************************************ 00:05:18.067 START TEST default_setup 00:05:18.067 ************************************ 00:05:18.067 05:00:55 -- common/autotest_common.sh@1111 -- # default_setup 00:05:18.067 05:00:55 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:18.067 05:00:55 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:18.067 05:00:55 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:18.067 05:00:55 -- setup/hugepages.sh@51 -- # shift 00:05:18.067 05:00:55 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:18.067 05:00:55 -- setup/hugepages.sh@52 -- # local node_ids 00:05:18.067 05:00:55 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:18.067 05:00:55 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:18.067 05:00:55 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:18.067 05:00:55 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:18.067 05:00:55 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:18.067 05:00:55 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:18.067 05:00:55 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:18.067 05:00:55 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:18.068 05:00:55 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:18.068 05:00:55 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:18.068 05:00:55 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:18.068 05:00:55 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:18.068 05:00:55 -- setup/hugepages.sh@73 -- # return 0 00:05:18.068 05:00:55 -- setup/hugepages.sh@137 -- # setup output 00:05:18.068 05:00:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:18.068 05:00:55 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:19.442 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:19.442 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:19.442 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 
00:05:19.442 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:19.442 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:19.442 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:19.442 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:19.442 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:19.443 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:19.443 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:19.443 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:19.443 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:19.443 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:19.443 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:19.443 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:19.443 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:20.379 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:20.379 05:00:57 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:20.379 05:00:57 -- setup/hugepages.sh@89 -- # local node 00:05:20.379 05:00:57 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:20.379 05:00:57 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:20.379 05:00:57 -- setup/hugepages.sh@92 -- # local surp 00:05:20.379 05:00:57 -- setup/hugepages.sh@93 -- # local resv 00:05:20.379 05:00:57 -- setup/hugepages.sh@94 -- # local anon 00:05:20.379 05:00:57 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:20.379 05:00:57 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:20.379 05:00:57 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:20.379 05:00:57 -- setup/common.sh@18 -- # local node= 00:05:20.379 05:00:57 -- setup/common.sh@19 -- # local var val 00:05:20.379 05:00:57 -- setup/common.sh@20 -- # local mem_f mem 00:05:20.379 05:00:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.379 05:00:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.379 05:00:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.379 05:00:57 -- setup/common.sh@28 -- # 
mapfile -t mem 00:05:20.379 05:00:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.379 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.379 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.379 05:00:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39322172 kB' 'MemAvailable: 44421088 kB' 'Buffers: 3108 kB' 'Cached: 16228108 kB' 'SwapCached: 0 kB' 'Active: 12150712 kB' 'Inactive: 4630024 kB' 'Active(anon): 11584864 kB' 'Inactive(anon): 0 kB' 'Active(file): 565848 kB' 'Inactive(file): 4630024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552864 kB' 'Mapped: 197976 kB' 'Shmem: 11035344 kB' 'KReclaimable: 547480 kB' 'Slab: 938756 kB' 'SReclaimable: 547480 kB' 'SUnreclaim: 391276 kB' 'KernelStack: 12864 kB' 'PageTables: 9200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12765756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196760 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 18229248 kB' 'DirectMap1G: 49283072 kB' 00:05:20.379 05:00:57 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.379 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.379 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.379 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.379 05:00:57 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.379 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.379 
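The long run of `[[ Field == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] ... continue` lines above is `get_meminfo` scanning `/proc/meminfo` with `IFS=': '` until it reaches the requested key, then echoing that key's value. A minimal, self-contained sketch of that parsing pattern, assuming only the global meminfo file (the real `setup/common.sh` additionally handles the per-node `node*/meminfo` files and the `Node N` prefix stripping visible in the `mem=("${mem[@]#Node +([0-9]) }")` line):

```shell
# Hypothetical reimplementation of the get_meminfo scan for illustration.
# IFS=': ' splits each "Key:   Value kB" line into key, value, and unit;
# lines that don't match the requested key are skipped, mirroring the
# repeated "continue" entries in the trace.
get_meminfo_sketch() {
    local get=$1 mem_f=${2:-/proc/meminfo}
    local var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"      # value only, unit discarded (e.g. 2048 for Hugepagesize)
            return 0
        fi
    done < "$mem_f"
    return 1                 # key not present
}
```

With `Hugepagesize` as the key this yields the `echo 2048` / `return 0` pair seen at the end of the scan, which the caller stores as `default_hugepages=2048`.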
05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.379 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.379 05:00:57 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.379 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.379 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.379 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 
00:05:20.380 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.380 05:00:57 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.380 05:00:57 -- 
setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:20.380 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.380 05:00:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.380 05:00:57 -- setup/common.sh@33 -- # echo 0 00:05:20.380 05:00:57 -- setup/common.sh@33 -- # return 0 00:05:20.380 05:00:57 -- setup/hugepages.sh@97 -- # anon=0 00:05:20.380 05:00:57 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:20.380 05:00:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:20.380 05:00:57 -- setup/common.sh@18 -- # local node= 00:05:20.380 05:00:57 -- setup/common.sh@19 -- # local var val 00:05:20.380 05:00:57 -- setup/common.sh@20 -- # local mem_f mem 00:05:20.380 05:00:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.380 05:00:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.380 05:00:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.380 05:00:57 -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.380 05:00:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.380 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.381 05:00:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39323436 kB' 'MemAvailable: 44422352 kB' 
'Buffers: 3108 kB' 'Cached: 16228112 kB' 'SwapCached: 0 kB' 'Active: 12150816 kB' 'Inactive: 4630024 kB' 'Active(anon): 11584968 kB' 'Inactive(anon): 0 kB' 'Active(file): 565848 kB' 'Inactive(file): 4630024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552956 kB' 'Mapped: 197952 kB' 'Shmem: 11035348 kB' 'KReclaimable: 547480 kB' 'Slab: 938740 kB' 'SReclaimable: 547480 kB' 'SUnreclaim: 391260 kB' 'KernelStack: 12832 kB' 'PageTables: 9084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12765768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196712 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 18229248 kB' 'DirectMap1G: 49283072 kB' 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.381 05:00:57 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # read -r var val 
_ 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.381 05:00:57 -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.381 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.381 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.382 05:00:57 
-- setup/common.sh@32 -- # continue 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.382 05:00:57 -- 
setup/common.sh@32 -- # continue 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.382 05:00:57 -- setup/common.sh@33 -- # echo 0 00:05:20.382 05:00:57 -- setup/common.sh@33 -- # return 0 00:05:20.382 05:00:57 -- setup/hugepages.sh@99 -- # surp=0 00:05:20.382 05:00:57 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:20.382 05:00:57 -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:05:20.382 05:00:57 -- setup/common.sh@18 -- # local node= 00:05:20.382 05:00:57 -- setup/common.sh@19 -- # local var val 00:05:20.382 05:00:57 -- setup/common.sh@20 -- # local mem_f mem 00:05:20.382 05:00:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.382 05:00:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.382 05:00:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.382 05:00:57 -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.382 05:00:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.382 05:00:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39323832 kB' 'MemAvailable: 44422748 kB' 'Buffers: 3108 kB' 'Cached: 16228124 kB' 'SwapCached: 0 kB' 'Active: 12150668 kB' 'Inactive: 4630024 kB' 'Active(anon): 11584820 kB' 'Inactive(anon): 0 kB' 'Active(file): 565848 kB' 'Inactive(file): 4630024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552740 kB' 'Mapped: 197876 kB' 'Shmem: 11035360 kB' 'KReclaimable: 547480 kB' 'Slab: 938732 kB' 'SReclaimable: 547480 kB' 'SUnreclaim: 391252 kB' 'KernelStack: 12832 kB' 'PageTables: 9024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12765784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196712 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 
18229248 kB' 'DirectMap1G: 49283072 kB' 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 
00:05:20.382 05:00:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.382 05:00:57 -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.382 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.382 05:00:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:20.383 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.383 05:00:57 -- 
setup/common.sh@32 -- # continue 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.383 05:00:57 -- 
setup/common.sh@32 -- # continue 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.383 05:00:57 -- setup/common.sh@32 -- 
# continue 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.383 05:00:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.383 05:00:57 -- setup/common.sh@33 -- # echo 0 00:05:20.383 05:00:57 -- setup/common.sh@33 -- # return 0 00:05:20.383 05:00:57 -- setup/hugepages.sh@100 -- # resv=0 00:05:20.383 05:00:57 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:20.383 nr_hugepages=1024 00:05:20.383 05:00:57 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:20.383 resv_hugepages=0 00:05:20.383 05:00:57 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:20.383 surplus_hugepages=0 00:05:20.383 05:00:57 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:20.383 anon_hugepages=0 00:05:20.383 05:00:57 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:20.383 05:00:57 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:20.383 05:00:57 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:20.383 05:00:57 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:20.383 05:00:57 -- setup/common.sh@18 -- # local node= 00:05:20.383 05:00:57 -- setup/common.sh@19 -- # local var val 00:05:20.383 05:00:57 -- setup/common.sh@20 -- # local mem_f mem 00:05:20.383 05:00:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.383 05:00:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.383 05:00:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.383 05:00:57 -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.383 05:00:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.383 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.384 05:00:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39323832 kB' 'MemAvailable: 44422748 kB' 
'Buffers: 3108 kB' 'Cached: 16228136 kB' 'SwapCached: 0 kB' 'Active: 12150484 kB' 'Inactive: 4630024 kB' 'Active(anon): 11584636 kB' 'Inactive(anon): 0 kB' 'Active(file): 565848 kB' 'Inactive(file): 4630024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552560 kB' 'Mapped: 197876 kB' 'Shmem: 11035372 kB' 'KReclaimable: 547480 kB' 'Slab: 938732 kB' 'SReclaimable: 547480 kB' 'SUnreclaim: 391252 kB' 'KernelStack: 12848 kB' 'PageTables: 9076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12765796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196712 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 18229248 kB' 'DirectMap1G: 49283072 kB' 00:05:20.384 05:00:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.384 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.384 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.384 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.384 05:00:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.384 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.384 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.384 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.384 05:00:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.384 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.384 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.384 05:00:57 
-- setup/common.sh@31 -- # read -r var val _ 00:05:20.384 05:00:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.384 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.384 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.384 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.384 05:00:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.384 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.384 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.384 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.384 05:00:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.384 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.384 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.384 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.384 05:00:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.384 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.384 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.384 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.384 05:00:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.384 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.384 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.384 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.384 05:00:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.384 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.384 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.384 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.384 05:00:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.384 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.384 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.384 05:00:57 -- setup/common.sh@31 -- 
# read -r var val _ 00:05:20.384 05:00:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.384 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.384 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.384 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.384 05:00:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.384 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.384 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.384 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.384 05:00:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.384 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.384 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.384 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.384 05:00:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.384 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.384 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.384 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.384 05:00:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.384 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.384 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.384 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.384 05:00:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.384 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.384 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.384 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.384 05:00:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.384 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.384 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.384 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 
00:05:20.384 05:00:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.384 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.384 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.384 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.384 05:00:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.384 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.384 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.384 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.384 05:00:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.384 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.384 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.384 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.384 05:00:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.384 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.384 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.384 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.384 05:00:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.384 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.384 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.384 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.384 05:00:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.384 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.384 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.644 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.644 05:00:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.644 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.644 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.644 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.644 05:00:57 -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.644 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.644 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.644 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.644 05:00:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.644 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.644 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.644 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.644 05:00:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.644 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.644 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.644 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.644 05:00:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.644 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.644 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.644 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.644 05:00:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.644 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.644 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.644 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.644 05:00:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.644 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.644 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.644 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.644 05:00:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.644 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.644 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.644 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.644 05:00:57 -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.644 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.644 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.644 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.644 05:00:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.644 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.644 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.644 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.644 05:00:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.644 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.644 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.644 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.644 05:00:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.644 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.644 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.644 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.644 05:00:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.644 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.644 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.644 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.645 05:00:57 -- 
setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.645 05:00:57 
-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.645 05:00:57 -- setup/common.sh@33 -- # echo 1024 00:05:20.645 05:00:57 -- setup/common.sh@33 -- # return 0 00:05:20.645 05:00:57 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:20.645 05:00:57 -- setup/hugepages.sh@112 -- # get_nodes 00:05:20.645 05:00:57 -- setup/hugepages.sh@27 -- # local node 00:05:20.645 05:00:57 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:20.645 05:00:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:20.645 05:00:57 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:20.645 05:00:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:20.645 05:00:57 -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:20.645 05:00:57 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:20.645 05:00:57 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:20.645 05:00:57 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:20.645 05:00:57 -- setup/hugepages.sh@117 -- # get_meminfo 
HugePages_Surp 0 00:05:20.645 05:00:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:20.645 05:00:57 -- setup/common.sh@18 -- # local node=0 00:05:20.645 05:00:57 -- setup/common.sh@19 -- # local var val 00:05:20.645 05:00:57 -- setup/common.sh@20 -- # local mem_f mem 00:05:20.645 05:00:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.645 05:00:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:20.645 05:00:57 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:20.645 05:00:57 -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.645 05:00:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.645 05:00:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 22628856 kB' 'MemUsed: 10201028 kB' 'SwapCached: 0 kB' 'Active: 6560504 kB' 'Inactive: 252100 kB' 'Active(anon): 6207616 kB' 'Inactive(anon): 0 kB' 'Active(file): 352888 kB' 'Inactive(file): 252100 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6643308 kB' 'Mapped: 72596 kB' 'AnonPages: 172452 kB' 'Shmem: 6038320 kB' 'KernelStack: 7336 kB' 'PageTables: 5028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 281008 kB' 'Slab: 504088 kB' 'SReclaimable: 281008 kB' 'SUnreclaim: 223080 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.645 05:00:57 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # [[ 
Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.645 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.645 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.646 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.646 05:00:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.646 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.646 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.646 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.646 05:00:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.646 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.646 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.646 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.646 05:00:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.646 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.646 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.646 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.646 05:00:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:20.646 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.646 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.646 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.646 05:00:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.646 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.646 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.646 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.646 05:00:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.646 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.646 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.646 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.646 05:00:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.646 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.646 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.646 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.646 05:00:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.646 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.646 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.646 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.646 05:00:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.646 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.646 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.646 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.646 05:00:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.646 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.646 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.646 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.646 05:00:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.646 05:00:57 -- 
setup/common.sh@32 -- # continue 00:05:20.646 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.646 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.646 05:00:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.646 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.646 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.646 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.646 05:00:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.646 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.646 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.646 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.646 05:00:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.646 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.646 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.646 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.646 05:00:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.646 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.646 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.646 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.646 05:00:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.646 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.646 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.646 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.646 05:00:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.646 05:00:57 -- setup/common.sh@32 -- # continue 00:05:20.646 05:00:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.646 05:00:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.646 05:00:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.646 05:00:57 -- 
setup/common.sh@33 -- # echo 0 00:05:20.646 05:00:57 -- setup/common.sh@33 -- # return 0 00:05:20.646 05:00:57 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:20.646 05:00:57 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:20.646 05:00:57 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:20.646 05:00:57 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:20.646 05:00:57 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:20.646 node0=1024 expecting 1024 00:05:20.646 05:00:57 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:20.646 00:05:20.646 real 0m2.392s 00:05:20.646 user 0m0.619s 00:05:20.646 sys 0m0.897s 00:05:20.646 05:00:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:20.646 05:00:57 -- common/autotest_common.sh@10 -- # set +x 00:05:20.646 ************************************ 00:05:20.646 END TEST default_setup 00:05:20.646 ************************************ 00:05:20.646 05:00:57 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:20.646 05:00:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:20.646 05:00:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:20.646 05:00:57 -- common/autotest_common.sh@10 -- # set +x 00:05:20.646 ************************************ 00:05:20.646 START TEST per_node_1G_alloc 00:05:20.646 ************************************ 00:05:20.646 05:00:57 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc 00:05:20.646 05:00:57 -- setup/hugepages.sh@143 -- # local IFS=, 00:05:20.646 05:00:57 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:05:20.646 05:00:57 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:20.646 05:00:57 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:05:20.646 05:00:57 -- setup/hugepages.sh@51 -- # shift 00:05:20.646 05:00:57 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:05:20.646 05:00:57 -- setup/hugepages.sh@52 -- # 
local node_ids 00:05:20.646 05:00:57 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:20.646 05:00:57 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:20.646 05:00:57 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:05:20.646 05:00:57 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:05:20.646 05:00:57 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:20.646 05:00:57 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:20.646 05:00:57 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:20.646 05:00:57 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:20.646 05:00:57 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:20.646 05:00:57 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:05:20.646 05:00:57 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:20.646 05:00:57 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:20.646 05:00:57 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:20.646 05:00:57 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:20.646 05:00:57 -- setup/hugepages.sh@73 -- # return 0 00:05:20.646 05:00:57 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:20.646 05:00:57 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:05:20.646 05:00:57 -- setup/hugepages.sh@146 -- # setup output 00:05:20.646 05:00:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:20.646 05:00:57 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:22.033 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:22.033 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:22.033 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:22.033 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:22.033 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:22.033 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:22.033 0000:00:04.2 (8086 0e22): 
Already using the vfio-pci driver 00:05:22.033 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:22.033 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:22.033 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:22.033 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:22.033 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:22.033 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:22.033 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:22.033 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:22.033 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:22.033 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:22.033 05:00:59 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:05:22.033 05:00:59 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:22.033 05:00:59 -- setup/hugepages.sh@89 -- # local node 00:05:22.033 05:00:59 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:22.033 05:00:59 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:22.033 05:00:59 -- setup/hugepages.sh@92 -- # local surp 00:05:22.033 05:00:59 -- setup/hugepages.sh@93 -- # local resv 00:05:22.033 05:00:59 -- setup/hugepages.sh@94 -- # local anon 00:05:22.033 05:00:59 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:22.033 05:00:59 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:22.033 05:00:59 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:22.033 05:00:59 -- setup/common.sh@18 -- # local node= 00:05:22.033 05:00:59 -- setup/common.sh@19 -- # local var val 00:05:22.033 05:00:59 -- setup/common.sh@20 -- # local mem_f mem 00:05:22.033 05:00:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.033 05:00:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:22.033 05:00:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:22.033 05:00:59 -- 
setup/common.sh@28 -- # mapfile -t mem 00:05:22.033 05:00:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.033 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.033 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.034 05:00:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39320724 kB' 'MemAvailable: 44419640 kB' 'Buffers: 3108 kB' 'Cached: 16228196 kB' 'SwapCached: 0 kB' 'Active: 12151384 kB' 'Inactive: 4630024 kB' 'Active(anon): 11585536 kB' 'Inactive(anon): 0 kB' 'Active(file): 565848 kB' 'Inactive(file): 4630024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553400 kB' 'Mapped: 197928 kB' 'Shmem: 11035432 kB' 'KReclaimable: 547480 kB' 'Slab: 938816 kB' 'SReclaimable: 547480 kB' 'SUnreclaim: 391336 kB' 'KernelStack: 12880 kB' 'PageTables: 9232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12768276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196696 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 18229248 kB' 'DirectMap1G: 49283072 kB' 00:05:22.034 05:00:59 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.034 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.034 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.034 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.034 05:00:59 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.034 05:00:59 -- setup/common.sh@32 -- # 
continue 00:05:22.034 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.034 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.034 05:00:59 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.034 05:00:59 -- setup/common.sh@32 -- # continue [... identical IFS/read/continue iterations for the remaining /proc/meminfo keys (Buffers through HardwareCorrupted) elided ...] 00:05:22.035 05:00:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.035 05:00:59 -- setup/common.sh@33 -- # echo 0 00:05:22.035 05:00:59 -- setup/common.sh@33 -- # return 0 00:05:22.035 05:00:59 -- setup/hugepages.sh@97 -- # anon=0 00:05:22.035 05:00:59 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:22.035 05:00:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:22.035 05:00:59 -- setup/common.sh@18 -- # local node= 00:05:22.035 05:00:59 -- setup/common.sh@19 -- # local var val 00:05:22.035 05:00:59 -- setup/common.sh@20 -- # local mem_f mem 00:05:22.035 05:00:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.035 05:00:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:22.035 05:00:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:22.035 05:00:59 -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.035 05:00:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.035 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.035 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.035 05:00:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39324992 kB' 'MemAvailable: 44423908 kB'
'Buffers: 3108 kB' 'Cached: 16228200 kB' 'SwapCached: 0 kB' 'Active: 12151172 kB' 'Inactive: 4630024 kB' 'Active(anon): 11585324 kB' 'Inactive(anon): 0 kB' 'Active(file): 565848 kB' 'Inactive(file): 4630024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553136 kB' 'Mapped: 197928 kB' 'Shmem: 11035436 kB' 'KReclaimable: 547480 kB' 'Slab: 938796 kB' 'SReclaimable: 547480 kB' 'SUnreclaim: 391316 kB' 'KernelStack: 12896 kB' 'PageTables: 9288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12765992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196680 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 18229248 kB' 'DirectMap1G: 49283072 kB' 00:05:22.035 05:00:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.035 05:00:59 -- setup/common.sh@32 -- # continue [... identical IFS/read/continue iterations for the remaining /proc/meminfo keys (MemFree through HugePages_Rsvd) elided ...] 00:05:22.036 05:00:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.036 05:00:59 -- setup/common.sh@33 -- # echo 0 00:05:22.036 05:00:59 -- setup/common.sh@33 -- # return 0 00:05:22.036 05:00:59 -- setup/hugepages.sh@99 -- # surp=0 00:05:22.036 05:00:59 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:22.036 05:00:59 -- setup/common.sh@17 -- # local
get=HugePages_Rsvd 00:05:22.036 05:00:59 -- setup/common.sh@18 -- # local node= 00:05:22.036 05:00:59 -- setup/common.sh@19 -- # local var val 00:05:22.036 05:00:59 -- setup/common.sh@20 -- # local mem_f mem 00:05:22.036 05:00:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.036 05:00:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:22.036 05:00:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:22.036 05:00:59 -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.036 05:00:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.036 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.036 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.036 05:00:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39325552 kB' 'MemAvailable: 44424468 kB' 'Buffers: 3108 kB' 'Cached: 16228216 kB' 'SwapCached: 0 kB' 'Active: 12151028 kB' 'Inactive: 4630024 kB' 'Active(anon): 11585180 kB' 'Inactive(anon): 0 kB' 'Active(file): 565848 kB' 'Inactive(file): 4630024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553012 kB' 'Mapped: 197912 kB' 'Shmem: 11035452 kB' 'KReclaimable: 547480 kB' 'Slab: 938796 kB' 'SReclaimable: 547480 kB' 'SUnreclaim: 391316 kB' 'KernelStack: 12864 kB' 'PageTables: 9160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12766004 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196648 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 
18229248 kB' 'DirectMap1G: 49283072 kB' 00:05:22.036 05:00:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.036 05:00:59 -- setup/common.sh@32 -- # continue [... identical IFS/read/continue iterations for the intervening /proc/meminfo keys (MemFree through Committed_AS) elided ...] 00:05:22.037 05:00:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.037 05:00:59 --
setup/common.sh@32 -- # continue 00:05:22.037 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.037 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.037 05:00:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.037 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.037 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.037 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.037 05:00:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.037 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.037 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.037 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.037 05:00:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.037 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.037 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.037 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.037 05:00:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.037 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.037 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.037 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.037 05:00:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.037 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.037 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.037 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.037 05:00:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.037 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.037 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.037 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.037 05:00:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.037 05:00:59 -- 
setup/common.sh@32 -- # continue 00:05:22.037 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.037 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.037 05:00:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.037 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.037 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.037 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.037 05:00:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.037 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.037 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.037 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.037 05:00:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.037 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.037 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.037 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.037 05:00:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.037 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.037 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.037 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.037 05:00:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.037 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.037 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.037 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.037 05:00:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.037 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.037 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.037 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.037 05:00:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.037 05:00:59 -- setup/common.sh@32 -- 
# continue 00:05:22.037 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.037 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.037 05:00:59 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.037 05:00:59 -- setup/common.sh@33 -- # echo 0 00:05:22.037 05:00:59 -- setup/common.sh@33 -- # return 0 00:05:22.037 05:00:59 -- setup/hugepages.sh@100 -- # resv=0 00:05:22.038 05:00:59 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:22.038 nr_hugepages=1024 00:05:22.038 05:00:59 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:22.038 resv_hugepages=0 00:05:22.038 05:00:59 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:22.038 surplus_hugepages=0 00:05:22.038 05:00:59 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:22.038 anon_hugepages=0 00:05:22.038 05:00:59 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:22.038 05:00:59 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:22.038 05:00:59 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:22.038 05:00:59 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:22.038 05:00:59 -- setup/common.sh@18 -- # local node= 00:05:22.038 05:00:59 -- setup/common.sh@19 -- # local var val 00:05:22.038 05:00:59 -- setup/common.sh@20 -- # local mem_f mem 00:05:22.038 05:00:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.038 05:00:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:22.038 05:00:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:22.038 05:00:59 -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.038 05:00:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.038 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.038 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.038 05:00:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39326144 kB' 'MemAvailable: 44425060 kB' 
'Buffers: 3108 kB' 'Cached: 16228232 kB' 'SwapCached: 0 kB' 'Active: 12151040 kB' 'Inactive: 4630024 kB' 'Active(anon): 11585192 kB' 'Inactive(anon): 0 kB' 'Active(file): 565848 kB' 'Inactive(file): 4630024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553016 kB' 'Mapped: 197912 kB' 'Shmem: 11035468 kB' 'KReclaimable: 547480 kB' 'Slab: 938796 kB' 'SReclaimable: 547480 kB' 'SUnreclaim: 391316 kB' 'KernelStack: 12864 kB' 'PageTables: 9160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12766020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196648 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 18229248 kB' 'DirectMap1G: 49283072 kB' 00:05:22.038 05:00:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.038 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.038 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.038 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.038 05:00:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.038 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.038 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.038 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.038 05:00:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.038 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.038 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.038 05:00:59 
-- setup/common.sh@31 -- # read -r var val _ 00:05:22.038 05:00:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.038 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.038 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.038 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.038 05:00:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.038 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.038 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.038 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.038 05:00:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.038 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.038 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.038 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.038 05:00:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.038 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.038 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.038 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.038 05:00:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.038 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.038 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.038 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.038 05:00:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.038 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.038 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.038 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.038 05:00:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.038 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.038 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.038 05:00:59 -- setup/common.sh@31 -- 
# read -r var val _ 00:05:22.038 05:00:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.038 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.038 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.038 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.038 05:00:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.038 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.038 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.038 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.038 05:00:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.038 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.038 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.038 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.038 05:00:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.038 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.038 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.038 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.038 05:00:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.038 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.038 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.038 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.038 05:00:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.038 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.038 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.038 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.038 05:00:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.038 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.038 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.038 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 
00:05:22.038 05:00:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.038 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.038 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.038 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.038 05:00:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.038 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.038 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.038 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.038 05:00:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.038 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.038 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.038 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.038 05:00:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.038 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.038 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.038 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.038 05:00:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.038 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.038 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.038 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.038 05:00:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.038 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.038 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.038 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.038 05:00:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.039 05:00:59 -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.039 05:00:59 -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.039 05:00:59 -- 
setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.039 05:00:59 
-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.039 05:00:59 -- setup/common.sh@33 -- # echo 1024 00:05:22.039 05:00:59 -- setup/common.sh@33 -- # return 0 00:05:22.039 05:00:59 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:22.039 05:00:59 -- setup/hugepages.sh@112 -- # get_nodes 00:05:22.039 05:00:59 -- setup/hugepages.sh@27 -- # local node 00:05:22.039 05:00:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:22.039 05:00:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:22.039 05:00:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:22.039 05:00:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:22.039 05:00:59 -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:22.039 05:00:59 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:22.039 05:00:59 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:22.039 05:00:59 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:22.039 05:00:59 -- setup/hugepages.sh@117 -- # get_meminfo 
HugePages_Surp 0 00:05:22.039 05:00:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:22.039 05:00:59 -- setup/common.sh@18 -- # local node=0 00:05:22.039 05:00:59 -- setup/common.sh@19 -- # local var val 00:05:22.039 05:00:59 -- setup/common.sh@20 -- # local mem_f mem 00:05:22.039 05:00:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.039 05:00:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:22.039 05:00:59 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:22.039 05:00:59 -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.039 05:00:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.039 05:00:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 23679340 kB' 'MemUsed: 9150544 kB' 'SwapCached: 0 kB' 'Active: 6561004 kB' 'Inactive: 252100 kB' 'Active(anon): 6208116 kB' 'Inactive(anon): 0 kB' 'Active(file): 352888 kB' 'Inactive(file): 252100 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6643352 kB' 'Mapped: 72596 kB' 'AnonPages: 173048 kB' 'Shmem: 6038364 kB' 'KernelStack: 7336 kB' 'PageTables: 5152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 281008 kB' 'Slab: 504140 kB' 'SReclaimable: 281008 kB' 'SUnreclaim: 223132 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.039 05:00:59 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.039 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.039 05:00:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.040 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.040 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.040 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.040 05:00:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.040 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.040 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.040 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.040 05:00:59 -- setup/common.sh@32 -- # [[ 
Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.040 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.040 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.040 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.040 05:00:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.040 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.040 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.040 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.040 05:00:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.040 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.040 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.040 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.040 05:00:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.040 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.040 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.040 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.040 05:00:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.040 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.040 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.040 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.040 05:00:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.040 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.040 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.040 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.040 05:00:59 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.040 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.040 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.040 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.040 05:00:59 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.040 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.040 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.040 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.040 05:00:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.040 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.040 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.040 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.040 05:00:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.040 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.040 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.040 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.040 05:00:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.040 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.040 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.040 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.040 05:00:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.040 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.040 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.040 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.040 05:00:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.040 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.040 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.040 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.040 05:00:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.040 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.040 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.040 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.040 05:00:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:22.040 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.040 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.040 05:00:59 -- setup/common.sh@31 -- # read -r var val _ [repeated per-field xtrace scan (WritebackTmp through HugePages_Free, each '[[ field == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]' then 'continue') elided] 00:05:22.040 05:00:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.040 05:00:59 -- 
setup/common.sh@33 -- # echo 0 00:05:22.040 05:00:59 -- setup/common.sh@33 -- # return 0 00:05:22.040 05:00:59 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:22.040 05:00:59 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:22.040 05:00:59 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:22.040 05:00:59 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:22.040 05:00:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:22.040 05:00:59 -- setup/common.sh@18 -- # local node=1 00:05:22.040 05:00:59 -- setup/common.sh@19 -- # local var val 00:05:22.040 05:00:59 -- setup/common.sh@20 -- # local mem_f mem 00:05:22.040 05:00:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.040 05:00:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:22.040 05:00:59 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:22.040 05:00:59 -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.040 05:00:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.040 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.040 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.040 05:00:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 15647404 kB' 'MemUsed: 12064420 kB' 'SwapCached: 0 kB' 'Active: 5590440 kB' 'Inactive: 4377924 kB' 'Active(anon): 5377480 kB' 'Inactive(anon): 0 kB' 'Active(file): 212960 kB' 'Inactive(file): 4377924 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9588012 kB' 'Mapped: 125316 kB' 'AnonPages: 380412 kB' 'Shmem: 4997128 kB' 'KernelStack: 5512 kB' 'PageTables: 3956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 266472 kB' 'Slab: 434656 kB' 'SReclaimable: 266472 kB' 'SUnreclaim: 168184 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:22.040 05:00:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.040 05:00:59 -- setup/common.sh@32 -- # continue [repeated per-field xtrace scan (MemFree through HugePages_Total) elided] 00:05:22.041 05:00:59 -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.041 05:00:59 -- setup/common.sh@32 -- # continue 00:05:22.041 05:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.041 05:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.041 05:00:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.041 05:00:59 -- setup/common.sh@33 -- # echo 0 00:05:22.041 05:00:59 -- setup/common.sh@33 -- # return 0 00:05:22.041 05:00:59 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:22.041 05:00:59 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:22.041 05:00:59 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:22.041 05:00:59 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:22.041 05:00:59 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:22.041 node0=512 expecting 512 00:05:22.041 05:00:59 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:22.041 05:00:59 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:22.041 05:00:59 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:22.041 05:00:59 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:05:22.041 node1=512 expecting 512 00:05:22.041 05:00:59 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:22.041 00:05:22.041 real 0m1.483s 00:05:22.041 user 0m0.637s 00:05:22.041 sys 0m0.806s 00:05:22.041 05:00:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:22.041 05:00:59 -- common/autotest_common.sh@10 -- # set +x 00:05:22.041 ************************************ 00:05:22.041 END TEST per_node_1G_alloc 00:05:22.041 ************************************ 00:05:22.041 05:00:59 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:22.041 05:00:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:22.041 05:00:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:22.041 05:00:59 -- 
common/autotest_common.sh@10 -- # set +x 00:05:22.300 ************************************ 00:05:22.300 START TEST even_2G_alloc 00:05:22.300 ************************************ 00:05:22.300 05:00:59 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:05:22.300 05:00:59 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:22.300 05:00:59 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:22.300 05:00:59 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:22.300 05:00:59 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:22.300 05:00:59 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:22.300 05:00:59 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:22.300 05:00:59 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:22.300 05:00:59 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:22.300 05:00:59 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:22.300 05:00:59 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:22.300 05:00:59 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:22.300 05:00:59 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:22.300 05:00:59 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:22.300 05:00:59 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:22.300 05:00:59 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:22.300 05:00:59 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:22.300 05:00:59 -- setup/hugepages.sh@83 -- # : 512 00:05:22.300 05:00:59 -- setup/hugepages.sh@84 -- # : 1 00:05:22.300 05:00:59 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:22.300 05:00:59 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:22.300 05:00:59 -- setup/hugepages.sh@83 -- # : 0 00:05:22.300 05:00:59 -- setup/hugepages.sh@84 -- # : 0 00:05:22.300 05:00:59 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:22.300 05:00:59 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:22.300 05:00:59 -- setup/hugepages.sh@153 -- # 
HUGE_EVEN_ALLOC=yes 00:05:22.300 05:00:59 -- setup/hugepages.sh@153 -- # setup output 00:05:22.300 05:00:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:22.300 05:00:59 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:23.234 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:23.234 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:23.234 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:23.234 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:23.496 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:23.496 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:23.496 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:23.496 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:23.496 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:23.496 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:23.496 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:23.496 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:23.496 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:23.496 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:23.496 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:23.496 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:23.496 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:23.496 05:01:00 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:23.496 05:01:00 -- setup/hugepages.sh@89 -- # local node 00:05:23.496 05:01:00 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:23.496 05:01:00 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:23.496 05:01:00 -- setup/hugepages.sh@92 -- # local surp 00:05:23.496 05:01:00 -- setup/hugepages.sh@93 -- # local resv 00:05:23.496 05:01:00 -- setup/hugepages.sh@94 -- # local anon 00:05:23.496 05:01:00 -- 
setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:23.496 05:01:00 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:23.496 05:01:00 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:23.496 05:01:00 -- setup/common.sh@18 -- # local node= 00:05:23.496 05:01:00 -- setup/common.sh@19 -- # local var val 00:05:23.496 05:01:00 -- setup/common.sh@20 -- # local mem_f mem 00:05:23.496 05:01:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.496 05:01:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.496 05:01:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.496 05:01:00 -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.496 05:01:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.496 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.496 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.496 05:01:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39407404 kB' 'MemAvailable: 44506312 kB' 'Buffers: 3108 kB' 'Cached: 16228300 kB' 'SwapCached: 0 kB' 'Active: 12150248 kB' 'Inactive: 4630024 kB' 'Active(anon): 11584400 kB' 'Inactive(anon): 0 kB' 'Active(file): 565848 kB' 'Inactive(file): 4630024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552080 kB' 'Mapped: 197928 kB' 'Shmem: 11035536 kB' 'KReclaimable: 547472 kB' 'Slab: 938880 kB' 'SReclaimable: 547472 kB' 'SUnreclaim: 391408 kB' 'KernelStack: 12880 kB' 'PageTables: 9172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12764392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196568 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 18229248 kB' 'DirectMap1G: 49283072 kB' 00:05:23.496 05:01:00 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.496 05:01:00 -- setup/common.sh@32 -- # continue [repeated per-field xtrace scan (MemFree through HardwareCorrupted) elided] 00:05:23.497 05:01:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.497 05:01:00 -- setup/common.sh@33 -- # echo 0 00:05:23.497 05:01:00 -- setup/common.sh@33 -- # return 0 00:05:23.497 05:01:00 -- setup/hugepages.sh@97 -- # anon=0 00:05:23.497 05:01:00 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:23.497 05:01:00 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:23.497 05:01:00 -- setup/common.sh@18 -- # local node= 00:05:23.497 05:01:00 -- setup/common.sh@19 -- # local var val 
00:05:23.497 05:01:00 -- setup/common.sh@20 -- # local mem_f mem 00:05:23.497 05:01:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.497 05:01:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.497 05:01:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.497 05:01:00 -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.497 05:01:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.497 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.497 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.497 05:01:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39407584 kB' 'MemAvailable: 44506492 kB' 'Buffers: 3108 kB' 'Cached: 16228300 kB' 'SwapCached: 0 kB' 'Active: 12150360 kB' 'Inactive: 4630024 kB' 'Active(anon): 11584512 kB' 'Inactive(anon): 0 kB' 'Active(file): 565848 kB' 'Inactive(file): 4630024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552216 kB' 'Mapped: 197924 kB' 'Shmem: 11035536 kB' 'KReclaimable: 547472 kB' 'Slab: 938876 kB' 'SReclaimable: 547472 kB' 'SUnreclaim: 391404 kB' 'KernelStack: 12880 kB' 'PageTables: 9100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12764404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196536 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 18229248 kB' 'DirectMap1G: 49283072 kB' 00:05:23.497 05:01:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:23.497 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.497 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.497 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.497 05:01:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.497 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.497 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.497 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.497 05:01:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.497 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.497 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.497 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.497 05:01:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.497 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.497 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.497 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.497 05:01:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.497 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.497 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.497 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.497 05:01:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.497 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.497 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.497 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.497 05:01:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.497 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.497 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.498 05:01:00 -- setup/common.sh@32 -- 
# continue 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.498 
05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 
00:05:23.498 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.498 05:01:00 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.498 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.498 05:01:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # read -r var val 
_ 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.499 
05:01:00 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.499 05:01:00 -- setup/common.sh@33 -- # echo 0 00:05:23.499 05:01:00 -- setup/common.sh@33 -- # return 0 00:05:23.499 05:01:00 -- setup/hugepages.sh@99 -- # surp=0 00:05:23.499 05:01:00 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:23.499 05:01:00 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:23.499 05:01:00 -- setup/common.sh@18 -- # local node= 00:05:23.499 05:01:00 -- setup/common.sh@19 -- # local var val 00:05:23.499 05:01:00 -- setup/common.sh@20 -- # local mem_f mem 00:05:23.499 05:01:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.499 05:01:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.499 05:01:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.499 05:01:00 -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.499 05:01:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.499 05:01:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39408168 kB' 'MemAvailable: 44507076 kB' 'Buffers: 3108 kB' 'Cached: 16228300 kB' 'SwapCached: 0 kB' 'Active: 12150732 kB' 'Inactive: 4630024 kB' 'Active(anon): 11584884 kB' 'Inactive(anon): 0 kB' 'Active(file): 565848 kB' 'Inactive(file): 4630024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552576 kB' 'Mapped: 197924 kB' 'Shmem: 11035536 kB' 'KReclaimable: 547472 kB' 'Slab: 938860 kB' 
'SReclaimable: 547472 kB' 'SUnreclaim: 391388 kB' 'KernelStack: 12880 kB' 'PageTables: 9100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12764416 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196536 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 18229248 kB' 'DirectMap1G: 49283072 kB' 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # continue 
00:05:23.499 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.499 05:01:00 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.499 
05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.499 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.499 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.500 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.500 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.500 05:01:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.500 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.500 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.500 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.500 05:01:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.500 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.500 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.500 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.500 05:01:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.500 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.500 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.500 05:01:00 -- setup/common.sh@31 -- # read -r 
var val _ 00:05:23.500 05:01:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.500 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.500 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.500 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.500 05:01:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.500 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.500 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.500 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.500 05:01:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.500 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.500 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.500 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.500 05:01:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.500 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.500 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.500 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.500 05:01:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.500 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.500 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.500 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.500 05:01:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.500 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.500 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.500 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.500 05:01:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.500 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.500 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.500 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.500 
05:01:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.500 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.500 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.500 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.500 05:01:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.500 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.500 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.500 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.500 05:01:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.500 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.500 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.761 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.761 05:01:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.761 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.761 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.761 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.761 05:01:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.761 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.761 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.761 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.761 05:01:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.761 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.761 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.761 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.761 05:01:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.761 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.762 05:01:00 -- 
setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.762 05:01:00 -- setup/common.sh@32 
-- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.762 05:01:00 -- setup/common.sh@33 -- # echo 0 00:05:23.762 05:01:00 -- setup/common.sh@33 -- # return 0 00:05:23.762 05:01:00 -- setup/hugepages.sh@100 -- # resv=0 00:05:23.762 05:01:00 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:23.762 nr_hugepages=1024 00:05:23.762 05:01:00 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:23.762 resv_hugepages=0 00:05:23.762 05:01:00 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:23.762 surplus_hugepages=0 00:05:23.762 05:01:00 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:23.762 anon_hugepages=0 00:05:23.762 05:01:00 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:23.762 05:01:00 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:23.762 05:01:00 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:23.762 05:01:00 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:23.762 05:01:00 -- setup/common.sh@18 -- # local node= 00:05:23.762 05:01:00 -- setup/common.sh@19 -- # local var val 00:05:23.762 05:01:00 -- 
setup/common.sh@20 -- # local mem_f mem 00:05:23.762 05:01:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.762 05:01:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.762 05:01:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.762 05:01:00 -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.762 05:01:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.762 05:01:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39409552 kB' 'MemAvailable: 44508460 kB' 'Buffers: 3108 kB' 'Cached: 16228328 kB' 'SwapCached: 0 kB' 'Active: 12150668 kB' 'Inactive: 4630024 kB' 'Active(anon): 11584820 kB' 'Inactive(anon): 0 kB' 'Active(file): 565848 kB' 'Inactive(file): 4630024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552520 kB' 'Mapped: 197924 kB' 'Shmem: 11035564 kB' 'KReclaimable: 547472 kB' 'Slab: 938860 kB' 'SReclaimable: 547472 kB' 'SUnreclaim: 391388 kB' 'KernelStack: 12896 kB' 'PageTables: 9156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12766824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196600 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 18229248 kB' 'DirectMap1G: 49283072 kB' 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.762 05:01:00 -- 
setup/common.sh@32 -- # continue 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # continue 
00:05:23.762 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # continue 
00:05:23.762 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.762 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.762 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.763 05:01:00 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 
00:05:23.763 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 
00:05:23.763 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.763 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.763 05:01:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.763 05:01:00 -- setup/common.sh@33 -- # echo 1024 00:05:23.763 05:01:00 -- setup/common.sh@33 -- # return 0 00:05:23.763 05:01:00 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:23.763 05:01:00 -- setup/hugepages.sh@112 -- # get_nodes 00:05:23.763 05:01:00 -- setup/hugepages.sh@27 -- # local node 00:05:23.763 05:01:00 -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:05:23.763 05:01:00 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:23.763 05:01:00 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:23.763 05:01:00 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:23.763 05:01:00 -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:23.763 05:01:00 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:23.763 05:01:00 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:23.763 05:01:00 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:23.763 05:01:00 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:23.763 05:01:00 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:23.763 05:01:00 -- setup/common.sh@18 -- # local node=0 00:05:23.763 05:01:00 -- setup/common.sh@19 -- # local var val 00:05:23.763 05:01:00 -- setup/common.sh@20 -- # local mem_f mem 00:05:23.763 05:01:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.763 05:01:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:23.763 05:01:00 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:23.763 05:01:00 -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.763 05:01:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.764 05:01:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 23763932 kB' 'MemUsed: 9065952 kB' 'SwapCached: 0 kB' 'Active: 6558644 kB' 'Inactive: 252100 kB' 'Active(anon): 6205756 kB' 'Inactive(anon): 0 kB' 'Active(file): 352888 kB' 'Inactive(file): 252100 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6643388 kB' 'Mapped: 72596 kB' 'AnonPages: 170444 kB' 'Shmem: 6038400 kB' 'KernelStack: 7592 kB' 'PageTables: 5540 kB' 'SecPageTables: 
0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 281000 kB' 'Slab: 504060 kB' 'SReclaimable: 281000 kB' 'SUnreclaim: 223060 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.764 05:01:00 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.764 05:01:00 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # [[ HugePages_Total 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.764 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.764 05:01:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.765 05:01:00 -- setup/common.sh@33 -- # echo 0 00:05:23.765 05:01:00 -- setup/common.sh@33 -- # return 0 00:05:23.765 05:01:00 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:23.765 05:01:00 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:23.765 05:01:00 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:23.765 05:01:00 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:23.765 05:01:00 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:23.765 05:01:00 -- setup/common.sh@18 -- # local node=1 00:05:23.765 05:01:00 -- setup/common.sh@19 -- # local var val 00:05:23.765 05:01:00 -- setup/common.sh@20 -- # local mem_f mem 00:05:23.765 05:01:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.765 05:01:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:23.765 05:01:00 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:23.765 05:01:00 -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.765 05:01:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.765 05:01:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 15644508 kB' 'MemUsed: 12067316 kB' 'SwapCached: 0 kB' 
'Active: 5594972 kB' 'Inactive: 4377924 kB' 'Active(anon): 5382012 kB' 'Inactive(anon): 0 kB' 'Active(file): 212960 kB' 'Inactive(file): 4377924 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9588064 kB' 'Mapped: 125764 kB' 'AnonPages: 384944 kB' 'Shmem: 4997180 kB' 'KernelStack: 5496 kB' 'PageTables: 3964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 266472 kB' 'Slab: 434792 kB' 'SReclaimable: 266472 kB' 'SUnreclaim: 168320 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.765 
05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.765 05:01:00 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.765 
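The repeated `IFS=': ' / read -r var val _ / continue` trace above is setup/common.sh's get_meminfo helper scanning /proc/meminfo one key at a time. A minimal standalone sketch of that lookup pattern follows; the function name and field names come from the log, but reading from a temp file instead of the live /proc/meminfo (and the simplified single-file logic) is my assumption for illustration:

```shell
#!/usr/bin/env bash
# Sketch of the meminfo lookup loop seen in the trace: split each line on
# ": ", skip non-matching keys with `continue`, return the matching value.
mem_f=$(mktemp)
printf '%s\n' 'MemTotal: 60541708 kB' \
              'HugePages_Total: 1025' \
              'HugePages_Surp: 0' > "$mem_f"

get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Non-matching keys fall through to the next line; this is the
        # [[ key == pattern ]] / continue pair repeated in the trace.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$mem_f"
    echo 0
}

surp=$(get_meminfo HugePages_Surp)
total=$(get_meminfo HugePages_Total)
rm -f "$mem_f"
```

In the real script the loop also strips `Node <n> ` prefixes with `mapfile` so the same parser works for per-node meminfo files; that part is omitted here.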
05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.765 05:01:00 -- setup/common.sh@31 -- 
# read -r var val _ 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.765 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.765 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.766 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.766 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.766 05:01:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.766 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.766 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.766 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.766 05:01:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.766 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.766 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.766 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.766 05:01:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.766 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.766 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.766 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.766 05:01:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.766 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.766 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.766 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.766 05:01:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.766 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.766 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.766 05:01:00 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:23.766 05:01:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.766 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.766 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.766 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.766 05:01:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.766 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.766 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.766 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.766 05:01:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.766 05:01:00 -- setup/common.sh@32 -- # continue 00:05:23.766 05:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.766 05:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.766 05:01:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.766 05:01:00 -- setup/common.sh@33 -- # echo 0 00:05:23.766 05:01:00 -- setup/common.sh@33 -- # return 0 00:05:23.766 05:01:00 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:23.766 05:01:00 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:23.766 05:01:00 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:23.766 05:01:00 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:23.766 05:01:00 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:23.766 node0=512 expecting 512 00:05:23.766 05:01:00 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:23.766 05:01:00 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:23.766 05:01:00 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:23.766 05:01:00 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:05:23.766 node1=512 expecting 512 00:05:23.766 05:01:00 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:23.766 00:05:23.766 real 
0m1.437s 00:05:23.766 user 0m0.598s 00:05:23.766 sys 0m0.791s 00:05:23.766 05:01:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:23.766 05:01:00 -- common/autotest_common.sh@10 -- # set +x 00:05:23.766 ************************************ 00:05:23.766 END TEST even_2G_alloc 00:05:23.766 ************************************ 00:05:23.766 05:01:00 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:23.766 05:01:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:23.766 05:01:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:23.766 05:01:00 -- common/autotest_common.sh@10 -- # set +x 00:05:23.766 ************************************ 00:05:23.766 START TEST odd_alloc 00:05:23.766 ************************************ 00:05:23.766 05:01:00 -- common/autotest_common.sh@1111 -- # odd_alloc 00:05:23.766 05:01:00 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:23.766 05:01:00 -- setup/hugepages.sh@49 -- # local size=2098176 00:05:23.766 05:01:00 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:23.766 05:01:00 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:23.766 05:01:00 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:23.766 05:01:00 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:23.766 05:01:00 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:23.766 05:01:00 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:23.766 05:01:00 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:23.766 05:01:00 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:23.766 05:01:00 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:23.766 05:01:00 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:23.766 05:01:00 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:23.766 05:01:00 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:23.766 05:01:00 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:23.766 05:01:00 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 
1]=512 00:05:23.766 05:01:00 -- setup/hugepages.sh@83 -- # : 513 00:05:23.766 05:01:00 -- setup/hugepages.sh@84 -- # : 1 00:05:23.766 05:01:00 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:23.766 05:01:00 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:05:23.766 05:01:00 -- setup/hugepages.sh@83 -- # : 0 00:05:23.766 05:01:00 -- setup/hugepages.sh@84 -- # : 0 00:05:23.766 05:01:00 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:23.766 05:01:00 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:23.766 05:01:00 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:23.766 05:01:00 -- setup/hugepages.sh@160 -- # setup output 00:05:23.766 05:01:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:23.766 05:01:00 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:25.147 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:25.147 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:25.147 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:25.147 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:25.147 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:25.147 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:25.147 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:25.147 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:25.147 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:25.147 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:25.147 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:25.147 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:25.147 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:25.147 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:25.147 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:25.147 0000:80:04.1 (8086 0e21): Already using 
the vfio-pci driver 00:05:25.147 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:25.147 05:01:02 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:25.147 05:01:02 -- setup/hugepages.sh@89 -- # local node 00:05:25.147 05:01:02 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:25.147 05:01:02 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:25.147 05:01:02 -- setup/hugepages.sh@92 -- # local surp 00:05:25.147 05:01:02 -- setup/hugepages.sh@93 -- # local resv 00:05:25.147 05:01:02 -- setup/hugepages.sh@94 -- # local anon 00:05:25.147 05:01:02 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:25.147 05:01:02 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:25.147 05:01:02 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:25.147 05:01:02 -- setup/common.sh@18 -- # local node= 00:05:25.147 05:01:02 -- setup/common.sh@19 -- # local var val 00:05:25.147 05:01:02 -- setup/common.sh@20 -- # local mem_f mem 00:05:25.147 05:01:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.147 05:01:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:25.147 05:01:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:25.147 05:01:02 -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.147 05:01:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.147 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.147 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.147 05:01:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39380768 kB' 'MemAvailable: 44479604 kB' 'Buffers: 3108 kB' 'Cached: 16228400 kB' 'SwapCached: 0 kB' 'Active: 12147584 kB' 'Inactive: 4630024 kB' 'Active(anon): 11581736 kB' 'Inactive(anon): 0 kB' 'Active(file): 565848 kB' 'Inactive(file): 4630024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 
549412 kB' 'Mapped: 196864 kB' 'Shmem: 11035636 kB' 'KReclaimable: 547400 kB' 'Slab: 938768 kB' 'SReclaimable: 547400 kB' 'SUnreclaim: 391368 kB' 'KernelStack: 12800 kB' 'PageTables: 8760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 12750968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196520 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 18229248 kB' 'DirectMap1G: 49283072 kB' 00:05:25.147 05:01:02 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.147 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.147 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.147 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.147 05:01:02 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.147 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.147 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.147 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.147 05:01:02 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.147 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.147 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.147 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.147 05:01:02 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.147 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.147 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.147 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.147 05:01:02 -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.147 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.147 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.147 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.147 05:01:02 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.147 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.147 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.147 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.147 05:01:02 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.147 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.147 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.147 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.147 05:01:02 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.147 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.147 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.147 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.147 05:01:02 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.147 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.147 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.147 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.147 05:01:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.147 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.147 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.147 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.147 05:01:02 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.147 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.147 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.147 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.147 05:01:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:25.147 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.147 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.147 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.147 05:01:02 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.147 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.147 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.147 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.147 05:01:02 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.147 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.147 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.147 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.147 05:01:02 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.147 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.147 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.147 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.147 05:01:02 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.147 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.147 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.147 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.147 05:01:02 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.147 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.147 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.147 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.147 05:01:02 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.147 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.147 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.147 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.147 05:01:02 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.147 05:01:02 -- setup/common.sh@32 -- # continue 
00:05:25.147 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.147 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.147 05:01:02 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.147 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.147 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.147 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.147 05:01:02 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.147 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 
00:05:25.148 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.148 05:01:02 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.148 05:01:02 -- setup/common.sh@33 -- # echo 0 00:05:25.148 05:01:02 -- setup/common.sh@33 -- # return 0 00:05:25.148 05:01:02 -- setup/hugepages.sh@97 -- # anon=0 00:05:25.148 05:01:02 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:25.148 05:01:02 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:25.148 05:01:02 -- setup/common.sh@18 -- # local node= 00:05:25.148 05:01:02 -- setup/common.sh@19 -- # local var val 00:05:25.148 05:01:02 -- setup/common.sh@20 -- # local mem_f mem 00:05:25.148 05:01:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.148 05:01:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:25.148 05:01:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:25.148 05:01:02 -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.148 05:01:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.148 05:01:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39384356 kB' 'MemAvailable: 44483192 kB' 'Buffers: 3108 kB' 'Cached: 16228404 kB' 'SwapCached: 0 kB' 'Active: 12147772 kB' 'Inactive: 4630024 kB' 'Active(anon): 11581924 kB' 'Inactive(anon): 0 kB' 'Active(file): 565848 kB' 'Inactive(file): 4630024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 549616 kB' 'Mapped: 196844 kB' 'Shmem: 11035640 kB' 'KReclaimable: 547400 kB' 'Slab: 938768 kB' 'SReclaimable: 547400 kB' 'SUnreclaim: 391368 kB' 'KernelStack: 12816 kB' 'PageTables: 8776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 12750980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
196472 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 18229248 kB' 'DirectMap1G: 49283072 kB' 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # 
continue 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.148 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.148 
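The odd_alloc setup traced earlier (get_test_nr_hugepages_per_node with `nr_hugepages=1025`, `_no_nodes=2`) distributes an odd page count by giving each node the integer share and parking the remainder on node 0, which is why the trace assigns 512 to node 1 and then bumps node 0 to 513. A hedged sketch of that split, with variable names mirroring the trace but the standalone function wrapper being my own framing:

```shell
#!/usr/bin/env bash
# Sketch of hugepages.sh's per-node split for odd page counts:
# each node gets nr/nodes pages; the remainder lands on node 0.
split_pages() {
    local _nr_hugepages=$1 _no_nodes=$2
    local -a nodes_test
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$(( _nr_hugepages / $2 ))
        # Last iteration fills node 0; add the leftover page(s) there.
        (( _no_nodes == 1 )) && (( nodes_test[0] += _nr_hugepages % $2 ))
        (( _no_nodes-- ))
    done
    echo "${nodes_test[@]}"
}

split=$(split_pages 1025 2)   # node0=513, node1=512
```

This matches the later verification step, which expects `node0=513` and `node1=512` when checking the per-node HugePages_Total values.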
05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.148 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 
00:05:25.149 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.149 05:01:02 -- setup/common.sh@31 
-- # read -r var val _ 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 
00:05:25.149 05:01:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.149 
05:01:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.149 05:01:02 -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.149 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.149 05:01:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.149 05:01:02 -- setup/common.sh@33 -- # echo 0 00:05:25.149 05:01:02 -- setup/common.sh@33 -- # return 0 00:05:25.149 05:01:02 -- setup/hugepages.sh@99 -- # surp=0 00:05:25.149 05:01:02 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:25.149 05:01:02 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:25.149 05:01:02 -- setup/common.sh@18 -- # local node= 00:05:25.149 05:01:02 -- setup/common.sh@19 -- # local var val 00:05:25.149 05:01:02 -- setup/common.sh@20 -- # local mem_f mem 00:05:25.149 05:01:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.149 05:01:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:25.150 05:01:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:25.150 05:01:02 -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.150 05:01:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.150 05:01:02 -- setup/common.sh@16 -- # 
printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39384808 kB' 'MemAvailable: 44483644 kB' 'Buffers: 3108 kB' 'Cached: 16228412 kB' 'SwapCached: 0 kB' 'Active: 12147924 kB' 'Inactive: 4630024 kB' 'Active(anon): 11582076 kB' 'Inactive(anon): 0 kB' 'Active(file): 565848 kB' 'Inactive(file): 4630024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 549760 kB' 'Mapped: 196904 kB' 'Shmem: 11035648 kB' 'KReclaimable: 547400 kB' 'Slab: 938768 kB' 'SReclaimable: 547400 kB' 'SUnreclaim: 391368 kB' 'KernelStack: 12800 kB' 'PageTables: 8752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 12750996 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196456 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 18229248 kB' 'DirectMap1G: 49283072 kB' 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # 
continue 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.150 05:01:02 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 
00:05:25.150 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.150 
05:01:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.150 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.150 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.151 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.151 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.151 05:01:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.151 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.151 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.151 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.151 05:01:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.151 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.151 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.151 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.151 05:01:02 -- 
setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.151 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.151 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.151 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.151 05:01:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.151 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.151 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.151 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.151 05:01:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.151 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.151 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.151 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.151 05:01:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.151 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.151 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.151 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.151 05:01:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.151 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.151 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.151 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.151 05:01:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.151 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.151 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.151 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.151 05:01:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.151 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.151 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.151 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.151 05:01:02 -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.151 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.151 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.151 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.151 05:01:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.151 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.151 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.151 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.151 05:01:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.151 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.151 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.151 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.151 05:01:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.151 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.151 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.151 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.151 05:01:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.151 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.151 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.151 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.151 05:01:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.151 05:01:02 -- setup/common.sh@33 -- # echo 0 00:05:25.151 05:01:02 -- setup/common.sh@33 -- # return 0 00:05:25.151 05:01:02 -- setup/hugepages.sh@100 -- # resv=0 00:05:25.151 05:01:02 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:25.151 nr_hugepages=1025 00:05:25.151 05:01:02 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:25.151 resv_hugepages=0 00:05:25.151 05:01:02 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:25.151 surplus_hugepages=0 00:05:25.151 
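The trace above is the `get_meminfo` helper stepping through every /proc/meminfo field with `IFS=': '` and `read -r var val _`, executing `continue` for each non-matching key until the requested one (here `HugePages_Rsvd`) is reached, then echoing its value and returning. A minimal sketch of that scanning pattern, under the assumption that it mirrors the traced loop (the function name `get_meminfo_field` is illustrative, not from the script):

```shell
# Sketch (assumption: models the get_meminfo loop traced above).
# Reads meminfo-style "Key: value [unit]" lines on stdin and prints
# the value for the single requested key.
get_meminfo_field() {
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    # Skip every field until the requested one -- the "continue"
    # branch that dominates the trace output.
    [[ $var == "$get" ]] || continue
    echo "$val"
    return 0
  done
  return 1
}

printf 'HugePages_Total: 1025\nHugePages_Rsvd: 0\n' |
  get_meminfo_field HugePages_Rsvd
```

Because `IFS=': '` splits on both the colon and the following space, `var` receives the bare key and `val` the number, with any trailing unit (`kB`) falling into the throwaway `_` field.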
05:01:02 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:25.151 anon_hugepages=0 00:05:25.151 05:01:02 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:25.151 05:01:02 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:25.151 05:01:02 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:25.151 05:01:02 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:25.151 05:01:02 -- setup/common.sh@18 -- # local node= 00:05:25.151 05:01:02 -- setup/common.sh@19 -- # local var val 00:05:25.151 05:01:02 -- setup/common.sh@20 -- # local mem_f mem 00:05:25.151 05:01:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.151 05:01:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:25.151 05:01:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:25.151 05:01:02 -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.151 05:01:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.151 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.151 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.151 05:01:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39385688 kB' 'MemAvailable: 44484524 kB' 'Buffers: 3108 kB' 'Cached: 16228428 kB' 'SwapCached: 0 kB' 'Active: 12147628 kB' 'Inactive: 4630024 kB' 'Active(anon): 11581780 kB' 'Inactive(anon): 0 kB' 'Active(file): 565848 kB' 'Inactive(file): 4630024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 549468 kB' 'Mapped: 196904 kB' 'Shmem: 11035664 kB' 'KReclaimable: 547400 kB' 'Slab: 938768 kB' 'SReclaimable: 547400 kB' 'SUnreclaim: 391368 kB' 'KernelStack: 12800 kB' 'PageTables: 8752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 12751008 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196456 kB' 
'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 18229248 kB' 'DirectMap1G: 49283072 kB' 00:05:25.151 05:01:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.151 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.151 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.151 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.151 05:01:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.151 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.151 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.151 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.151 05:01:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.151 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.151 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.151 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.151 05:01:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.151 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.151 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.151 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.151 05:01:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.151 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.151 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.151 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.151 05:01:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.151 05:01:02 -- setup/common.sh@32 -- # 
continue 00:05:25.151 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.151 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.151 05:01:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.151 05:01:02 -- setup/common.sh@32 -- # continue [... identical IFS/read/compare/continue xtrace repeated for each remaining meminfo key (Inactive through Unaccepted) elided ...] 00:05:25.152 05:01:02 -- setup/common.sh@31 -- # IFS=': '
00:05:25.152 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.152 05:01:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.152 05:01:02 -- setup/common.sh@33 -- # echo 1025 00:05:25.153 05:01:02 -- setup/common.sh@33 -- # return 0 00:05:25.153 05:01:02 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:25.153 05:01:02 -- setup/hugepages.sh@112 -- # get_nodes 00:05:25.153 05:01:02 -- setup/hugepages.sh@27 -- # local node 00:05:25.153 05:01:02 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:25.153 05:01:02 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:25.153 05:01:02 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:25.153 05:01:02 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:05:25.153 05:01:02 -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:25.153 05:01:02 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:25.153 05:01:02 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:25.153 05:01:02 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:25.153 05:01:02 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:25.153 05:01:02 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:25.153 05:01:02 -- setup/common.sh@18 -- # local node=0 00:05:25.153 05:01:02 -- setup/common.sh@19 -- # local var val 00:05:25.153 05:01:02 -- setup/common.sh@20 -- # local mem_f mem 00:05:25.153 05:01:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.153 05:01:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:25.153 05:01:02 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:25.153 05:01:02 -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.153 05:01:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.153 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 
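The `get_meminfo HugePages_Surp 0` trace above switches the source file to `/sys/devices/system/node/node0/meminfo` when a node is given, then strips the `Node N ` prefix those lines carry so the keys match `/proc/meminfo`'s format. A sketch of just the prefix-stripping step on sample data (the sample lines are illustrative):

```shell
shopt -s extglob   # needed for the +([0-9]) extended pattern below
# Sample lines standing in for /sys/devices/system/node/node0/meminfo:
mem=('Node 0 MemTotal: 32829884 kB' 'Node 0 HugePages_Total: 512')
# Strip the "Node N " prefix, as in: mem=("${mem[@]#Node +([0-9]) }")
mem=("${mem[@]#Node +([0-9]) }")
printf '%s\n' "${mem[@]}"   # prints: MemTotal: 32829884 kB / HugePages_Total: 512
```

After the strip, the same key-matching loop can be reused unchanged for both the global and the per-node meminfo sources, which is exactly what the repeated xtrace shows.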
00:05:25.153 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.153 05:01:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 23755008 kB' 'MemUsed: 9074876 kB' 'SwapCached: 0 kB' 'Active: 6558272 kB' 'Inactive: 252100 kB' 'Active(anon): 6205384 kB' 'Inactive(anon): 0 kB' 'Active(file): 352888 kB' 'Inactive(file): 252100 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6643432 kB' 'Mapped: 71636 kB' 'AnonPages: 170180 kB' 'Shmem: 6038444 kB' 'KernelStack: 7368 kB' 'PageTables: 4932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 280928 kB' 'Slab: 504044 kB' 'SReclaimable: 280928 kB' 'SUnreclaim: 223116 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:25.153 05:01:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.153 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.153 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.153 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.153 05:01:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.153 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.153 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.153 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.153 05:01:02 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.153 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.153 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.153 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.153 05:01:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.153 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.153 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.153 
05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.153 05:01:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.153 05:01:02 -- setup/common.sh@32 -- # continue [... identical xtrace for the remaining node0 meminfo keys (Inactive through FileHugePages) elided ...] 00:05:25.154 05:01:02 -- setup/common.sh@32 --
# [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.154 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.154 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.154 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.154 05:01:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.154 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.154 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.154 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.154 05:01:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.154 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.154 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.154 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.154 05:01:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.154 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.154 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.154 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.154 05:01:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.154 05:01:02 -- setup/common.sh@33 -- # echo 0 00:05:25.154 05:01:02 -- setup/common.sh@33 -- # return 0 00:05:25.154 05:01:02 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:25.154 05:01:02 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:25.154 05:01:02 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:25.154 05:01:02 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:25.154 05:01:02 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:25.154 05:01:02 -- setup/common.sh@18 -- # local node=1 00:05:25.154 05:01:02 -- setup/common.sh@19 -- # local var val 00:05:25.154 05:01:02 -- setup/common.sh@20 -- # local mem_f mem 00:05:25.154 05:01:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.154 05:01:02 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:25.154 05:01:02 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:25.154 05:01:02 -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.154 05:01:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.154 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.154 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.154 05:01:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 15630176 kB' 'MemUsed: 12081648 kB' 'SwapCached: 0 kB' 'Active: 5589036 kB' 'Inactive: 4377924 kB' 'Active(anon): 5376076 kB' 'Inactive(anon): 0 kB' 'Active(file): 212960 kB' 'Inactive(file): 4377924 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9588108 kB' 'Mapped: 125268 kB' 'AnonPages: 378964 kB' 'Shmem: 4997224 kB' 'KernelStack: 5416 kB' 'PageTables: 3768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 266472 kB' 'Slab: 434724 kB' 'SReclaimable: 266472 kB' 'SUnreclaim: 168252 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:05:25.154 05:01:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.154 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.154 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.154 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.154 05:01:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.154 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.154 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.154 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.154 05:01:02 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.154 05:01:02 -- 
setup/common.sh@32 -- # continue [... identical xtrace for the remaining node1 meminfo keys (SwapCached through ShmemPmdMapped) elided ...] 00:05:25.155 05:01:02
-- setup/common.sh@31 -- # read -r var val _ 00:05:25.155 05:01:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.155 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.155 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.155 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.155 05:01:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.155 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.155 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.155 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.155 05:01:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.155 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.155 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.155 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.155 05:01:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.155 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.155 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.155 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.155 05:01:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.155 05:01:02 -- setup/common.sh@32 -- # continue 00:05:25.155 05:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.155 05:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.155 05:01:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.155 05:01:02 -- setup/common.sh@33 -- # echo 0 00:05:25.155 05:01:02 -- setup/common.sh@33 -- # return 0 00:05:25.155 05:01:02 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:25.155 05:01:02 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:25.155 05:01:02 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:25.155 05:01:02 -- setup/hugepages.sh@127 -- # 
sorted_s[nodes_sys[node]]=1 00:05:25.155 05:01:02 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:05:25.155 node0=512 expecting 513 00:05:25.155 05:01:02 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:25.155 05:01:02 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:25.155 05:01:02 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:25.155 05:01:02 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:05:25.155 node1=513 expecting 512 00:05:25.155 05:01:02 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:05:25.155 00:05:25.155 real 0m1.385s 00:05:25.155 user 0m0.603s 00:05:25.155 sys 0m0.735s 00:05:25.155 05:01:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:25.155 05:01:02 -- common/autotest_common.sh@10 -- # set +x 00:05:25.155 ************************************ 00:05:25.155 END TEST odd_alloc 00:05:25.155 ************************************ 00:05:25.155 05:01:02 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:25.155 05:01:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:25.155 05:01:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:25.155 05:01:02 -- common/autotest_common.sh@10 -- # set +x 00:05:25.414 ************************************ 00:05:25.414 START TEST custom_alloc 00:05:25.414 ************************************ 00:05:25.414 05:01:02 -- common/autotest_common.sh@1111 -- # custom_alloc 00:05:25.414 05:01:02 -- setup/hugepages.sh@167 -- # local IFS=, 00:05:25.414 05:01:02 -- setup/hugepages.sh@169 -- # local node 00:05:25.414 05:01:02 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:25.414 05:01:02 -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:25.414 05:01:02 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:25.414 05:01:02 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:25.414 05:01:02 -- setup/hugepages.sh@49 -- # local size=1048576 
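The `get_test_nr_hugepages`/`get_test_nr_hugepages_per_node` trace around this point converts a requested size into a page count and spreads it round-robin across NUMA nodes (512 pages over 2 nodes, 256 at a time). A minimal sketch of that arithmetic, with illustrative variable names rather than the actual `setup/hugepages.sh` internals:

```shell
#!/usr/bin/env bash
# Hedged sketch, not the SPDK script itself: turn a size request (in kB)
# into a hugepage count and distribute it round-robin across NUMA nodes.
default_hugepages=2048                       # kB, Hugepagesize from /proc/meminfo
size_kb=1048576                              # 1 GiB request, as logged above
nr_hugepages=$(( size_kb / default_hugepages ))   # 512 pages

no_nodes=2
declare -a nodes_test
pages_left=$nr_hugepages
node=$(( no_nodes - 1 ))                     # start from the last node, as the trace does
while (( pages_left > 0 )); do
    # give this node a 256-page slice, then step to the previous node
    nodes_test[node]=$(( ${nodes_test[node]:-0} + 256 ))
    pages_left=$(( pages_left - 256 ))
    node=$(( (node - 1 + no_nodes) % no_nodes ))
done
echo "node0=${nodes_test[0]:-0} node1=${nodes_test[1]:-0}"
```

With 512 pages and 2 nodes this lands on 256 pages per node, matching the repeated `nodes_test[_no_nodes - 1]=256` lines in the trace.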
00:05:25.414 05:01:02 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:25.414 05:01:02 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:25.414 05:01:02 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:25.414 05:01:02 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:25.414 05:01:02 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:25.414 05:01:02 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:25.414 05:01:02 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:25.414 05:01:02 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:25.414 05:01:02 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:25.414 05:01:02 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:25.414 05:01:02 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:25.414 05:01:02 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:25.414 05:01:02 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:25.414 05:01:02 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:25.414 05:01:02 -- setup/hugepages.sh@83 -- # : 256 00:05:25.414 05:01:02 -- setup/hugepages.sh@84 -- # : 1 00:05:25.414 05:01:02 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:25.414 05:01:02 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:25.414 05:01:02 -- setup/hugepages.sh@83 -- # : 0 00:05:25.414 05:01:02 -- setup/hugepages.sh@84 -- # : 0 00:05:25.414 05:01:02 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:25.414 05:01:02 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:25.414 05:01:02 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:05:25.414 05:01:02 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:05:25.414 05:01:02 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:25.414 05:01:02 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:25.414 05:01:02 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:25.414 05:01:02 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:25.414 05:01:02 -- 
setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:25.414 05:01:02 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:25.414 05:01:02 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:25.414 05:01:02 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:25.414 05:01:02 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:25.414 05:01:02 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:25.414 05:01:02 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:25.414 05:01:02 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:25.414 05:01:02 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:25.414 05:01:02 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:25.414 05:01:02 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:25.414 05:01:02 -- setup/hugepages.sh@78 -- # return 0 00:05:25.414 05:01:02 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:05:25.414 05:01:02 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:25.414 05:01:02 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:25.414 05:01:02 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:25.414 05:01:02 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:25.414 05:01:02 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:25.414 05:01:02 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:25.414 05:01:02 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:25.414 05:01:02 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:25.414 05:01:02 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:25.414 05:01:02 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:25.414 05:01:02 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:25.414 05:01:02 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:25.414 05:01:02 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:25.414 05:01:02 -- 
setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:25.414 05:01:02 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:05:25.414 05:01:02 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:25.414 05:01:02 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:25.414 05:01:02 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:25.414 05:01:02 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:05:25.414 05:01:02 -- setup/hugepages.sh@78 -- # return 0 00:05:25.414 05:01:02 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:05:25.414 05:01:02 -- setup/hugepages.sh@187 -- # setup output 00:05:25.414 05:01:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:25.414 05:01:02 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:26.347 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:26.347 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:26.347 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:26.347 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:26.347 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:26.347 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:26.347 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:26.347 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:26.347 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:26.347 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:26.347 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:26.347 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:26.347 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:26.347 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:26.347 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:26.347 0000:80:04.1 (8086 0e21): Already using the 
vfio-pci driver 00:05:26.347 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:26.609 05:01:03 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:05:26.609 05:01:03 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:26.609 05:01:03 -- setup/hugepages.sh@89 -- # local node 00:05:26.609 05:01:03 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:26.609 05:01:03 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:26.609 05:01:03 -- setup/hugepages.sh@92 -- # local surp 00:05:26.609 05:01:03 -- setup/hugepages.sh@93 -- # local resv 00:05:26.609 05:01:03 -- setup/hugepages.sh@94 -- # local anon 00:05:26.609 05:01:03 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:26.609 05:01:03 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:26.609 05:01:03 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:26.609 05:01:03 -- setup/common.sh@18 -- # local node= 00:05:26.609 05:01:03 -- setup/common.sh@19 -- # local var val 00:05:26.609 05:01:03 -- setup/common.sh@20 -- # local mem_f mem 00:05:26.609 05:01:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.609 05:01:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:26.609 05:01:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:26.609 05:01:03 -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.609 05:01:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.609 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.609 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:01:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 38338560 kB' 'MemAvailable: 43437396 kB' 'Buffers: 3108 kB' 'Cached: 16228500 kB' 'SwapCached: 0 kB' 'Active: 12147680 kB' 'Inactive: 4630024 kB' 'Active(anon): 11581832 kB' 'Inactive(anon): 0 kB' 'Active(file): 565848 kB' 'Inactive(file): 4630024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 
0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 549292 kB' 'Mapped: 197008 kB' 'Shmem: 11035736 kB' 'KReclaimable: 547400 kB' 'Slab: 938576 kB' 'SReclaimable: 547400 kB' 'SUnreclaim: 391176 kB' 'KernelStack: 12800 kB' 'PageTables: 8696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 12751196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196568 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 18229248 kB' 'DirectMap1G: 49283072 kB' 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # read -r var val 
_ 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:01:03 -- setup/common.sh@32 -- 
# [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:26.610 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # continue 
00:05:26.610 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.610 05:01:03 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.610 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 
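The long run of `IFS=': '` / `read -r var val _` / `continue` records above is one pass of the `get_meminfo` pattern: each `/proc/meminfo` field name is compared against the requested key, and only the matching line's value is echoed. A self-contained sketch of that loop, assuming illustrative names (the real `setup/common.sh` differs in detail); the optional second argument is added here purely so the sketch can be exercised against a file other than `/proc/meminfo`:

```shell
#!/usr/bin/env bash
# Hedged sketch of the meminfo-scanning loop traced above.
get_meminfo() {
    local get=$1 src=${2:-/proc/meminfo} var val _
    # IFS=': ' splits "MemTotal: 1024 kB" into var=MemTotal val=1024 _=kB
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < "$src"
    echo 0   # field not present: report 0, like the '# echo 0' records above
}

# Example usage on a live Linux box (guarded so the sketch stays portable):
[[ -r /proc/meminfo ]] && anon=$(get_meminfo AnonHugePages)
```

In the trace, every non-matching field produces one `continue` record, which is why a single `get_meminfo AnonHugePages` call spans dozens of log lines.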
00:05:26.611 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.611 05:01:03 -- setup/common.sh@33 -- # echo 0 00:05:26.611 05:01:03 -- setup/common.sh@33 -- # return 0 00:05:26.611 05:01:03 -- setup/hugepages.sh@97 -- # anon=0 00:05:26.611 05:01:03 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:26.611 05:01:03 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:26.611 05:01:03 -- setup/common.sh@18 -- # local node= 00:05:26.611 05:01:03 -- setup/common.sh@19 -- # local var val 00:05:26.611 05:01:03 -- setup/common.sh@20 -- # local mem_f mem 00:05:26.611 05:01:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.611 05:01:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:26.611 05:01:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:26.611 05:01:03 -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.611 05:01:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:01:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 38338760 kB' 'MemAvailable: 43437596 kB' 'Buffers: 3108 kB' 'Cached: 16228500 kB' 'SwapCached: 0 kB' 'Active: 12147444 kB' 'Inactive: 4630024 kB' 'Active(anon): 11581596 kB' 'Inactive(anon): 0 kB' 'Active(file): 565848 kB' 'Inactive(file): 4630024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 549056 kB' 'Mapped: 197004 kB' 'Shmem: 11035736 kB' 'KReclaimable: 547400 kB' 'Slab: 938560 kB' 'SReclaimable: 547400 kB' 'SUnreclaim: 391160 kB' 'KernelStack: 12736 kB' 'PageTables: 8488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 
'Committed_AS: 12751208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196552 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 18229248 kB' 'DirectMap1G: 49283072 kB' 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 
05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # continue 
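The verification step logged earlier (`hugepages.sh@126-130`, with its `node0=512 expecting 513` / `node1=513 expecting 512` messages) boils down to collecting the observed per-node counts, sorting them, and comparing against the expected distribution. A hedged sketch of that comparison, using the `nodes_hp[0]=512,nodes_hp[1]=1024` layout from this custom_alloc run; names are illustrative:

```shell
#!/usr/bin/env bash
# Hedged sketch: compare observed per-node hugepage counts against the
# expected allocation, order-insensitively, as the verify step does.
declare -A nodes_test=( [0]=512 [1]=1024 )   # observed counts per node
expected="512 1024"

# sort the observed values so node ordering cannot cause a false mismatch
actual=$(printf '%s\n' "${nodes_test[@]}" | sort -n | tr '\n' ' ')
actual=${actual% }

if [[ $actual == "$expected" ]]; then
    echo "hugepage distribution OK: $actual"
else
    echo "mismatch: got '$actual', expected '$expected'" >&2
fi
```

The 512 + 1024 split also explains the `HugePages_Total: 1536` value in the meminfo dumps above.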
00:05:26.611 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.612 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.612 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.612 05:01:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.612 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.612 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.612 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.612 05:01:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.612 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.612 05:01:03 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:26.612 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.612 05:01:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.612 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.612 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.612 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.612 05:01:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.612 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.612 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.612 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.612 05:01:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.612 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.612 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.612 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.612 05:01:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.612 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.612 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.612 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.612 05:01:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.612 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.612 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.612 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.612 05:01:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.612 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.612 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.612 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.612 05:01:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.612 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.612 05:01:03 -- setup/common.sh@31 
-- # IFS=': ' 00:05:26.612 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.612 05:01:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.612 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.612 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.612 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.612 05:01:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.612 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.612 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.612 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.612 05:01:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.612 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.612 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.612 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.612 05:01:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.612 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.612 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.612 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.612 05:01:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.612 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.612 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.612 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.612 05:01:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.612 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.612 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.612 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.612 05:01:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.612 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.612 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 
00:05:26.612 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.612 05:01:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.612 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.612 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.612 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.612 05:01:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.612 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.612 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.612 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.612 05:01:03 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.612 05:01:03 -- setup/common.sh@32 -- # continue 00:05:26.612 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.612 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.612 05:01:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.612 05:01:03 -- setup/common.sh@33 -- # echo 0 00:05:26.612 05:01:03 -- setup/common.sh@33 -- # return 0 00:05:26.612 05:01:03 -- setup/hugepages.sh@99 -- # surp=0 00:05:26.612 05:01:03 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:26.612 05:01:03 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:26.612 05:01:03 -- setup/common.sh@18 -- # local node= 00:05:26.612 05:01:03 -- setup/common.sh@19 -- # local var val 00:05:26.612 05:01:03 -- setup/common.sh@20 -- # local mem_f mem 00:05:26.612 05:01:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.612 05:01:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:26.612 05:01:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:26.612 05:01:03 -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.612 05:01:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.612 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.612 05:01:03 -- 
setup/common.sh@31 -- # read -r var val _
00:05:26.612 05:01:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 38338672 kB' 'MemAvailable: 43437508 kB' 'Buffers: 3108 kB' 'Cached: 16228512 kB' 'SwapCached: 0 kB' 'Active: 12147968 kB' 'Inactive: 4630024 kB' 'Active(anon): 11582120 kB' 'Inactive(anon): 0 kB' 'Active(file): 565848 kB' 'Inactive(file): 4630024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 549480 kB' 'Mapped: 196932 kB' 'Shmem: 11035748 kB' 'KReclaimable: 547400 kB' 'Slab: 938604 kB' 'SReclaimable: 547400 kB' 'SUnreclaim: 391204 kB' 'KernelStack: 12800 kB' 'PageTables: 8644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 12751224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196568 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 18229248 kB' 'DirectMap1G: 49283072 kB'
[... identical scan trace elided: every /proc/meminfo key from MemTotal through HugePages_Free is tested at setup/common.sh@32 against HugePages_Rsvd and skipped via "continue" ...]
00:05:26.613 05:01:03 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:26.613 05:01:03 -- setup/common.sh@33 -- # echo 0
00:05:26.613 05:01:03 -- setup/common.sh@33 -- # return 0
00:05:26.613 05:01:03 -- setup/hugepages.sh@100 -- # resv=0
00:05:26.613 05:01:03 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
nr_hugepages=1536
00:05:26.613 05:01:03 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:05:26.613 05:01:03 -- setup/hugepages.sh@104 -- # echo
surplus_hugepages=0
00:05:26.613 05:01:03 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:05:26.613 05:01:03 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:05:26.613 05:01:03 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:05:26.613 05:01:03 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:26.613 05:01:03 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:26.613 05:01:03 -- setup/common.sh@18 -- # local node=
00:05:26.613 05:01:03 -- setup/common.sh@19 -- # local var val
00:05:26.614 05:01:03 -- setup/common.sh@20 -- # local mem_f mem
00:05:26.614 05:01:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:26.614 05:01:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:26.614 05:01:03 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:26.614 05:01:03 -- setup/common.sh@28 -- # mapfile -t mem
00:05:26.614 05:01:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:26.614 05:01:03 -- setup/common.sh@31 -- # IFS=': '
00:05:26.614 05:01:03 -- setup/common.sh@31 -- # read -r var val _
00:05:26.614 05:01:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 38339360 kB' 'MemAvailable: 43438196 kB' 'Buffers: 3108 kB' 'Cached: 16228528 kB' 'SwapCached: 0 kB' 'Active: 12148208 kB' 'Inactive: 4630024 kB' 'Active(anon): 11582360 kB' 'Inactive(anon): 0 kB' 'Active(file): 565848 kB' 'Inactive(file): 4630024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 549560 kB' 'Mapped: 196932 kB' 'Shmem: 11035764 kB' 'KReclaimable: 547400 kB' 'Slab: 938604 kB' 'SReclaimable: 547400 kB' 'SUnreclaim: 391204 kB' 'KernelStack: 12848 kB' 'PageTables: 8748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 12751236 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196568 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 18229248 kB' 'DirectMap1G: 49283072 kB'
[... identical scan trace elided: each /proc/meminfo key from MemTotal through Unaccepted is tested at setup/common.sh@32 against HugePages_Total and skipped via "continue"; the log is truncated here ...]
00:05:26.615 05:01:03 --
setup/common.sh@32 -- # continue 00:05:26.615 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.615 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.615 05:01:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.615 05:01:03 -- setup/common.sh@33 -- # echo 1536 00:05:26.615 05:01:03 -- setup/common.sh@33 -- # return 0 00:05:26.615 05:01:03 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:26.615 05:01:03 -- setup/hugepages.sh@112 -- # get_nodes 00:05:26.615 05:01:03 -- setup/hugepages.sh@27 -- # local node 00:05:26.615 05:01:03 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:26.615 05:01:03 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:26.615 05:01:03 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:26.615 05:01:03 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:26.615 05:01:03 -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:26.615 05:01:03 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:26.615 05:01:03 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:26.615 05:01:03 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:26.615 05:01:03 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:26.615 05:01:03 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:26.615 05:01:03 -- setup/common.sh@18 -- # local node=0 00:05:26.615 05:01:03 -- setup/common.sh@19 -- # local var val 00:05:26.615 05:01:03 -- setup/common.sh@20 -- # local mem_f mem 00:05:26.615 05:01:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.615 05:01:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:26.615 05:01:03 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:26.615 05:01:03 -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.615 05:01:03 -- setup/common.sh@29 -- # 
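The xtrace above is a linear scan of /proc/meminfo (and, below, of the per-node /sys/devices/system/node/nodeN/meminfo files) for a single key. A minimal standalone sketch of that lookup, assuming the parts of setup/common.sh that the trace does not show:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo lookup traced above: mapfile the meminfo file,
# strip any "Node N " prefix, then IFS=': ' read each line and echo the
# value when the key matches. Exact setup/common.sh internals are assumed.
shopt -s extglob

get_meminfo() {
	local get=$1 node=$2
	local mem_f=/proc/meminfo
	# Per-node queries read the node-local file when it exists.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	local -a mem
	mapfile -t mem < "$mem_f"
	# Node files prefix every line with "Node N "; strip it so keys match.
	mem=("${mem[@]#Node +([0-9]) }")
	local line var val _
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		if [[ $var == "$get" ]]; then
			echo "$val"
			return 0
		fi
	done
	return 1
}
```

In this run, `get_meminfo HugePages_Total` would echo the system-wide count (1536 here) and `get_meminfo HugePages_Surp 0` the node-0 surplus.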
mem=("${mem[@]#Node +([0-9]) }") 00:05:26.615 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.615 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.615 05:01:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 23761260 kB' 'MemUsed: 9068624 kB' 'SwapCached: 0 kB' 'Active: 6558944 kB' 'Inactive: 252100 kB' 'Active(anon): 6206056 kB' 'Inactive(anon): 0 kB' 'Active(file): 352888 kB' 'Inactive(file): 252100 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6643540 kB' 'Mapped: 71636 kB' 'AnonPages: 170660 kB' 'Shmem: 6038552 kB' 'KernelStack: 7384 kB' 'PageTables: 4936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 280928 kB' 'Slab: 503880 kB' 'SReclaimable: 280928 kB' 'SUnreclaim: 222952 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' [xtrace condensed: the [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]], continue, IFS=': ', read -r var val _ pattern repeats for each non-matching node0 key, MemTotal through HugePages_Free, in the printf order above] 00:05:26.616 05:01:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.616 05:01:03 -- setup/common.sh@33 -- # echo 0 00:05:26.616 05:01:03 -- setup/common.sh@33 -- # return 0 00:05:26.616 05:01:03 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:26.616 05:01:03 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:26.616 05:01:03 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:26.616 05:01:03 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:26.616 05:01:03 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:26.616 05:01:03 -- setup/common.sh@18 -- # local node=1 00:05:26.616 05:01:03 -- setup/common.sh@19 -- # local var val 00:05:26.616 05:01:03 -- setup/common.sh@20 -- # local mem_f mem 
00:05:26.616 05:01:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.616 05:01:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:26.616 05:01:03 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:26.616 05:01:03 -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.616 05:01:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.616 05:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.616 05:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.616 05:01:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 14578044 kB' 'MemUsed: 13133780 kB' 'SwapCached: 0 kB' 'Active: 5589096 kB' 'Inactive: 4377924 kB' 'Active(anon): 5376136 kB' 'Inactive(anon): 0 kB' 'Active(file): 212960 kB' 'Inactive(file): 4377924 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9588112 kB' 'Mapped: 125296 kB' 'AnonPages: 378912 kB' 'Shmem: 4997228 kB' 'KernelStack: 5448 kB' 'PageTables: 3812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 266472 kB' 'Slab: 434724 kB' 'SReclaimable: 266472 kB' 'SUnreclaim: 168252 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' [xtrace condensed: the [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]], continue, IFS=': ', read -r var val _ pattern repeats for each non-matching node1 key, MemTotal through HugePages_Free, in the printf order above] 00:05:26.879 05:01:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.879 05:01:03 -- setup/common.sh@33 -- # echo 0 00:05:26.879 05:01:03 -- setup/common.sh@33 -- # return 0 00:05:26.879 05:01:03 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:26.879 05:01:03 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:26.879 05:01:03 -- setup/hugepages.sh@127 -- # 
sorted_t[nodes_test[node]]=1 00:05:26.879 05:01:03 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:26.880 05:01:03 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:26.880 node0=512 expecting 512 00:05:26.880 05:01:03 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:26.880 05:01:03 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:26.880 05:01:03 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:26.880 05:01:03 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:05:26.880 node1=1024 expecting 1024 00:05:26.880 05:01:03 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:05:26.880 00:05:26.880 real 0m1.424s 00:05:26.880 user 0m0.596s 00:05:26.880 sys 0m0.784s 00:05:26.880 05:01:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:26.880 05:01:03 -- common/autotest_common.sh@10 -- # set +x 00:05:26.880 ************************************ 00:05:26.880 END TEST custom_alloc 00:05:26.880 ************************************ 00:05:26.880 05:01:03 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:26.880 05:01:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:26.880 05:01:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:26.880 05:01:03 -- common/autotest_common.sh@10 -- # set +x 00:05:26.880 ************************************ 00:05:26.880 START TEST no_shrink_alloc 00:05:26.880 ************************************ 00:05:26.880 05:01:03 -- common/autotest_common.sh@1111 -- # no_shrink_alloc 00:05:26.880 05:01:03 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:26.880 05:01:03 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:26.880 05:01:03 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:26.880 05:01:03 -- setup/hugepages.sh@51 -- # shift 00:05:26.880 05:01:03 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:26.880 05:01:03 -- setup/hugepages.sh@52 -- # local 
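The "node0=512 expecting 512" / "node1=1024 expecting 1024" lines above come from comparing the observed per-node counts against the split read from sysfs. A compact sketch of that final gate, with the arrays hard-coded to this run's values; the real hugepages.sh builds them dynamically from the meminfo lookups traced earlier:

```shell
#!/usr/bin/env bash
# Sketch of the per-node verification traced above. nodes_sys holds the
# counts read from /sys/devices/system/node/node*/meminfo, nodes_test the
# counts the test expects; values are hard-coded to this run (512, 1024).
nodes_sys=(512 1024)
nodes_test=(512 1024)

check_nodes() {
	local node
	for node in "${!nodes_test[@]}"; do
		echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
	done
	# Final gate, as in the trace: "512,1024" must equal "512,1024".
	local IFS=,
	[[ "${nodes_test[*]}" == "${nodes_sys[*]}" ]]
}
```

The comma-joined comparison mirrors the trace's `[[ 512,1024 == \5\1\2\,\1\0\2\4 ]]` check that decides whether custom_alloc passes.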
node_ids 00:05:26.880 05:01:03 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:26.880 05:01:03 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:26.880 05:01:03 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:26.880 05:01:03 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:26.880 05:01:03 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:26.880 05:01:03 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:26.880 05:01:03 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:26.880 05:01:03 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:26.880 05:01:03 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:26.880 05:01:03 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:26.880 05:01:03 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:26.880 05:01:03 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:26.880 05:01:03 -- setup/hugepages.sh@73 -- # return 0 00:05:26.880 05:01:03 -- setup/hugepages.sh@198 -- # setup output 00:05:26.880 05:01:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:26.880 05:01:03 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:27.816 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:27.816 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:27.816 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:27.816 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:27.816 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:27.816 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:27.816 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:27.816 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:28.077 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:28.077 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:28.077 0000:80:04.6 (8086 0e26): Already using 
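The no_shrink_alloc setup traced above sizes its pool from the requested size (2097152 in the trace) divided by the 2048 kB default hugepage size, giving nr_hugepages=1024, all pinned to the one requested node. A sketch of that arithmetic; names follow the trace, but the full hugepages.sh logic is assumed:

```shell
#!/usr/bin/env bash
# Sketch of the get_test_nr_hugepages sizing traced above: divide the
# requested size by the default hugepage size, then give each explicitly
# requested node the full count. Names follow the trace; internals assumed.
default_hugepages=2048  # kB, matches 'Hugepagesize: 2048 kB' in the log

get_test_nr_hugepages() {
	local size=$1; shift      # requested pool size, e.g. 2097152
	local node_ids=("$@")     # explicit node list, e.g. (0)
	local nr_hugepages=$((size / default_hugepages))
	local -A nodes_test=()
	local node
	for node in "${node_ids[@]}"; do
		# With explicit node ids, each listed node gets the full count.
		nodes_test[$node]=$nr_hugepages
	done
	echo "nr_hugepages=$nr_hugepages nodes=${!nodes_test[*]}"
}
```

For this run, `get_test_nr_hugepages 2097152 0` reproduces the traced result: 1024 hugepages assigned to node 0.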
the vfio-pci driver 00:05:28.077 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:28.077 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:28.077 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:28.077 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:28.077 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:28.077 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:28.077 05:01:05 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:28.077 05:01:05 -- setup/hugepages.sh@89 -- # local node 00:05:28.077 05:01:05 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:28.077 05:01:05 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:28.077 05:01:05 -- setup/hugepages.sh@92 -- # local surp 00:05:28.077 05:01:05 -- setup/hugepages.sh@93 -- # local resv 00:05:28.077 05:01:05 -- setup/hugepages.sh@94 -- # local anon 00:05:28.077 05:01:05 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:28.077 05:01:05 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:28.077 05:01:05 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:28.077 05:01:05 -- setup/common.sh@18 -- # local node= 00:05:28.077 05:01:05 -- setup/common.sh@19 -- # local var val 00:05:28.077 05:01:05 -- setup/common.sh@20 -- # local mem_f mem 00:05:28.077 05:01:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:28.078 05:01:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:28.078 05:01:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:28.078 05:01:05 -- setup/common.sh@28 -- # mapfile -t mem 00:05:28.078 05:01:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:28.078 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.078 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.078 05:01:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39378696 kB' 'MemAvailable: 44477532 kB' 
'Buffers: 3108 kB' 'Cached: 16228588 kB' 'SwapCached: 0 kB' 'Active: 12148268 kB' 'Inactive: 4630024 kB' 'Active(anon): 11582420 kB' 'Inactive(anon): 0 kB' 'Active(file): 565848 kB' 'Inactive(file): 4630024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 549876 kB' 'Mapped: 196984 kB' 'Shmem: 11035824 kB' 'KReclaimable: 547400 kB' 'Slab: 938596 kB' 'SReclaimable: 547400 kB' 'SUnreclaim: 391196 kB' 'KernelStack: 12848 kB' 'PageTables: 8708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12751284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196568 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 18229248 kB' 'DirectMap1G: 49283072 kB' 00:05:28.078 05:01:05 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.078 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.078 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.078 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.078 05:01:05 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.078 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.078 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.078 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.078 05:01:05 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.078 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.078 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.078 05:01:05 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:28.078 05:01:05 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.078 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.078 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.078 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.078 05:01:05 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.078 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.078 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.078 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.078 05:01:05 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.078 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.078 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.078 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.078 05:01:05 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.078 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.078 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.078 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.078 05:01:05 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.078 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.078 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.078 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.078 05:01:05 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.078 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.078 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.078 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.078 05:01:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.078 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.078 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.078 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 
00:05:28.078 05:01:05 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.078 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.078 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.078 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.078 05:01:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.078 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.078 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.078 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.078 05:01:05 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.078 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.078 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.078 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.078 05:01:05 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.078 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.078 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.078 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.078 05:01:05 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.078 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.078 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.078 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.078 05:01:05 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.078 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.078 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.078 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.078 05:01:05 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.078 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.078 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.078 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.078 05:01:05 -- setup/common.sh@32 -- # 
[[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.078 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.078 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.078 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.078 05:01:05 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.078 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.078 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.078 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.078 05:01:05 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.078 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.078 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.078 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.078 05:01:05 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.078 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.078 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.078 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.078 05:01:05 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.078 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.078 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.078 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.078 05:01:05 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.078 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.078 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.078 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.078 05:01:05 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.078 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.079 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.079 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.079 05:01:05 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.079 05:01:05 
-- setup/common.sh@32 -- # continue 00:05:28.079 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.079 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.079 05:01:05 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.079 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.079 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.079 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.079 05:01:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.079 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.079 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.079 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.079 05:01:05 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.079 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.079 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.079 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.079 05:01:05 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.079 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.079 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.079 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.079 05:01:05 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.079 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.079 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.079 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.079 05:01:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.079 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.079 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.079 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.079 05:01:05 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.079 05:01:05 -- setup/common.sh@32 -- # continue 
00:05:28.079 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.079 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.079 05:01:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.079 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.079 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.079 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.079 05:01:05 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.079 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.079 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.079 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.079 05:01:05 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.079 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.079 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.079 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.079 05:01:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.079 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.079 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.079 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.079 05:01:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.079 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.079 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.079 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.079 05:01:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.079 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.079 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.079 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.079 05:01:05 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.079 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.079 05:01:05 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:28.079 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.079 05:01:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.079 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.079 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.079 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.079 05:01:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.079 05:01:05 -- setup/common.sh@33 -- # echo 0 00:05:28.079 05:01:05 -- setup/common.sh@33 -- # return 0 00:05:28.079 05:01:05 -- setup/hugepages.sh@97 -- # anon=0 00:05:28.079 05:01:05 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:28.079 05:01:05 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:28.079 05:01:05 -- setup/common.sh@18 -- # local node= 00:05:28.079 05:01:05 -- setup/common.sh@19 -- # local var val 00:05:28.079 05:01:05 -- setup/common.sh@20 -- # local mem_f mem 00:05:28.079 05:01:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:28.079 05:01:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:28.079 05:01:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:28.079 05:01:05 -- setup/common.sh@28 -- # mapfile -t mem 00:05:28.079 05:01:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:28.079 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.079 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.079 05:01:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39380232 kB' 'MemAvailable: 44479068 kB' 'Buffers: 3108 kB' 'Cached: 16228588 kB' 'SwapCached: 0 kB' 'Active: 12149180 kB' 'Inactive: 4630024 kB' 'Active(anon): 11583332 kB' 'Inactive(anon): 0 kB' 'Active(file): 565848 kB' 'Inactive(file): 4630024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 
'AnonPages: 550680 kB' 'Mapped: 196892 kB' 'Shmem: 11035824 kB' 'KReclaimable: 547400 kB' 'Slab: 938576 kB' 'SReclaimable: 547400 kB' 'SUnreclaim: 391176 kB' 'KernelStack: 12864 kB' 'PageTables: 8712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12752476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196600 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 18229248 kB' 'DirectMap1G: 49283072 kB' 00:05:28.079 05:01:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.079 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.079 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.079 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.079 05:01:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.079 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.079 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.079 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.079 05:01:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.079 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.079 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.079 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.079 05:01:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.079 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.079 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.080 05:01:05 -- setup/common.sh@32 
-- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.080 
05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # continue 
00:05:28.080 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.080 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.080 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.081 05:01:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.081 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.081 05:01:05 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:28.081 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.081 05:01:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.081 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.081 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.081 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.081 05:01:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.081 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.081 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.081 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.081 05:01:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.081 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.081 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.081 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.081 05:01:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.081 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.081 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.081 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.081 05:01:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.081 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.081 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.081 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.081 05:01:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.081 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.081 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.081 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.081 05:01:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.081 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.081 05:01:05 -- setup/common.sh@31 -- 
# IFS=': ' 00:05:28.081 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.081 05:01:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.081 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.081 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.081 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.081 05:01:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.081 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.081 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.081 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.081 05:01:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.081 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.081 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.081 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.081 05:01:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.081 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.081 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.081 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.081 05:01:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.081 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.081 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.081 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.081 05:01:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.081 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.081 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.081 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.081 05:01:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.081 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.081 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 
00:05:28.081 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.081 05:01:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.081 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.081 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.081 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.081 05:01:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.081 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.081 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.081 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.081 05:01:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.081 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.081 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.081 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.081 05:01:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.081 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.081 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.081 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.081 05:01:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.081 05:01:05 -- setup/common.sh@33 -- # echo 0 00:05:28.081 05:01:05 -- setup/common.sh@33 -- # return 0 00:05:28.081 05:01:05 -- setup/hugepages.sh@99 -- # surp=0 00:05:28.081 05:01:05 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:28.081 05:01:05 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:28.081 05:01:05 -- setup/common.sh@18 -- # local node= 00:05:28.081 05:01:05 -- setup/common.sh@19 -- # local var val 00:05:28.081 05:01:05 -- setup/common.sh@20 -- # local mem_f mem 00:05:28.081 05:01:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:28.081 05:01:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:28.081 
05:01:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:28.081 05:01:05 -- setup/common.sh@28 -- # mapfile -t mem 00:05:28.081 05:01:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:28.081 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.081 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.081 05:01:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39379732 kB' 'MemAvailable: 44478568 kB' 'Buffers: 3108 kB' 'Cached: 16228604 kB' 'SwapCached: 0 kB' 'Active: 12148680 kB' 'Inactive: 4630024 kB' 'Active(anon): 11582832 kB' 'Inactive(anon): 0 kB' 'Active(file): 565848 kB' 'Inactive(file): 4630024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 550220 kB' 'Mapped: 196972 kB' 'Shmem: 11035840 kB' 'KReclaimable: 547400 kB' 'Slab: 938544 kB' 'SReclaimable: 547400 kB' 'SUnreclaim: 391144 kB' 'KernelStack: 13056 kB' 'PageTables: 9520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12753720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196712 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 18229248 kB' 'DirectMap1G: 49283072 kB' 00:05:28.081 05:01:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.081 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.081 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.081 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.081 05:01:05 -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.081 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.081 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.082 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.082 05:01:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.082 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.082 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.082 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.082 05:01:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.082 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.082 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.082 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.082 05:01:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.082 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.082 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.082 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.082 05:01:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.082 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.082 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.082 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.082 05:01:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.082 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.082 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.082 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.082 05:01:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.082 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.082 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.082 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.082 05:01:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:28.082 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.082 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.082 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.082 05:01:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.082 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.082 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.082 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.082 05:01:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.082 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.082 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.082 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.082 05:01:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.082 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.082 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.082 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.082 05:01:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.082 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.082 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.082 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.082 05:01:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.082 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.082 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.082 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.082 05:01:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.082 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.082 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.082 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.082 05:01:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.082 05:01:05 -- 
setup/common.sh@32 -- # continue 00:05:28.082 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.082 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.082 05:01:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.082 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.082 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.082 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.082 05:01:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.082 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.082 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.082 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.082 05:01:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.343 05:01:05 
-- setup/common.sh@31 -- # IFS=': ' 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.343 
05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.343 05:01:05 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.343 05:01:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.343 05:01:05 -- setup/common.sh@33 -- # echo 0 00:05:28.343 05:01:05 -- setup/common.sh@33 -- # return 0 00:05:28.343 05:01:05 -- setup/hugepages.sh@100 
-- # resv=0 00:05:28.343 05:01:05 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:28.343 nr_hugepages=1024 00:05:28.343 05:01:05 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:28.343 resv_hugepages=0 00:05:28.343 05:01:05 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:28.343 surplus_hugepages=0 00:05:28.343 05:01:05 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:28.343 anon_hugepages=0 00:05:28.343 05:01:05 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:28.343 05:01:05 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:28.343 05:01:05 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:28.343 05:01:05 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:28.343 05:01:05 -- setup/common.sh@18 -- # local node= 00:05:28.343 05:01:05 -- setup/common.sh@19 -- # local var val 00:05:28.343 05:01:05 -- setup/common.sh@20 -- # local mem_f mem 00:05:28.343 05:01:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:28.343 05:01:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:28.343 05:01:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:28.343 05:01:05 -- setup/common.sh@28 -- # mapfile -t mem 00:05:28.343 05:01:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.343 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39378216 kB' 'MemAvailable: 44477052 kB' 'Buffers: 3108 kB' 'Cached: 16228616 kB' 'SwapCached: 0 kB' 'Active: 12149628 kB' 'Inactive: 4630024 kB' 'Active(anon): 11583780 kB' 'Inactive(anon): 0 kB' 'Active(file): 565848 kB' 'Inactive(file): 4630024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 551112 kB' 'Mapped: 196972 kB' 
'Shmem: 11035852 kB' 'KReclaimable: 547400 kB' 'Slab: 938544 kB' 'SReclaimable: 547400 kB' 'SUnreclaim: 391144 kB' 'KernelStack: 13216 kB' 'PageTables: 10256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12753732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196824 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 18229248 kB' 'DirectMap1G: 49283072 kB' 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- 
setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- 
setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- 
setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- 
setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.344 05:01:05 -- setup/common.sh@33 -- # echo 1024 00:05:28.344 05:01:05 -- setup/common.sh@33 -- # return 0 00:05:28.344 05:01:05 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:28.344 05:01:05 -- setup/hugepages.sh@112 -- # get_nodes 00:05:28.344 05:01:05 -- setup/hugepages.sh@27 -- # local node 00:05:28.344 05:01:05 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:28.344 05:01:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:28.344 05:01:05 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:28.344 05:01:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:28.344 05:01:05 -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:28.344 05:01:05 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:28.344 05:01:05 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:28.344 05:01:05 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:28.344 05:01:05 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:28.344 05:01:05 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:28.344 05:01:05 -- setup/common.sh@18 -- # local node=0 00:05:28.344 05:01:05 -- setup/common.sh@19 -- # local var val 00:05:28.344 05:01:05 -- setup/common.sh@20 -- # local mem_f mem 00:05:28.344 05:01:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:28.344 05:01:05 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:28.344 05:01:05 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:28.344 05:01:05 -- setup/common.sh@28 -- # mapfile -t mem 00:05:28.344 05:01:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 22720760 kB' 'MemUsed: 10109124 kB' 'SwapCached: 0 kB' 'Active: 6559192 kB' 'Inactive: 252100 kB' 'Active(anon): 6206304 kB' 'Inactive(anon): 0 kB' 'Active(file): 352888 kB' 'Inactive(file): 252100 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6643620 kB' 'Mapped: 71636 kB' 'AnonPages: 170804 kB' 'Shmem: 6038632 kB' 'KernelStack: 7368 kB' 'PageTables: 4832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 280928 kB' 'Slab: 503848 kB' 'SReclaimable: 280928 kB' 'SUnreclaim: 222920 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.344 05:01:05 -- 
setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # 
continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.344 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.344 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.345 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.345 05:01:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.345 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.345 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.345 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.345 05:01:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.345 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.345 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.345 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.345 05:01:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.345 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.345 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.345 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.345 05:01:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.345 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.345 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.345 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.345 05:01:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.345 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.345 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.345 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.345 05:01:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.345 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.345 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 
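For readability: the trace above is the field-scan loop inside setup/common.sh's get_meminfo, which reads "key: value" lines with IFS=': ', skips non-matching keys with continue, and echoes the value once the requested key matches. A minimal self-contained sketch of that pattern follows; the helper name get_field and the canned sample (values taken from this log) are illustrative, not part of the SPDK scripts.

```shell
# Scan "key: value" lines and print the value for one requested key,
# mirroring the IFS=': ' / read -r / continue loop traced above.
get_field() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

# Stand-in for /proc/meminfo (or a per-node meminfo file).
sample='MemTotal: 32829884 kB
HugePages_Total: 1024
HugePages_Free: 1024
HugePages_Surp: 0'

get_field HugePages_Total <<<"$sample"   # prints 1024
get_field HugePages_Surp  <<<"$sample"   # prints 0
```

Splitting on both ':' and ' ' at once is what lets a single read pull the key into var and the number into val while the trailing "kB" unit falls into the throwaway _ field.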
00:05:28.345 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.345 05:01:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.345 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.345 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.345 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.345 05:01:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.345 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.345 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.345 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.345 05:01:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.345 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.345 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.345 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.345 05:01:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.345 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.345 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.345 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.345 05:01:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.345 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.345 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.345 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.345 05:01:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.345 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.345 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.345 05:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.345 05:01:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.345 05:01:05 -- setup/common.sh@32 -- # continue 00:05:28.345 05:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.345 05:01:05 
-- setup/common.sh@31 -- # read -r var val _
00:05:28.345 05:01:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:28.345 05:01:05 -- setup/common.sh@32 -- # continue
00:05:28.345 05:01:05 -- setup/common.sh@31 -- # IFS=': '
00:05:28.345 05:01:05 -- setup/common.sh@31 -- # read -r var val _
00:05:28.345 05:01:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:28.345 05:01:05 -- setup/common.sh@32 -- # continue
00:05:28.345 05:01:05 -- setup/common.sh@31 -- # IFS=': '
00:05:28.345 05:01:05 -- setup/common.sh@31 -- # read -r var val _
00:05:28.345 05:01:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:28.345 05:01:05 -- setup/common.sh@32 -- # continue
00:05:28.345 05:01:05 -- setup/common.sh@31 -- # IFS=': '
00:05:28.345 05:01:05 -- setup/common.sh@31 -- # read -r var val _
00:05:28.345 05:01:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:28.345 05:01:05 -- setup/common.sh@32 -- # continue
00:05:28.345 05:01:05 -- setup/common.sh@31 -- # IFS=': '
00:05:28.345 05:01:05 -- setup/common.sh@31 -- # read -r var val _
00:05:28.345 05:01:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:28.345 05:01:05 -- setup/common.sh@32 -- # continue
00:05:28.345 05:01:05 -- setup/common.sh@31 -- # IFS=': '
00:05:28.345 05:01:05 -- setup/common.sh@31 -- # read -r var val _
00:05:28.345 05:01:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:28.345 05:01:05 -- setup/common.sh@33 -- # echo 0
00:05:28.345 05:01:05 -- setup/common.sh@33 -- # return 0
00:05:28.345 05:01:05 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:28.345 05:01:05 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:28.345 05:01:05 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:28.345 05:01:05 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:28.345 05:01:05 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:28.345 node0=1024 expecting 1024
00:05:28.345 05:01:05 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:28.345 05:01:05 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:05:28.345 05:01:05 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:28.345 05:01:05 -- setup/hugepages.sh@202 -- # setup output
00:05:28.345 05:01:05 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:28.345 05:01:05 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:29.284 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:29.284 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:29.284 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:29.284 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:29.284 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:29.284 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:29.284 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:29.284 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:29.284 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:29.284 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:29.284 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:29.284 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:29.284 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:29.284 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:29.284 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:29.284 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:29.284 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:29.547 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:05:29.547 05:01:06 -- setup/hugepages.sh@204 -- #
verify_nr_hugepages
00:05:29.547 05:01:06 -- setup/hugepages.sh@89 -- # local node
00:05:29.547 05:01:06 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:29.547 05:01:06 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:29.547 05:01:06 -- setup/hugepages.sh@92 -- # local surp
00:05:29.547 05:01:06 -- setup/hugepages.sh@93 -- # local resv
00:05:29.547 05:01:06 -- setup/hugepages.sh@94 -- # local anon
00:05:29.547 05:01:06 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:29.547 05:01:06 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:29.547 05:01:06 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:29.547 05:01:06 -- setup/common.sh@18 -- # local node=
00:05:29.547 05:01:06 -- setup/common.sh@19 -- # local var val
00:05:29.547 05:01:06 -- setup/common.sh@20 -- # local mem_f mem
00:05:29.547 05:01:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:29.547 05:01:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:29.547 05:01:06 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:29.547 05:01:06 -- setup/common.sh@28 -- # mapfile -t mem
00:05:29.547 05:01:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:29.547 05:01:06 -- setup/common.sh@31 -- # IFS=': '
00:05:29.547 05:01:06 -- setup/common.sh@31 -- # read -r var val _
00:05:29.547 05:01:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39363980 kB' 'MemAvailable: 44462816 kB' 'Buffers: 3108 kB' 'Cached: 16228672 kB' 'SwapCached: 0 kB' 'Active: 12152224 kB' 'Inactive: 4630024 kB' 'Active(anon): 11586376 kB' 'Inactive(anon): 0 kB' 'Active(file): 565848 kB' 'Inactive(file): 4630024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553636 kB' 'Mapped: 197476 kB' 'Shmem: 11035908 kB' 'KReclaimable: 547400 kB' 'Slab: 938852 kB' 'SReclaimable: 547400 kB' 'SUnreclaim: 391452 kB'
'KernelStack: 12848 kB' 'PageTables: 8664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12756156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196616 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 18229248 kB' 'DirectMap1G: 49283072 kB' 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.547 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.547 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.547 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.547 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.547 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.547 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.547 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.547 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.547 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 
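In the trace just above, get_meminfo ran mapfile -t mem followed by mem=("${mem[@]#Node +([0-9]) }"): lines in a per-node meminfo file (e.g. /sys/devices/system/node/node0/meminfo) carry a "Node N " prefix that must be stripped before the key scan, and the script does this with an extglob pattern in a parameter expansion. A small standalone sketch of that strip; the sample array is illustrative, with values mirroring this log.

```shell
# "+([0-9])" is an extglob pattern (one or more digits), so it must be
# enabled before the prefix-removal expansion is evaluated.
shopt -s extglob

# Stand-in for lines read from /sys/devices/system/node/node0/meminfo.
mem=('Node 0 MemTotal: 32829884 kB' 'Node 0 HugePages_Total: 1024')

# Strip the "Node <digits> " prefix from every element, as in the trace.
mem=("${mem[@]#Node +([0-9]) }")

printf '%s\n' "${mem[@]}"
# prints:
#   MemTotal: 32829884 kB
#   HugePages_Total: 1024
```

After the strip, per-node meminfo lines have the same "key: value" shape as /proc/meminfo, so the same scan loop handles both sources.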
00:05:29.547 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.547 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.547 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.547 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.547 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.547 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.547 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.547 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.547 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.547 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.547 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.547 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.547 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.547 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.547 05:01:06 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.547 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.547 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.547 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.547 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.547 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.547 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.547 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.547 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.547 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.547 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.547 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.547 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.547 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.547 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.547 05:01:06 
-- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.547 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.547 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.547 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.547 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.547 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.547 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.547 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.547 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.547 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.547 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.547 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
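The verify_nr_hugepages pass being traced here opened (at setup/hugepages.sh@96) with [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]: the kernel marks the active transparent-hugepage mode by bracketing it in /sys/kernel/mm/transparent_hugepage/enabled, and the AnonHugePages lookup only matters when that active mode is not [never]. A sketch of that gate; thp_state is a stand-in for the sysfs read, with the value seen in this run.

```shell
# Stand-in for: cat /sys/kernel/mm/transparent_hugepage/enabled
# The bracketed token is the currently active THP mode.
thp_state='always [madvise] never'

# Same glob-match pattern as the traced check: proceed unless the
# active mode is [never].
if [[ $thp_state != *"[never]"* ]]; then
    echo "THP active; AnonHugePages may be nonzero"
fi
```

Quoting "[never]" inside the pattern keeps the brackets literal, while the surrounding bare asterisks still glob, matching how the xtrace escapes each bracketed character.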
00:05:29.548 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.548 05:01:06 -- 
setup/common.sh@33 -- # echo 0
00:05:29.548 05:01:06 -- setup/common.sh@33 -- # return 0
00:05:29.548 05:01:06 -- setup/hugepages.sh@97 -- # anon=0
00:05:29.548 05:01:06 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:29.548 05:01:06 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:29.548 05:01:06 -- setup/common.sh@18 -- # local node=
00:05:29.548 05:01:06 -- setup/common.sh@19 -- # local var val
00:05:29.548 05:01:06 -- setup/common.sh@20 -- # local mem_f mem
00:05:29.548 05:01:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:29.548 05:01:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:29.548 05:01:06 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:29.548 05:01:06 -- setup/common.sh@28 -- # mapfile -t mem
00:05:29.548 05:01:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:29.548 05:01:06 -- setup/common.sh@31 -- # IFS=': '
00:05:29.548 05:01:06 -- setup/common.sh@31 -- # read -r var val _
00:05:29.548 05:01:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39359588 kB' 'MemAvailable: 44458424 kB' 'Buffers: 3108 kB' 'Cached: 16228680 kB' 'SwapCached: 0 kB' 'Active: 12154276 kB' 'Inactive: 4630024 kB' 'Active(anon): 11588428 kB' 'Inactive(anon): 0 kB' 'Active(file): 565848 kB' 'Inactive(file): 4630024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555740 kB' 'Mapped: 197956 kB' 'Shmem: 11035916 kB' 'KReclaimable: 547400 kB' 'Slab: 938848 kB' 'SReclaimable: 547400 kB' 'SUnreclaim: 391448 kB' 'KernelStack: 12880 kB' 'PageTables: 8764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12757632 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196588 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB'
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 18229248 kB' 'DirectMap1G: 49283072 kB' 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # read -r 
var val _ 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.548 
05:01:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.548 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.548 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:29.549 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.549 05:01:06 -- 
setup/common.sh@32 -- # continue 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.549 05:01:06 -- 
setup/common.sh@32 -- # continue 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.549 05:01:06 -- setup/common.sh@32 -- 
# continue 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.549 05:01:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.549 05:01:06 -- setup/common.sh@33 -- # echo 0 00:05:29.549 05:01:06 -- setup/common.sh@33 -- # return 0 00:05:29.549 05:01:06 -- setup/hugepages.sh@99 -- # surp=0 00:05:29.549 05:01:06 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:29.549 05:01:06 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:29.549 05:01:06 -- setup/common.sh@18 -- # local node= 00:05:29.549 05:01:06 -- setup/common.sh@19 -- # local var val 00:05:29.549 05:01:06 -- setup/common.sh@20 -- # local mem_f mem 00:05:29.549 05:01:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:29.549 05:01:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:29.549 05:01:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:29.549 05:01:06 -- setup/common.sh@28 -- # mapfile -t mem 00:05:29.549 05:01:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.549 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.550 05:01:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39360024 kB' 'MemAvailable: 44458860 kB' 'Buffers: 3108 kB' 'Cached: 16228684 kB' 
'SwapCached: 0 kB' 'Active: 12148612 kB' 'Inactive: 4630024 kB' 'Active(anon): 11582764 kB' 'Inactive(anon): 0 kB' 'Active(file): 565848 kB' 'Inactive(file): 4630024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 549636 kB' 'Mapped: 197056 kB' 'Shmem: 11035920 kB' 'KReclaimable: 547400 kB' 'Slab: 938864 kB' 'SReclaimable: 547400 kB' 'SUnreclaim: 391464 kB' 'KernelStack: 12864 kB' 'PageTables: 8728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12751528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196584 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 18229248 kB' 'DirectMap1G: 49283072 kB' 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 
00:05:29.550 05:01:06 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.550 05:01:06 -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # [[ Zswapped 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.550 
05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.550 05:01:06 -- 
setup/common.sh@32 -- # continue 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.550 05:01:06 -- setup/common.sh@32 -- # 
continue 00:05:29.550 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.551 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.551 05:01:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.551 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.551 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.551 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.551 05:01:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.551 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.551 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.551 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.551 05:01:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.551 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.551 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.551 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.551 05:01:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.551 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.551 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.551 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.551 05:01:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.551 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.551 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.551 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.551 05:01:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.551 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.551 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.551 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.551 05:01:06 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.551 05:01:06 -- setup/common.sh@32 -- # continue 
00:05:29.551 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.551 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.551 05:01:06 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.551 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.551 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.551 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.551 05:01:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.551 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.551 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.551 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.551 05:01:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.551 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.551 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.551 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.551 05:01:06 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.551 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.551 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.551 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.551 05:01:06 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.551 05:01:06 -- setup/common.sh@33 -- # echo 0 00:05:29.551 05:01:06 -- setup/common.sh@33 -- # return 0 00:05:29.551 05:01:06 -- setup/hugepages.sh@100 -- # resv=0 00:05:29.551 05:01:06 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:29.551 nr_hugepages=1024 00:05:29.551 05:01:06 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:29.551 resv_hugepages=0 00:05:29.551 05:01:06 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:29.551 surplus_hugepages=0 00:05:29.551 05:01:06 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:29.551 anon_hugepages=0 00:05:29.551 05:01:06 -- 
setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:29.551 05:01:06 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:29.551 05:01:06 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:29.551 05:01:06 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:29.551 05:01:06 -- setup/common.sh@18 -- # local node= 00:05:29.551 05:01:06 -- setup/common.sh@19 -- # local var val 00:05:29.551 05:01:06 -- setup/common.sh@20 -- # local mem_f mem 00:05:29.551 05:01:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:29.551 05:01:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:29.551 05:01:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:29.551 05:01:06 -- setup/common.sh@28 -- # mapfile -t mem 00:05:29.551 05:01:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:29.551 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.551 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.551 05:01:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39360232 kB' 'MemAvailable: 44459068 kB' 'Buffers: 3108 kB' 'Cached: 16228688 kB' 'SwapCached: 0 kB' 'Active: 12148048 kB' 'Inactive: 4630024 kB' 'Active(anon): 11582200 kB' 'Inactive(anon): 0 kB' 'Active(file): 565848 kB' 'Inactive(file): 4630024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 549552 kB' 'Mapped: 197012 kB' 'Shmem: 11035924 kB' 'KReclaimable: 547400 kB' 'Slab: 938864 kB' 'SReclaimable: 547400 kB' 'SUnreclaim: 391464 kB' 'KernelStack: 12880 kB' 'PageTables: 8756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12751540 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196600 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 18229248 kB' 'DirectMap1G: 49283072 kB' 00:05:29.551 05:01:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.551 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.551 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.551 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.551 05:01:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.551 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.551 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.551 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.551 05:01:06 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.551 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.551 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.551 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.551 05:01:06 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.551 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.551 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.551 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.551 05:01:06 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.551 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.551 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.551 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.551 05:01:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.551 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.551 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.551 05:01:06 -- setup/common.sh@31 -- 
# read -r var val _ 00:05:29.551 05:01:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.551 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.551 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.551 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.551 05:01:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.551 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.551 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.551 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.551 05:01:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.551 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.551 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.551 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.551 05:01:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.551 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.551 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.551 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.551 05:01:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.551 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.551 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.551 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.551 05:01:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.551 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # read -r 
var val _ 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.552 05:01:06 -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # [[ Committed_AS 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.552 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.552 05:01:06 -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.552 05:01:06 -- setup/common.sh@33 -- # echo 1024 00:05:29.552 05:01:06 -- setup/common.sh@33 -- # return 0 00:05:29.552 05:01:06 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:29.552 05:01:06 -- setup/hugepages.sh@112 -- # get_nodes 00:05:29.552 05:01:06 -- setup/hugepages.sh@27 -- # local node 00:05:29.552 05:01:06 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:29.552 05:01:06 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:29.552 05:01:06 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:29.552 05:01:06 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:29.552 05:01:06 -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:29.552 05:01:06 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:29.552 05:01:06 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:29.552 05:01:06 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:29.552 05:01:06 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:29.552 05:01:06 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:29.552 05:01:06 -- setup/common.sh@18 -- # local node=0 00:05:29.552 05:01:06 -- setup/common.sh@19 -- # local var val 00:05:29.553 05:01:06 -- setup/common.sh@20 -- # local mem_f mem 00:05:29.553 05:01:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:29.553 05:01:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:29.553 05:01:06 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:29.553 05:01:06 -- setup/common.sh@28 -- # mapfile -t mem 00:05:29.553 05:01:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.553 05:01:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
32829884 kB' 'MemFree: 22691252 kB' 'MemUsed: 10138632 kB' 'SwapCached: 0 kB' 'Active: 6559668 kB' 'Inactive: 252100 kB' 'Active(anon): 6206780 kB' 'Inactive(anon): 0 kB' 'Active(file): 352888 kB' 'Inactive(file): 252100 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6643692 kB' 'Mapped: 71668 kB' 'AnonPages: 171284 kB' 'Shmem: 6038704 kB' 'KernelStack: 7448 kB' 'PageTables: 5040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 280928 kB' 'Slab: 504040 kB' 'SReclaimable: 280928 kB' 'SUnreclaim: 223112 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.553 
05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.553 05:01:06 -- setup/common.sh@32 -- 
# continue 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # continue 
00:05:29.553 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.553 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.553 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.554 05:01:06 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.554 05:01:06 -- setup/common.sh@32 -- # continue 00:05:29.554 05:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.554 05:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.554 05:01:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.554 05:01:06 -- setup/common.sh@33 -- # echo 0 00:05:29.554 05:01:06 -- setup/common.sh@33 -- # return 0 00:05:29.554 05:01:06 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:29.554 05:01:06 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:29.554 05:01:06 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:29.554 05:01:06 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:29.554 05:01:06 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:29.554 node0=1024 expecting 1024 00:05:29.554 05:01:06 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:29.554 00:05:29.554 real 0m2.812s 00:05:29.554 user 0m1.169s 00:05:29.554 sys 0m1.562s 00:05:29.554 05:01:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:29.554 05:01:06 -- common/autotest_common.sh@10 -- # set +x 00:05:29.554 
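The long run of `continue` lines above is the `get_meminfo` helper in `setup/common.sh` scanning every `/proc/meminfo` (or per-node `meminfo`) line with `IFS=': ' read -r var val _` until the key matches the escaped pattern, then echoing the value. A minimal, self-contained sketch of that idiom — parsing a canned sample rather than the live `/proc/meminfo` so the result is reproducible; the sample values are illustrative, not the run's real numbers:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo idiom traced above: split each
# "Key: value kB" line on ': ' and print the value for the
# requested key. Returns non-zero if the key is absent.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

# Illustrative sample in /proc/meminfo format.
sample='MemTotal: 32829884 kB
HugePages_Total: 1024
HugePages_Free: 1024
HugePages_Surp: 0'

get_meminfo HugePages_Total <<<"$sample"   # prints: 1024
```

The real helper additionally strips a leading `Node N ` prefix when reading a per-node `/sys/devices/system/node/nodeN/meminfo`, which is why the trace shows `mem=("${mem[@]#Node +([0-9]) }")` before the read loop.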
************************************ 00:05:29.554 END TEST no_shrink_alloc 00:05:29.554 ************************************ 00:05:29.812 05:01:06 -- setup/hugepages.sh@217 -- # clear_hp 00:05:29.812 05:01:06 -- setup/hugepages.sh@37 -- # local node hp 00:05:29.812 05:01:06 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:29.812 05:01:06 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:29.812 05:01:06 -- setup/hugepages.sh@41 -- # echo 0 00:05:29.812 05:01:06 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:29.812 05:01:06 -- setup/hugepages.sh@41 -- # echo 0 00:05:29.812 05:01:06 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:29.812 05:01:06 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:29.812 05:01:06 -- setup/hugepages.sh@41 -- # echo 0 00:05:29.812 05:01:06 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:29.812 05:01:06 -- setup/hugepages.sh@41 -- # echo 0 00:05:29.812 05:01:06 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:29.812 05:01:06 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:29.812 00:05:29.812 real 0m11.741s 00:05:29.812 user 0m4.521s 00:05:29.812 sys 0m6.032s 00:05:29.812 05:01:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:29.812 05:01:06 -- common/autotest_common.sh@10 -- # set +x 00:05:29.812 ************************************ 00:05:29.812 END TEST hugepages 00:05:29.812 ************************************ 00:05:29.812 05:01:06 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:29.812 05:01:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:29.812 05:01:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:29.812 05:01:06 -- common/autotest_common.sh@10 -- # 
set +x 00:05:29.812 ************************************ 00:05:29.812 START TEST driver 00:05:29.812 ************************************ 00:05:29.812 05:01:06 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:29.812 * Looking for test storage... 00:05:29.812 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:29.812 05:01:07 -- setup/driver.sh@68 -- # setup reset 00:05:29.812 05:01:07 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:29.812 05:01:07 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:32.366 05:01:09 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:32.366 05:01:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:32.366 05:01:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:32.366 05:01:09 -- common/autotest_common.sh@10 -- # set +x 00:05:32.630 ************************************ 00:05:32.630 START TEST guess_driver 00:05:32.630 ************************************ 00:05:32.630 05:01:09 -- common/autotest_common.sh@1111 -- # guess_driver 00:05:32.630 05:01:09 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:32.630 05:01:09 -- setup/driver.sh@47 -- # local fail=0 00:05:32.630 05:01:09 -- setup/driver.sh@49 -- # pick_driver 00:05:32.630 05:01:09 -- setup/driver.sh@36 -- # vfio 00:05:32.630 05:01:09 -- setup/driver.sh@21 -- # local iommu_grups 00:05:32.630 05:01:09 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:32.630 05:01:09 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:32.630 05:01:09 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:32.630 05:01:09 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:32.630 05:01:09 -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:05:32.630 05:01:09 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:32.630 05:01:09 -- 
setup/driver.sh@14 -- # mod vfio_pci 00:05:32.630 05:01:09 -- setup/driver.sh@12 -- # dep vfio_pci 00:05:32.630 05:01:09 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:32.630 05:01:09 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:32.630 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:32.630 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:32.630 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:32.630 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:32.630 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:32.630 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:32.630 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:32.630 05:01:09 -- setup/driver.sh@30 -- # return 0 00:05:32.630 05:01:09 -- setup/driver.sh@37 -- # echo vfio-pci 00:05:32.630 05:01:09 -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:32.630 05:01:09 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:32.630 05:01:09 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:05:32.630 Looking for driver=vfio-pci 00:05:32.630 05:01:09 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:32.630 05:01:09 -- setup/driver.sh@45 -- # setup output config 00:05:32.630 05:01:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:32.630 05:01:09 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:33.563 05:01:10 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:33.563 05:01:10 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:33.563 05:01:10 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:33.820 05:01:10 -- 
setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:33.820 05:01:10 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:33.820 05:01:10 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:33.820 05:01:10 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:33.820 05:01:10 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:33.820 05:01:10 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:33.820 05:01:10 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:33.820 05:01:10 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:33.820 05:01:10 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:33.821 05:01:10 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:33.821 05:01:10 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:33.821 05:01:10 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:33.821 05:01:10 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:33.821 05:01:10 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:33.821 05:01:10 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:33.821 05:01:10 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:33.821 05:01:10 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:33.821 05:01:10 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:33.821 05:01:10 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:33.821 05:01:10 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:33.821 05:01:10 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:33.821 05:01:10 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:33.821 05:01:10 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:33.821 05:01:10 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:33.821 05:01:10 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:33.821 05:01:10 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:33.821 05:01:10 -- setup/driver.sh@57 -- # read -r _ _ _ _ 
marker setup_driver 00:05:33.821 05:01:10 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:33.821 05:01:10 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:33.821 05:01:10 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:33.821 05:01:10 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:33.821 05:01:10 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:33.821 05:01:10 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:33.821 05:01:10 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:33.821 05:01:10 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:33.821 05:01:10 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:33.821 05:01:10 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:33.821 05:01:10 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:33.821 05:01:10 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:33.821 05:01:10 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:33.821 05:01:10 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:33.821 05:01:10 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:33.821 05:01:10 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:33.821 05:01:10 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:33.821 05:01:10 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:34.765 05:01:11 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:34.765 05:01:11 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:34.765 05:01:11 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:34.765 05:01:12 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:34.765 05:01:12 -- setup/driver.sh@65 -- # setup reset 00:05:34.765 05:01:12 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:34.765 05:01:12 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:37.292 00:05:37.292 real 0m4.709s 00:05:37.292 user 0m1.076s 
00:05:37.292 sys 0m1.757s 00:05:37.292 05:01:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:37.292 05:01:14 -- common/autotest_common.sh@10 -- # set +x 00:05:37.292 ************************************ 00:05:37.292 END TEST guess_driver 00:05:37.292 ************************************ 00:05:37.292 00:05:37.292 real 0m7.410s 00:05:37.292 user 0m1.693s 00:05:37.292 sys 0m2.843s 00:05:37.292 05:01:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:37.292 05:01:14 -- common/autotest_common.sh@10 -- # set +x 00:05:37.292 ************************************ 00:05:37.292 END TEST driver 00:05:37.292 ************************************ 00:05:37.292 05:01:14 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:37.293 05:01:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:37.293 05:01:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.293 05:01:14 -- common/autotest_common.sh@10 -- # set +x 00:05:37.293 ************************************ 00:05:37.293 START TEST devices 00:05:37.293 ************************************ 00:05:37.293 05:01:14 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:37.293 * Looking for test storage... 
00:05:37.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:37.293 05:01:14 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:37.293 05:01:14 -- setup/devices.sh@192 -- # setup reset 00:05:37.293 05:01:14 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:37.293 05:01:14 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:39.196 05:01:15 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:39.196 05:01:15 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:39.196 05:01:15 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:39.196 05:01:15 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:39.196 05:01:15 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:39.196 05:01:15 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:39.196 05:01:15 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:39.196 05:01:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:39.196 05:01:15 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:39.196 05:01:15 -- setup/devices.sh@196 -- # blocks=() 00:05:39.196 05:01:15 -- setup/devices.sh@196 -- # declare -a blocks 00:05:39.196 05:01:15 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:39.196 05:01:15 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:39.196 05:01:15 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:39.196 05:01:15 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:39.196 05:01:15 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:39.196 05:01:15 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:39.196 05:01:15 -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:05:39.196 05:01:15 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:05:39.196 05:01:15 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:39.196 05:01:15 -- scripts/common.sh@378 
-- # local block=nvme0n1 pt 00:05:39.196 05:01:15 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:39.196 No valid GPT data, bailing 00:05:39.196 05:01:16 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:39.196 05:01:16 -- scripts/common.sh@391 -- # pt= 00:05:39.196 05:01:16 -- scripts/common.sh@392 -- # return 1 00:05:39.196 05:01:16 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:39.196 05:01:16 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:39.196 05:01:16 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:39.196 05:01:16 -- setup/common.sh@80 -- # echo 1000204886016 00:05:39.196 05:01:16 -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:05:39.196 05:01:16 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:39.196 05:01:16 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:05:39.196 05:01:16 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:39.196 05:01:16 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:39.196 05:01:16 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:39.196 05:01:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:39.196 05:01:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.196 05:01:16 -- common/autotest_common.sh@10 -- # set +x 00:05:39.196 ************************************ 00:05:39.196 START TEST nvme_mount 00:05:39.196 ************************************ 00:05:39.196 05:01:16 -- common/autotest_common.sh@1111 -- # nvme_mount 00:05:39.196 05:01:16 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:39.196 05:01:16 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:39.196 05:01:16 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:39.196 05:01:16 -- setup/devices.sh@98 -- # 
nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:39.196 05:01:16 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:39.196 05:01:16 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:39.196 05:01:16 -- setup/common.sh@40 -- # local part_no=1 00:05:39.196 05:01:16 -- setup/common.sh@41 -- # local size=1073741824 00:05:39.196 05:01:16 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:39.196 05:01:16 -- setup/common.sh@44 -- # parts=() 00:05:39.196 05:01:16 -- setup/common.sh@44 -- # local parts 00:05:39.196 05:01:16 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:39.196 05:01:16 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:39.196 05:01:16 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:39.196 05:01:16 -- setup/common.sh@46 -- # (( part++ )) 00:05:39.196 05:01:16 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:39.196 05:01:16 -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:39.196 05:01:16 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:39.196 05:01:16 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:40.129 Creating new GPT entries in memory. 00:05:40.129 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:40.129 other utilities. 00:05:40.129 05:01:17 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:40.129 05:01:17 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:40.129 05:01:17 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:40.129 05:01:17 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:40.129 05:01:17 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:41.062 Creating new GPT entries in memory. 00:05:41.062 The operation has completed successfully. 
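Before partitioning, the trace gated the disk on `min_disk_size=3221225472` (3 GiB), and `sec_size_to_bytes` reported 1000204886016 bytes, which passes. A plausible reconstruction of that gate is below; the helper body and the hard-coded sector count (the 512-byte-sector figure implied by the logged byte size, normally read from `/sys/block/<dev>/size`) are assumptions of mine:

```shell
# Sketch of the minimum-size gate from devices.sh@198/204 above.
min_disk_size=3221225472   # 3 GiB, as declared in the trace

sec_size_to_bytes() {
    local sectors=$1
    # The block layer reports size in 512-byte sectors regardless of LBA format.
    echo $((sectors * 512))
}

# 1953525168 sectors is the count implied by the logged 1000204886016 bytes.
disk_bytes=$(sec_size_to_bytes 1953525168)
(( disk_bytes >= min_disk_size )) && echo "disk is large enough"
# prints: disk is large enough
```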
00:05:41.062 05:01:18 -- setup/common.sh@57 -- # (( part++ )) 00:05:41.062 05:01:18 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:41.062 05:01:18 -- setup/common.sh@62 -- # wait 1752499 00:05:41.062 05:01:18 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:41.062 05:01:18 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:41.062 05:01:18 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:41.062 05:01:18 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:41.062 05:01:18 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:41.062 05:01:18 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:41.062 05:01:18 -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:41.062 05:01:18 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:41.062 05:01:18 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:41.062 05:01:18 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:41.062 05:01:18 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:41.062 05:01:18 -- setup/devices.sh@53 -- # local found=0 00:05:41.062 05:01:18 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:41.062 05:01:18 -- setup/devices.sh@56 -- # : 00:05:41.062 05:01:18 -- setup/devices.sh@59 -- # local pci status 00:05:41.062 05:01:18 -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:05:41.062 05:01:18 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:41.062 05:01:18 -- setup/devices.sh@47 -- # setup output config 00:05:41.062 05:01:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:41.062 05:01:18 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:42.441 05:01:19 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.441 05:01:19 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:42.441 05:01:19 -- setup/devices.sh@63 -- # found=1 00:05:42.441 05:01:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.441 05:01:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.441 05:01:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.441 05:01:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.441 05:01:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.441 05:01:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.441 05:01:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.441 05:01:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.441 05:01:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.441 05:01:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.441 05:01:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.441 05:01:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.441 05:01:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.441 05:01:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.441 05:01:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.441 05:01:19 -- 
setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.441 05:01:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.441 05:01:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.441 05:01:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.441 05:01:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.441 05:01:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.441 05:01:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.441 05:01:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.441 05:01:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.441 05:01:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.441 05:01:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.441 05:01:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.441 05:01:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.441 05:01:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.441 05:01:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.441 05:01:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.441 05:01:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.441 05:01:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.441 05:01:19 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:42.441 05:01:19 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:42.441 05:01:19 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:42.441 05:01:19 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:42.441 
05:01:19 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:42.441 05:01:19 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:42.441 05:01:19 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:42.441 05:01:19 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:42.441 05:01:19 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:42.441 05:01:19 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:42.441 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:42.441 05:01:19 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:42.441 05:01:19 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:42.701 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:42.701 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:42.701 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:42.701 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:42.701 05:01:19 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:42.701 05:01:19 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:42.701 05:01:19 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:42.701 05:01:19 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:42.701 05:01:19 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:42.701 05:01:19 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:42.701 05:01:19 -- setup/devices.sh@116 -- # verify 0000:88:00.0 
nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:42.701 05:01:19 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:42.701 05:01:19 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:42.701 05:01:19 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:42.701 05:01:19 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:42.701 05:01:19 -- setup/devices.sh@53 -- # local found=0 00:05:42.701 05:01:19 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:42.701 05:01:19 -- setup/devices.sh@56 -- # : 00:05:42.701 05:01:19 -- setup/devices.sh@59 -- # local pci status 00:05:42.701 05:01:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.701 05:01:19 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:42.701 05:01:19 -- setup/devices.sh@47 -- # setup output config 00:05:42.701 05:01:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:42.701 05:01:19 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:44.083 05:01:20 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:44.084 05:01:20 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:44.084 05:01:20 -- setup/devices.sh@63 -- # found=1 00:05:44.084 05:01:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.084 05:01:20 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:44.084 05:01:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.084 05:01:20 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:44.084 05:01:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.084 05:01:20 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:44.084 05:01:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.084 05:01:20 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:44.084 05:01:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.084 05:01:20 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:44.084 05:01:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.084 05:01:20 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:44.084 05:01:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.084 05:01:20 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:44.084 05:01:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.084 05:01:20 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:44.084 05:01:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.084 05:01:20 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:44.084 05:01:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.084 05:01:20 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:44.084 05:01:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.084 05:01:20 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:44.084 05:01:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.084 05:01:20 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:44.084 05:01:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.084 05:01:20 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:44.084 05:01:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.084 05:01:20 -- 
setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:44.084 05:01:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.084 05:01:20 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:44.084 05:01:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.084 05:01:20 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:44.084 05:01:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.084 05:01:21 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:44.084 05:01:21 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:44.084 05:01:21 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:44.084 05:01:21 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:44.084 05:01:21 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:44.084 05:01:21 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:44.084 05:01:21 -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:05:44.084 05:01:21 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:44.084 05:01:21 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:44.084 05:01:21 -- setup/devices.sh@50 -- # local mount_point= 00:05:44.084 05:01:21 -- setup/devices.sh@51 -- # local test_file= 00:05:44.084 05:01:21 -- setup/devices.sh@53 -- # local found=0 00:05:44.084 05:01:21 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:44.084 05:01:21 -- setup/devices.sh@59 -- # local pci status 00:05:44.084 05:01:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.084 05:01:21 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:44.084 05:01:21 -- setup/devices.sh@47 -- # setup 
output config 00:05:44.084 05:01:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:44.084 05:01:21 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:45.022 05:01:22 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:45.022 05:01:22 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:45.022 05:01:22 -- setup/devices.sh@63 -- # found=1 00:05:45.022 05:01:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.022 05:01:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:45.022 05:01:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.022 05:01:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:45.022 05:01:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.022 05:01:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:45.022 05:01:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.022 05:01:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:45.022 05:01:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.022 05:01:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:45.022 05:01:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.022 05:01:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:45.022 05:01:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.022 05:01:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:45.022 05:01:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.022 05:01:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:45.022 05:01:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.022 05:01:22 -- 
setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:45.022 05:01:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.022 05:01:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:45.022 05:01:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.022 05:01:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:45.278 05:01:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.278 05:01:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:45.278 05:01:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.278 05:01:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:45.278 05:01:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.278 05:01:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:45.278 05:01:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.278 05:01:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:45.278 05:01:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.278 05:01:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:45.278 05:01:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.278 05:01:22 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:45.278 05:01:22 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:45.278 05:01:22 -- setup/devices.sh@68 -- # return 0 00:05:45.278 05:01:22 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:45.278 05:01:22 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:45.278 05:01:22 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:45.278 05:01:22 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:45.278 05:01:22 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:45.278 /dev/nvme0n1: 2 bytes were erased at 
offset 0x00000438 (ext4): 53 ef 00:05:45.278 00:05:45.278 real 0m6.330s 00:05:45.278 user 0m1.500s 00:05:45.278 sys 0m2.394s 00:05:45.278 05:01:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:45.278 05:01:22 -- common/autotest_common.sh@10 -- # set +x 00:05:45.278 ************************************ 00:05:45.278 END TEST nvme_mount 00:05:45.278 ************************************ 00:05:45.278 05:01:22 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:45.278 05:01:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:45.278 05:01:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.278 05:01:22 -- common/autotest_common.sh@10 -- # set +x 00:05:45.535 ************************************ 00:05:45.535 START TEST dm_mount 00:05:45.535 ************************************ 00:05:45.535 05:01:22 -- common/autotest_common.sh@1111 -- # dm_mount 00:05:45.535 05:01:22 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:45.535 05:01:22 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:45.535 05:01:22 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:45.535 05:01:22 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:45.535 05:01:22 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:45.535 05:01:22 -- setup/common.sh@40 -- # local part_no=2 00:05:45.535 05:01:22 -- setup/common.sh@41 -- # local size=1073741824 00:05:45.535 05:01:22 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:45.535 05:01:22 -- setup/common.sh@44 -- # parts=() 00:05:45.535 05:01:22 -- setup/common.sh@44 -- # local parts 00:05:45.535 05:01:22 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:45.535 05:01:22 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:45.535 05:01:22 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:45.535 05:01:22 -- setup/common.sh@46 -- # (( part++ )) 00:05:45.535 05:01:22 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:45.535 05:01:22 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 
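Throughout the verify loops above, each scanned PCI address is compared against the allowed BDF with patterns like `\0\0\0\0\:\8\8\:\0\0\.\0`: every character is backslash-escaped so that `[[ == ]]`, which normally glob-matches its right-hand side, performs an exact string comparison. A small sketch of that idiom (the helper names are mine, not from setup/devices.sh):

```shell
glob_escape() {
    # Prefix every character with a backslash, as in the \0\0\0\0\:... patterns above.
    printf '%s\n' "$1" | sed 's/./\\&/g'
}

pci_allowed="0000:88:00.0"
pattern=$(glob_escape "$pci_allowed")

matches_allowed() {
    # Unquoted RHS would normally glob-match, but with every character
    # escaped this reduces to a literal comparison.
    [[ $1 == $pattern ]]
}
```

This is why only the 0000:88:00.0 line sets `found=1` while the 0000:00:04.x and 0000:80:04.x bridges fall through.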
00:05:45.535 05:01:22 -- setup/common.sh@46 -- # (( part++ )) 00:05:45.535 05:01:22 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:45.536 05:01:22 -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:45.536 05:01:22 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:45.536 05:01:22 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:46.470 Creating new GPT entries in memory. 00:05:46.470 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:46.470 other utilities. 00:05:46.470 05:01:23 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:46.470 05:01:23 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:46.470 05:01:23 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:46.470 05:01:23 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:46.470 05:01:23 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:47.409 Creating new GPT entries in memory. 00:05:47.409 The operation has completed successfully. 00:05:47.409 05:01:24 -- setup/common.sh@57 -- # (( part++ )) 00:05:47.409 05:01:24 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:47.409 05:01:24 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:47.409 05:01:24 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:47.409 05:01:24 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:48.797 The operation has completed successfully. 
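The two `sgdisk --new` arguments above (`1:2048:2099199`, then `2:2099200:4196351`) come from a simple recurrence in common.sh: the first partition starts at sector 2048, each later one starts right after the previous, and every partition spans `size / 512 = 2097152` sectors (1 GiB). The arithmetic can be sketched as:

```shell
# Reconstructs the --new arguments seen in the sgdisk calls above.
size=1073741824          # 1 GiB per partition, as in common.sh@41
part_no=2                # dm_mount carves out two partitions
(( size /= 512 ))        # convert bytes to 512-byte sectors -> 2097152

part_start=0 part_end=0
new_args=()
for (( part = 1; part <= part_no; part++ )); do
    # First partition starts at sector 2048; later ones follow the previous.
    (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
    (( part_end = part_start + size - 1 ))
    new_args+=("--new=$part:$part_start:$part_end")
done

printf '%s\n' "${new_args[@]}"
# prints:
#   --new=1:2048:2099199
#   --new=2:2099200:4196351
```

The nvme_mount run earlier used the same recurrence with `part_no=1`, which is why it issued only the first of these two calls.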
00:05:48.797 05:01:25 -- setup/common.sh@57 -- # (( part++ )) 00:05:48.797 05:01:25 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:48.797 05:01:25 -- setup/common.sh@62 -- # wait 1754897 00:05:48.797 05:01:25 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:48.797 05:01:25 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:48.797 05:01:25 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:48.797 05:01:25 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:48.797 05:01:25 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:48.797 05:01:25 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:48.797 05:01:25 -- setup/devices.sh@161 -- # break 00:05:48.797 05:01:25 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:48.797 05:01:25 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:48.797 05:01:25 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:48.797 05:01:25 -- setup/devices.sh@166 -- # dm=dm-0 00:05:48.797 05:01:25 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:48.797 05:01:25 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:48.797 05:01:25 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:48.797 05:01:25 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:48.797 05:01:25 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:48.797 05:01:25 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:48.797 05:01:25 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:48.797 05:01:25 -- setup/common.sh@72 -- # mount 
/dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:48.797 05:01:25 -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:48.797 05:01:25 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:48.797 05:01:25 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:48.797 05:01:25 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:48.797 05:01:25 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:48.797 05:01:25 -- setup/devices.sh@53 -- # local found=0 00:05:48.797 05:01:25 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:48.797 05:01:25 -- setup/devices.sh@56 -- # : 00:05:48.797 05:01:25 -- setup/devices.sh@59 -- # local pci status 00:05:48.797 05:01:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.797 05:01:25 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:48.797 05:01:25 -- setup/devices.sh@47 -- # setup output config 00:05:48.797 05:01:25 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:48.797 05:01:25 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:49.733 05:01:26 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:49.733 05:01:26 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:49.733 05:01:26 -- setup/devices.sh@63 -- # found=1 00:05:49.733 05:01:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.733 
05:01:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:49.733 05:01:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.733 05:01:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:49.733 05:01:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.733 05:01:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:49.733 05:01:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.733 05:01:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:49.733 05:01:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.733 05:01:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:49.733 05:01:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.733 05:01:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:49.733 05:01:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.733 05:01:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:49.733 05:01:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.733 05:01:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:49.733 05:01:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.733 05:01:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:49.733 05:01:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.733 05:01:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:49.733 05:01:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.733 05:01:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:49.733 05:01:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.733 05:01:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:49.733 05:01:26 -- setup/devices.sh@60 
-- # read -r pci _ _ status 00:05:49.733 05:01:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:49.733 05:01:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.733 05:01:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:49.733 05:01:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.733 05:01:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:49.733 05:01:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.733 05:01:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:49.733 05:01:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.733 05:01:26 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:49.733 05:01:26 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:49.733 05:01:26 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:49.733 05:01:26 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:49.733 05:01:26 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:49.733 05:01:26 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:49.733 05:01:26 -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:49.733 05:01:26 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:49.733 05:01:26 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:49.733 05:01:26 -- setup/devices.sh@50 -- # local mount_point= 00:05:49.733 05:01:26 -- setup/devices.sh@51 -- # local test_file= 00:05:49.733 05:01:26 -- setup/devices.sh@53 -- # local found=0 00:05:49.733 05:01:26 -- setup/devices.sh@55 -- # [[ -n '' ]] 
00:05:49.733 05:01:26 -- setup/devices.sh@59 -- # local pci status 00:05:49.733 05:01:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.733 05:01:26 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:49.733 05:01:26 -- setup/devices.sh@47 -- # setup output config 00:05:49.733 05:01:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:49.733 05:01:26 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:51.108 05:01:27 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:51.108 05:01:27 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:51.108 05:01:27 -- setup/devices.sh@63 -- # found=1 00:05:51.108 05:01:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.108 05:01:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:51.108 05:01:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.108 05:01:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:51.108 05:01:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.108 05:01:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:51.108 05:01:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.108 05:01:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:51.108 05:01:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.108 05:01:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:51.108 05:01:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.108 05:01:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:51.108 05:01:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 
00:05:51.108 05:01:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:51.108 05:01:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.108 05:01:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:51.108 05:01:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.108 05:01:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:51.108 05:01:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.108 05:01:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:51.108 05:01:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.108 05:01:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:51.108 05:01:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.108 05:01:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:51.108 05:01:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.108 05:01:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:51.108 05:01:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.108 05:01:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:51.108 05:01:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.108 05:01:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:51.108 05:01:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.108 05:01:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:51.108 05:01:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.108 05:01:28 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:51.108 05:01:28 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:51.108 05:01:28 -- setup/devices.sh@68 -- # return 0 00:05:51.108 05:01:28 -- setup/devices.sh@187 -- # cleanup_dm 00:05:51.109 05:01:28 -- setup/devices.sh@33 -- # 
mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:51.109 05:01:28 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:51.109 05:01:28 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:51.109 05:01:28 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:51.109 05:01:28 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:51.109 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:51.109 05:01:28 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:51.109 05:01:28 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:51.109 00:05:51.109 real 0m5.610s 00:05:51.109 user 0m0.905s 00:05:51.109 sys 0m1.548s 00:05:51.109 05:01:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:51.109 05:01:28 -- common/autotest_common.sh@10 -- # set +x 00:05:51.109 ************************************ 00:05:51.109 END TEST dm_mount 00:05:51.109 ************************************ 00:05:51.109 05:01:28 -- setup/devices.sh@1 -- # cleanup 00:05:51.109 05:01:28 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:51.109 05:01:28 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:51.109 05:01:28 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:51.109 05:01:28 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:51.109 05:01:28 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:51.109 05:01:28 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:51.367 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:51.367 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:51.367 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:51.367 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:51.367 05:01:28 -- setup/devices.sh@12 -- # cleanup_dm 00:05:51.367 
05:01:28 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:51.367 05:01:28 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:51.367 05:01:28 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:51.367 05:01:28 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:51.367 05:01:28 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:51.367 05:01:28 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:51.367 00:05:51.367 real 0m14.004s 00:05:51.367 user 0m3.131s 00:05:51.367 sys 0m5.026s 00:05:51.367 05:01:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:51.367 05:01:28 -- common/autotest_common.sh@10 -- # set +x 00:05:51.367 ************************************ 00:05:51.367 END TEST devices 00:05:51.367 ************************************ 00:05:51.367 00:05:51.367 real 0m44.210s 00:05:51.367 user 0m12.682s 00:05:51.367 sys 0m19.559s 00:05:51.367 05:01:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:51.367 05:01:28 -- common/autotest_common.sh@10 -- # set +x 00:05:51.367 ************************************ 00:05:51.367 END TEST setup.sh 00:05:51.367 ************************************ 00:05:51.367 05:01:28 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:52.738 Hugepages 00:05:52.738 node hugesize free / total 00:05:52.738 node0 1048576kB 0 / 0 00:05:52.738 node0 2048kB 2048 / 2048 00:05:52.738 node1 1048576kB 0 / 0 00:05:52.738 node1 2048kB 0 / 0 00:05:52.738 00:05:52.738 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:52.738 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:52.738 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:52.738 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:52.738 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:52.738 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:52.738 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:52.738 
I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:52.738 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:52.738 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:52.738 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:52.738 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:52.738 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:52.738 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:52.738 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:52.738 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:52.738 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:52.738 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:52.738 05:01:29 -- spdk/autotest.sh@130 -- # uname -s 00:05:52.738 05:01:29 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:52.738 05:01:29 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:52.738 05:01:29 -- common/autotest_common.sh@1517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:53.670 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:53.670 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:53.670 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:53.670 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:53.670 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:53.670 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:53.670 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:53.670 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:53.670 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:53.670 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:53.928 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:53.928 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:53.928 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:53.928 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:53.928 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:53.928 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:54.861 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:54.861 05:01:32 -- common/autotest_common.sh@1518 
-- # sleep 1 00:05:55.796 05:01:33 -- common/autotest_common.sh@1519 -- # bdfs=() 00:05:55.796 05:01:33 -- common/autotest_common.sh@1519 -- # local bdfs 00:05:55.796 05:01:33 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:55.796 05:01:33 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:55.796 05:01:33 -- common/autotest_common.sh@1499 -- # bdfs=() 00:05:55.796 05:01:33 -- common/autotest_common.sh@1499 -- # local bdfs 00:05:55.796 05:01:33 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:55.796 05:01:33 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:55.796 05:01:33 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:05:56.054 05:01:33 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:05:56.054 05:01:33 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:88:00.0 00:05:56.054 05:01:33 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:56.998 Waiting for block devices as requested 00:05:56.998 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:57.259 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:57.259 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:57.517 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:57.517 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:57.517 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:57.517 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:57.775 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:57.775 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:57.775 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:57.775 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:58.034 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:58.034 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:58.034 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:58.034 
0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:58.291 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:58.291 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:58.291 05:01:35 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:58.291 05:01:35 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:58.291 05:01:35 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 00:05:58.291 05:01:35 -- common/autotest_common.sh@1488 -- # grep 0000:88:00.0/nvme/nvme 00:05:58.291 05:01:35 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:58.291 05:01:35 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:58.291 05:01:35 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:58.291 05:01:35 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:05:58.291 05:01:35 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:58.291 05:01:35 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:58.291 05:01:35 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:58.291 05:01:35 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:58.291 05:01:35 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:58.291 05:01:35 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:05:58.291 05:01:35 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:58.549 05:01:35 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:58.549 05:01:35 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:58.549 05:01:35 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:58.549 05:01:35 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:58.549 05:01:35 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:58.549 05:01:35 -- common/autotest_common.sh@1541 -- # [[ 
0 -eq 0 ]] 00:05:58.549 05:01:35 -- common/autotest_common.sh@1543 -- # continue 00:05:58.549 05:01:35 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:58.549 05:01:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:58.549 05:01:35 -- common/autotest_common.sh@10 -- # set +x 00:05:58.549 05:01:35 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:58.549 05:01:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:58.549 05:01:35 -- common/autotest_common.sh@10 -- # set +x 00:05:58.549 05:01:35 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:59.485 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:59.485 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:59.485 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:59.485 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:59.485 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:59.485 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:59.744 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:59.744 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:59.744 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:59.744 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:59.744 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:59.744 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:59.744 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:59.744 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:59.744 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:59.744 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:06:00.677 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:06:00.677 05:01:37 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:00.677 05:01:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:00.677 05:01:37 -- common/autotest_common.sh@10 -- # set +x 00:06:00.677 05:01:37 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:00.677 05:01:37 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 
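The `nvme id-ctrl` lines in the trace above extract the OACS and UNVMCAP fields with `grep | cut -d: -f2`, then mask OACS bit 3 to check namespace-management support. A runnable sketch of that parsing, using a hypothetical captured snippet instead of real hardware:

```shell
# Hypothetical excerpt of `nvme id-ctrl /dev/nvme0` output; the real
# trace parsed the live command the same way.
id_ctrl='oacs      : 0xf
unvmcap   : 0'

oacs=$(grep oacs <<<"$id_ctrl" | cut -d: -f2)      # -> ' 0xf'
oacs_ns_manage=$(( oacs & 0x8 ))                   # bit 3 = ns management
unvmcap=$(grep unvmcap <<<"$id_ctrl" | cut -d: -f2)
echo "ns-manage=$oacs_ns_manage unvmcap=$(( unvmcap ))"
```

Arithmetic expansion tolerates the leading space `cut` leaves behind, which is why the trace shows `oacs=' 0xf'` feeding directly into `oacs_ns_manage=8`.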
00:06:00.677 05:01:37 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:06:00.677 05:01:37 -- common/autotest_common.sh@1563 -- # bdfs=() 00:06:00.677 05:01:37 -- common/autotest_common.sh@1563 -- # local bdfs 00:06:00.677 05:01:37 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:06:00.677 05:01:37 -- common/autotest_common.sh@1499 -- # bdfs=() 00:06:00.677 05:01:37 -- common/autotest_common.sh@1499 -- # local bdfs 00:06:00.677 05:01:37 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:00.677 05:01:37 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:00.677 05:01:37 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:06:00.936 05:01:37 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:06:00.936 05:01:37 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:88:00.0 00:06:00.936 05:01:37 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:06:00.936 05:01:37 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:06:00.936 05:01:37 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:06:00.936 05:01:37 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:06:00.936 05:01:37 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:06:00.936 05:01:37 -- common/autotest_common.sh@1572 -- # printf '%s\n' 0000:88:00.0 00:06:00.936 05:01:37 -- common/autotest_common.sh@1578 -- # [[ -z 0000:88:00.0 ]] 00:06:00.936 05:01:37 -- common/autotest_common.sh@1583 -- # spdk_tgt_pid=1760073 00:06:00.936 05:01:37 -- common/autotest_common.sh@1582 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:00.936 05:01:37 -- common/autotest_common.sh@1584 -- # waitforlisten 1760073 00:06:00.936 05:01:37 -- common/autotest_common.sh@817 -- # '[' -z 1760073 ']' 00:06:00.936 05:01:37 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.936 05:01:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:00.936 05:01:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.936 05:01:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:00.936 05:01:37 -- common/autotest_common.sh@10 -- # set +x 00:06:00.936 [2024-04-24 05:01:38.033218] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:06:00.936 [2024-04-24 05:01:38.033296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1760073 ] 00:06:00.936 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.936 [2024-04-24 05:01:38.065520] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
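The `waitforlisten` trace above (local `rpc_addr`, `max_retries=100`, "Waiting for process to start up and listen on UNIX domain socket...") is a poll-until-socket-appears loop. A simplified, self-contained sketch of that shape; the real helper also probes the RPC server, and the path and retry count here are hypothetical:

```shell
# Simplified stand-in for waitforlisten: poll for a socket path with a
# retry cap (real helper also issues RPC probes).
wait_for_sock() {
  local sock=$1 retries=${2:-100} i
  for ((i = 0; i < retries; i++)); do
    [[ -e $sock ]] && return 0
    sleep 0.1
  done
  return 1
}

# Simulate a target that starts listening shortly after launch.
( sleep 0.2; : > /tmp/fake_spdk.sock ) &
wait_for_sock /tmp/fake_spdk.sock && echo "listening"
```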
00:06:00.936 [2024-04-24 05:01:38.095432] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.936 [2024-04-24 05:01:38.182417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.194 05:01:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:01.194 05:01:38 -- common/autotest_common.sh@850 -- # return 0 00:06:01.194 05:01:38 -- common/autotest_common.sh@1586 -- # bdf_id=0 00:06:01.194 05:01:38 -- common/autotest_common.sh@1587 -- # for bdf in "${bdfs[@]}" 00:06:01.194 05:01:38 -- common/autotest_common.sh@1588 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:06:04.475 nvme0n1 00:06:04.475 05:01:41 -- common/autotest_common.sh@1590 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:06:04.475 [2024-04-24 05:01:41.733222] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:06:04.475 [2024-04-24 05:01:41.733271] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:06:04.475 request: 00:06:04.475 { 00:06:04.475 "nvme_ctrlr_name": "nvme0", 00:06:04.475 "password": "test", 00:06:04.475 "method": "bdev_nvme_opal_revert", 00:06:04.475 "req_id": 1 00:06:04.475 } 00:06:04.475 Got JSON-RPC error response 00:06:04.475 response: 00:06:04.475 { 00:06:04.475 "code": -32603, 00:06:04.475 "message": "Internal error" 00:06:04.475 } 00:06:04.734 05:01:41 -- common/autotest_common.sh@1590 -- # true 00:06:04.734 05:01:41 -- common/autotest_common.sh@1591 -- # (( ++bdf_id )) 00:06:04.734 05:01:41 -- common/autotest_common.sh@1594 -- # killprocess 1760073 00:06:04.734 05:01:41 -- common/autotest_common.sh@936 -- # '[' -z 1760073 ']' 00:06:04.734 05:01:41 -- common/autotest_common.sh@940 -- # kill -0 1760073 00:06:04.734 05:01:41 -- common/autotest_common.sh@941 -- # uname 00:06:04.734 05:01:41 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:04.734 05:01:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1760073 00:06:04.734 05:01:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:04.734 05:01:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:04.734 05:01:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1760073' 00:06:04.734 killing process with pid 1760073 00:06:04.734 05:01:41 -- common/autotest_common.sh@955 -- # kill 1760073 00:06:04.734 05:01:41 -- common/autotest_common.sh@960 -- # wait 1760073 00:06:06.638 05:01:43 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:06:06.638 05:01:43 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:06:06.638 05:01:43 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:06.638 05:01:43 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:06.638 05:01:43 -- spdk/autotest.sh@162 -- # timing_enter lib 00:06:06.638 05:01:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:06.638 05:01:43 -- common/autotest_common.sh@10 -- # set +x 00:06:06.638 05:01:43 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:06.638 05:01:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:06.638 05:01:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:06.638 05:01:43 -- common/autotest_common.sh@10 -- # set +x 00:06:06.638 ************************************ 00:06:06.638 START TEST env 00:06:06.638 ************************************ 00:06:06.638 05:01:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:06.638 * Looking for test storage... 
00:06:06.638 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:06:06.638 05:01:43 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:06.638 05:01:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:06.638 05:01:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:06.638 05:01:43 -- common/autotest_common.sh@10 -- # set +x 00:06:06.638 ************************************ 00:06:06.638 START TEST env_memory 00:06:06.638 ************************************ 00:06:06.638 05:01:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:06.638 00:06:06.638 00:06:06.638 CUnit - A unit testing framework for C - Version 2.1-3 00:06:06.638 http://cunit.sourceforge.net/ 00:06:06.638 00:06:06.638 00:06:06.638 Suite: memory 00:06:06.638 Test: alloc and free memory map ...[2024-04-24 05:01:43.807795] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:06.638 passed 00:06:06.638 Test: mem map translation ...[2024-04-24 05:01:43.829999] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:06.638 [2024-04-24 05:01:43.830024] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:06.638 [2024-04-24 05:01:43.830080] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:06.638 [2024-04-24 05:01:43.830093] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: 
*ERROR*: could not get 0xffffffe00000 map 00:06:06.638 passed 00:06:06.638 Test: mem map registration ...[2024-04-24 05:01:43.872372] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:06.638 [2024-04-24 05:01:43.872391] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:06.638 passed 00:06:06.897 Test: mem map adjacent registrations ...passed 00:06:06.897 00:06:06.897 Run Summary: Type Total Ran Passed Failed Inactive 00:06:06.897 suites 1 1 n/a 0 0 00:06:06.897 tests 4 4 4 0 0 00:06:06.897 asserts 152 152 152 0 n/a 00:06:06.897 00:06:06.897 Elapsed time = 0.149 seconds 00:06:06.897 00:06:06.897 real 0m0.157s 00:06:06.897 user 0m0.150s 00:06:06.897 sys 0m0.006s 00:06:06.897 05:01:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:06.897 05:01:43 -- common/autotest_common.sh@10 -- # set +x 00:06:06.897 ************************************ 00:06:06.897 END TEST env_memory 00:06:06.897 ************************************ 00:06:06.897 05:01:43 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:06.897 05:01:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:06.897 05:01:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:06.897 05:01:43 -- common/autotest_common.sh@10 -- # set +x 00:06:06.897 ************************************ 00:06:06.897 START TEST env_vtophys 00:06:06.897 ************************************ 00:06:06.897 05:01:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:06.897 EAL: lib.eal log level changed from notice to debug 00:06:06.897 EAL: Detected lcore 0 as core 0 on socket 0 00:06:06.897 EAL: Detected lcore 1 as core 1 on socket 0 
00:06:06.897 EAL: Detected lcore 2 as core 2 on socket 0 00:06:06.897 EAL: Detected lcore 3 as core 3 on socket 0 00:06:06.897 EAL: Detected lcore 4 as core 4 on socket 0 00:06:06.897 EAL: Detected lcore 5 as core 5 on socket 0 00:06:06.897 EAL: Detected lcore 6 as core 8 on socket 0 00:06:06.897 EAL: Detected lcore 7 as core 9 on socket 0 00:06:06.897 EAL: Detected lcore 8 as core 10 on socket 0 00:06:06.897 EAL: Detected lcore 9 as core 11 on socket 0 00:06:06.897 EAL: Detected lcore 10 as core 12 on socket 0 00:06:06.897 EAL: Detected lcore 11 as core 13 on socket 0 00:06:06.897 EAL: Detected lcore 12 as core 0 on socket 1 00:06:06.897 EAL: Detected lcore 13 as core 1 on socket 1 00:06:06.897 EAL: Detected lcore 14 as core 2 on socket 1 00:06:06.897 EAL: Detected lcore 15 as core 3 on socket 1 00:06:06.897 EAL: Detected lcore 16 as core 4 on socket 1 00:06:06.897 EAL: Detected lcore 17 as core 5 on socket 1 00:06:06.897 EAL: Detected lcore 18 as core 8 on socket 1 00:06:06.897 EAL: Detected lcore 19 as core 9 on socket 1 00:06:06.897 EAL: Detected lcore 20 as core 10 on socket 1 00:06:06.897 EAL: Detected lcore 21 as core 11 on socket 1 00:06:06.897 EAL: Detected lcore 22 as core 12 on socket 1 00:06:06.897 EAL: Detected lcore 23 as core 13 on socket 1 00:06:06.897 EAL: Detected lcore 24 as core 0 on socket 0 00:06:06.897 EAL: Detected lcore 25 as core 1 on socket 0 00:06:06.897 EAL: Detected lcore 26 as core 2 on socket 0 00:06:06.897 EAL: Detected lcore 27 as core 3 on socket 0 00:06:06.897 EAL: Detected lcore 28 as core 4 on socket 0 00:06:06.897 EAL: Detected lcore 29 as core 5 on socket 0 00:06:06.897 EAL: Detected lcore 30 as core 8 on socket 0 00:06:06.897 EAL: Detected lcore 31 as core 9 on socket 0 00:06:06.897 EAL: Detected lcore 32 as core 10 on socket 0 00:06:06.897 EAL: Detected lcore 33 as core 11 on socket 0 00:06:06.897 EAL: Detected lcore 34 as core 12 on socket 0 00:06:06.897 EAL: Detected lcore 35 as core 13 on socket 0 00:06:06.897 EAL: 
Detected lcore 36 as core 0 on socket 1 00:06:06.897 EAL: Detected lcore 37 as core 1 on socket 1 00:06:06.897 EAL: Detected lcore 38 as core 2 on socket 1 00:06:06.897 EAL: Detected lcore 39 as core 3 on socket 1 00:06:06.897 EAL: Detected lcore 40 as core 4 on socket 1 00:06:06.897 EAL: Detected lcore 41 as core 5 on socket 1 00:06:06.897 EAL: Detected lcore 42 as core 8 on socket 1 00:06:06.897 EAL: Detected lcore 43 as core 9 on socket 1 00:06:06.897 EAL: Detected lcore 44 as core 10 on socket 1 00:06:06.897 EAL: Detected lcore 45 as core 11 on socket 1 00:06:06.897 EAL: Detected lcore 46 as core 12 on socket 1 00:06:06.897 EAL: Detected lcore 47 as core 13 on socket 1 00:06:06.897 EAL: Maximum logical cores by configuration: 128 00:06:06.897 EAL: Detected CPU lcores: 48 00:06:06.897 EAL: Detected NUMA nodes: 2 00:06:06.897 EAL: Checking presence of .so 'librte_eal.so.24.2' 00:06:06.897 EAL: Detected shared linkage of DPDK 00:06:06.897 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24.2 00:06:06.897 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24.2 00:06:06.897 EAL: Registered [vdev] bus. 
00:06:06.897 EAL: bus.vdev log level changed from disabled to notice 00:06:06.897 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24.2 00:06:06.898 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24.2 00:06:06.898 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:06.898 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:06.898 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:06:06.898 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:06:06.898 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:06:06.898 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:06:06.898 EAL: No shared files mode enabled, IPC will be disabled 00:06:06.898 EAL: No shared files mode enabled, IPC is disabled 00:06:06.898 EAL: Bus pci wants IOVA as 'DC' 00:06:06.898 EAL: Bus vdev wants IOVA as 'DC' 00:06:06.898 EAL: Buses did not request a specific IOVA mode. 00:06:06.898 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:06.898 EAL: Selected IOVA mode 'VA' 00:06:06.898 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.898 EAL: Probing VFIO support... 00:06:06.898 EAL: IOMMU type 1 (Type 1) is supported 00:06:06.898 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:06.898 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:06.898 EAL: VFIO support initialized 00:06:06.898 EAL: Ask a virtual area of 0x2e000 bytes 00:06:06.898 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:06.898 EAL: Setting up physically contiguous memory... 
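The lines above show the EAL's IOVA decision: neither bus requested a specific mode ('DC'), the IOMMU was available, so 'VA' was selected. When triaging a run it can help to pull that decision out of a captured log; the snippet below is a minimal sketch of that, using an inline sample that mirrors the lines above rather than a real log file.

```shell
# Extract the IOVA mode the EAL settled on from a captured log.
# The here-string is a sample mirroring the EAL lines above.
log="EAL: Buses did not request a specific IOVA mode.
EAL: IOMMU is available, selecting IOVA as VA mode.
EAL: Selected IOVA mode 'VA'"

iova_mode=$(printf '%s\n' "$log" | sed -n "s/.*Selected IOVA mode '\([A-Z]*\)'.*/\1/p")
echo "$iova_mode"   # VA
```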
00:06:06.898 EAL: Setting maximum number of open files to 524288 00:06:06.898 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:06.898 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:06.898 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:06.898 EAL: Ask a virtual area of 0x61000 bytes 00:06:06.898 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:06.898 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:06.898 EAL: Ask a virtual area of 0x400000000 bytes 00:06:06.898 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:06.898 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:06.898 EAL: Ask a virtual area of 0x61000 bytes 00:06:06.898 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:06.898 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:06.898 EAL: Ask a virtual area of 0x400000000 bytes 00:06:06.898 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:06.898 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:06.898 EAL: Ask a virtual area of 0x61000 bytes 00:06:06.898 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:06.898 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:06.898 EAL: Ask a virtual area of 0x400000000 bytes 00:06:06.898 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:06.898 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:06.898 EAL: Ask a virtual area of 0x61000 bytes 00:06:06.898 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:06.898 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:06.898 EAL: Ask a virtual area of 0x400000000 bytes 00:06:06.898 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:06.898 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:06.898 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:06:06.898 EAL: Ask a virtual area of 0x61000 bytes 00:06:06.898 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:06.898 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:06.898 EAL: Ask a virtual area of 0x400000000 bytes 00:06:06.898 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:06.898 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:06.898 EAL: Ask a virtual area of 0x61000 bytes 00:06:06.898 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:06.898 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:06.898 EAL: Ask a virtual area of 0x400000000 bytes 00:06:06.898 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:06.898 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:06.898 EAL: Ask a virtual area of 0x61000 bytes 00:06:06.898 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:06.898 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:06.898 EAL: Ask a virtual area of 0x400000000 bytes 00:06:06.898 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:06.898 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:06.898 EAL: Ask a virtual area of 0x61000 bytes 00:06:06.898 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:06.898 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:06.898 EAL: Ask a virtual area of 0x400000000 bytes 00:06:06.898 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:06:06.898 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:06.898 EAL: Hugepages will be freed exactly as allocated. 
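The memseg bookkeeping above is internally consistent: each of the 4 segment lists per socket holds n_segs:8192 pages of hugepage_sz:2097152 (2 MiB, shown as page size 0x800kB), which is exactly the 0x400000000 bytes of virtual address space reserved per list. A quick arithmetic check:

```shell
# Verify that 8192 segments x 2 MiB pages equals the 0x400000000-byte
# VA reservation reported for each memseg list above.
n_segs=8192
hugepage_sz=2097152                     # "hugepage_sz:2097152" in the log
list_bytes=$((n_segs * hugepage_sz))
hex=$(printf '0x%x' "$list_bytes")
echo "$hex"   # 0x400000000
```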
00:06:06.898 EAL: No shared files mode enabled, IPC is disabled 00:06:06.898 EAL: No shared files mode enabled, IPC is disabled 00:06:06.898 EAL: TSC frequency is ~2700000 KHz 00:06:06.898 EAL: Main lcore 0 is ready (tid=7f7699e4ea00;cpuset=[0]) 00:06:06.898 EAL: Trying to obtain current memory policy. 00:06:06.898 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:06.898 EAL: Restoring previous memory policy: 0 00:06:06.898 EAL: request: mp_malloc_sync 00:06:06.898 EAL: No shared files mode enabled, IPC is disabled 00:06:06.898 EAL: Heap on socket 0 was expanded by 2MB 00:06:06.898 EAL: No shared files mode enabled, IPC is disabled 00:06:06.898 EAL: No shared files mode enabled, IPC is disabled 00:06:06.898 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:06.898 EAL: Mem event callback 'spdk:(nil)' registered 00:06:06.898 00:06:06.898 00:06:06.898 CUnit - A unit testing framework for C - Version 2.1-3 00:06:06.898 http://cunit.sourceforge.net/ 00:06:06.898 00:06:06.898 00:06:06.898 Suite: components_suite 00:06:06.898 Test: vtophys_malloc_test ...passed 00:06:06.898 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:06.898 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:06.898 EAL: Restoring previous memory policy: 4 00:06:06.898 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.898 EAL: request: mp_malloc_sync 00:06:06.898 EAL: No shared files mode enabled, IPC is disabled 00:06:06.898 EAL: Heap on socket 0 was expanded by 4MB 00:06:06.898 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.898 EAL: request: mp_malloc_sync 00:06:06.898 EAL: No shared files mode enabled, IPC is disabled 00:06:06.898 EAL: Heap on socket 0 was shrunk by 4MB 00:06:06.898 EAL: Trying to obtain current memory policy. 
00:06:06.898 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:06.898 EAL: Restoring previous memory policy: 4 00:06:06.898 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.898 EAL: request: mp_malloc_sync 00:06:06.898 EAL: No shared files mode enabled, IPC is disabled 00:06:06.898 EAL: Heap on socket 0 was expanded by 6MB 00:06:06.898 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.898 EAL: request: mp_malloc_sync 00:06:06.898 EAL: No shared files mode enabled, IPC is disabled 00:06:06.898 EAL: Heap on socket 0 was shrunk by 6MB 00:06:06.898 EAL: Trying to obtain current memory policy. 00:06:06.898 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:06.898 EAL: Restoring previous memory policy: 4 00:06:06.898 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.898 EAL: request: mp_malloc_sync 00:06:06.898 EAL: No shared files mode enabled, IPC is disabled 00:06:06.898 EAL: Heap on socket 0 was expanded by 10MB 00:06:06.898 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.898 EAL: request: mp_malloc_sync 00:06:06.898 EAL: No shared files mode enabled, IPC is disabled 00:06:06.898 EAL: Heap on socket 0 was shrunk by 10MB 00:06:06.898 EAL: Trying to obtain current memory policy. 00:06:06.898 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:06.898 EAL: Restoring previous memory policy: 4 00:06:06.898 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.898 EAL: request: mp_malloc_sync 00:06:06.898 EAL: No shared files mode enabled, IPC is disabled 00:06:06.898 EAL: Heap on socket 0 was expanded by 18MB 00:06:06.898 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.898 EAL: request: mp_malloc_sync 00:06:06.898 EAL: No shared files mode enabled, IPC is disabled 00:06:06.898 EAL: Heap on socket 0 was shrunk by 18MB 00:06:06.898 EAL: Trying to obtain current memory policy. 
00:06:06.898 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:06.898 EAL: Restoring previous memory policy: 4 00:06:06.898 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.898 EAL: request: mp_malloc_sync 00:06:06.898 EAL: No shared files mode enabled, IPC is disabled 00:06:06.898 EAL: Heap on socket 0 was expanded by 34MB 00:06:06.898 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.899 EAL: request: mp_malloc_sync 00:06:06.899 EAL: No shared files mode enabled, IPC is disabled 00:06:06.899 EAL: Heap on socket 0 was shrunk by 34MB 00:06:06.899 EAL: Trying to obtain current memory policy. 00:06:06.899 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:06.899 EAL: Restoring previous memory policy: 4 00:06:06.899 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.899 EAL: request: mp_malloc_sync 00:06:06.899 EAL: No shared files mode enabled, IPC is disabled 00:06:06.899 EAL: Heap on socket 0 was expanded by 66MB 00:06:07.156 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.156 EAL: request: mp_malloc_sync 00:06:07.156 EAL: No shared files mode enabled, IPC is disabled 00:06:07.157 EAL: Heap on socket 0 was shrunk by 66MB 00:06:07.157 EAL: Trying to obtain current memory policy. 00:06:07.157 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.157 EAL: Restoring previous memory policy: 4 00:06:07.157 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.157 EAL: request: mp_malloc_sync 00:06:07.157 EAL: No shared files mode enabled, IPC is disabled 00:06:07.157 EAL: Heap on socket 0 was expanded by 130MB 00:06:07.157 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.157 EAL: request: mp_malloc_sync 00:06:07.157 EAL: No shared files mode enabled, IPC is disabled 00:06:07.157 EAL: Heap on socket 0 was shrunk by 130MB 00:06:07.157 EAL: Trying to obtain current memory policy. 
00:06:07.157 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.157 EAL: Restoring previous memory policy: 4 00:06:07.157 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.157 EAL: request: mp_malloc_sync 00:06:07.157 EAL: No shared files mode enabled, IPC is disabled 00:06:07.157 EAL: Heap on socket 0 was expanded by 258MB 00:06:07.157 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.414 EAL: request: mp_malloc_sync 00:06:07.414 EAL: No shared files mode enabled, IPC is disabled 00:06:07.414 EAL: Heap on socket 0 was shrunk by 258MB 00:06:07.414 EAL: Trying to obtain current memory policy. 00:06:07.414 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.414 EAL: Restoring previous memory policy: 4 00:06:07.414 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.414 EAL: request: mp_malloc_sync 00:06:07.414 EAL: No shared files mode enabled, IPC is disabled 00:06:07.414 EAL: Heap on socket 0 was expanded by 514MB 00:06:07.672 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.672 EAL: request: mp_malloc_sync 00:06:07.672 EAL: No shared files mode enabled, IPC is disabled 00:06:07.672 EAL: Heap on socket 0 was shrunk by 514MB 00:06:07.672 EAL: Trying to obtain current memory policy. 
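The heap sizes reported by the expand/shrink cycles in this test (4MB, 6MB, 10MB, 18MB, 34MB, 66MB, 130MB, 258MB, 514MB, 1026MB) follow a (2^n + 2) MB progression, which can be reproduced directly:

```shell
# Regenerate the allocation-size sequence seen in the
# vtophys_spdk_malloc_test expand/shrink messages: (2^n + 2) MB.
sizes=""
for n in $(seq 1 10); do
  sizes+="$(( (1 << n) + 2 ))MB "
done
echo "$sizes"   # 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB
```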
00:06:07.672 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.930 EAL: Restoring previous memory policy: 4 00:06:07.930 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.930 EAL: request: mp_malloc_sync 00:06:07.930 EAL: No shared files mode enabled, IPC is disabled 00:06:07.930 EAL: Heap on socket 0 was expanded by 1026MB 00:06:08.188 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.446 EAL: request: mp_malloc_sync 00:06:08.446 EAL: No shared files mode enabled, IPC is disabled 00:06:08.446 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:08.446 passed 00:06:08.446 00:06:08.446 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.446 suites 1 1 n/a 0 0 00:06:08.446 tests 2 2 2 0 0 00:06:08.446 asserts 497 497 497 0 n/a 00:06:08.446 00:06:08.446 Elapsed time = 1.387 seconds 00:06:08.446 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.446 EAL: request: mp_malloc_sync 00:06:08.446 EAL: No shared files mode enabled, IPC is disabled 00:06:08.446 EAL: Heap on socket 0 was shrunk by 2MB 00:06:08.446 EAL: No shared files mode enabled, IPC is disabled 00:06:08.446 EAL: No shared files mode enabled, IPC is disabled 00:06:08.446 EAL: No shared files mode enabled, IPC is disabled 00:06:08.446 00:06:08.446 real 0m1.511s 00:06:08.446 user 0m0.859s 00:06:08.446 sys 0m0.611s 00:06:08.446 05:01:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:08.446 05:01:45 -- common/autotest_common.sh@10 -- # set +x 00:06:08.446 ************************************ 00:06:08.446 END TEST env_vtophys 00:06:08.446 ************************************ 00:06:08.446 05:01:45 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:08.446 05:01:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:08.446 05:01:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.446 05:01:45 -- common/autotest_common.sh@10 -- # set +x 00:06:08.446 ************************************ 00:06:08.446 
START TEST env_pci 00:06:08.446 ************************************ 00:06:08.446 05:01:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:08.446 00:06:08.446 00:06:08.446 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.446 http://cunit.sourceforge.net/ 00:06:08.446 00:06:08.446 00:06:08.446 Suite: pci 00:06:08.446 Test: pci_hook ...[2024-04-24 05:01:45.671482] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1761042 has claimed it 00:06:08.446 EAL: Cannot find device (10000:00:01.0) 00:06:08.446 EAL: Failed to attach device on primary process 00:06:08.446 passed 00:06:08.446 00:06:08.446 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.446 suites 1 1 n/a 0 0 00:06:08.446 tests 1 1 1 0 0 00:06:08.446 asserts 25 25 25 0 n/a 00:06:08.446 00:06:08.446 Elapsed time = 0.018 seconds 00:06:08.446 00:06:08.446 real 0m0.029s 00:06:08.446 user 0m0.007s 00:06:08.446 sys 0m0.021s 00:06:08.446 05:01:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:08.446 05:01:45 -- common/autotest_common.sh@10 -- # set +x 00:06:08.446 ************************************ 00:06:08.446 END TEST env_pci 00:06:08.446 ************************************ 00:06:08.446 05:01:45 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:08.446 05:01:45 -- env/env.sh@15 -- # uname 00:06:08.446 05:01:45 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:08.446 05:01:45 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:08.446 05:01:45 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:08.446 05:01:45 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:06:08.446 05:01:45 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:06:08.446 05:01:45 -- common/autotest_common.sh@10 -- # set +x 00:06:08.705 ************************************ 00:06:08.705 START TEST env_dpdk_post_init 00:06:08.705 ************************************ 00:06:08.705 05:01:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:08.705 EAL: Detected CPU lcores: 48 00:06:08.705 EAL: Detected NUMA nodes: 2 00:06:08.705 EAL: Detected shared linkage of DPDK 00:06:08.705 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:08.705 EAL: Selected IOVA mode 'VA' 00:06:08.705 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.705 EAL: VFIO support initialized 00:06:08.705 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:08.705 EAL: Using IOMMU type 1 (Type 1) 00:06:08.705 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:06:08.705 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:06:08.705 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:06:08.705 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:06:08.705 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:06:08.963 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:06:08.963 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:06:08.963 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:06:08.963 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:06:08.963 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:06:08.963 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:06:08.963 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:06:08.963 EAL: Probe PCI 
driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:06:08.963 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:06:08.963 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:06:08.963 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:06:09.894 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:06:13.205 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:06:13.205 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:06:13.205 Starting DPDK initialization... 00:06:13.205 Starting SPDK post initialization... 00:06:13.205 SPDK NVMe probe 00:06:13.205 Attaching to 0000:88:00.0 00:06:13.205 Attached to 0000:88:00.0 00:06:13.205 Cleaning up... 00:06:13.205 00:06:13.205 real 0m4.387s 00:06:13.205 user 0m3.239s 00:06:13.205 sys 0m0.203s 00:06:13.205 05:01:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:13.205 05:01:50 -- common/autotest_common.sh@10 -- # set +x 00:06:13.205 ************************************ 00:06:13.205 END TEST env_dpdk_post_init 00:06:13.205 ************************************ 00:06:13.205 05:01:50 -- env/env.sh@26 -- # uname 00:06:13.205 05:01:50 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:13.205 05:01:50 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:13.205 05:01:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:13.205 05:01:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:13.205 05:01:50 -- common/autotest_common.sh@10 -- # set +x 00:06:13.205 ************************************ 00:06:13.205 START TEST env_mem_callbacks 00:06:13.205 ************************************ 00:06:13.205 05:01:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:13.205 EAL: Detected CPU lcores: 48 
00:06:13.205 EAL: Detected NUMA nodes: 2 00:06:13.205 EAL: Detected shared linkage of DPDK 00:06:13.205 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:13.205 EAL: Selected IOVA mode 'VA' 00:06:13.205 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.205 EAL: VFIO support initialized 00:06:13.205 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:13.205 00:06:13.205 00:06:13.205 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.205 http://cunit.sourceforge.net/ 00:06:13.205 00:06:13.205 00:06:13.205 Suite: memory 00:06:13.205 Test: test ... 00:06:13.205 register 0x200000200000 2097152 00:06:13.205 malloc 3145728 00:06:13.205 register 0x200000400000 4194304 00:06:13.205 buf 0x200000500000 len 3145728 PASSED 00:06:13.205 malloc 64 00:06:13.205 buf 0x2000004fff40 len 64 PASSED 00:06:13.205 malloc 4194304 00:06:13.205 register 0x200000800000 6291456 00:06:13.205 buf 0x200000a00000 len 4194304 PASSED 00:06:13.205 free 0x200000500000 3145728 00:06:13.205 free 0x2000004fff40 64 00:06:13.205 unregister 0x200000400000 4194304 PASSED 00:06:13.205 free 0x200000a00000 4194304 00:06:13.205 unregister 0x200000800000 6291456 PASSED 00:06:13.205 malloc 8388608 00:06:13.205 register 0x200000400000 10485760 00:06:13.205 buf 0x200000600000 len 8388608 PASSED 00:06:13.205 free 0x200000600000 8388608 00:06:13.205 unregister 0x200000400000 10485760 PASSED 00:06:13.205 passed 00:06:13.205 00:06:13.205 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.205 suites 1 1 n/a 0 0 00:06:13.205 tests 1 1 1 0 0 00:06:13.205 asserts 15 15 15 0 n/a 00:06:13.205 00:06:13.205 Elapsed time = 0.005 seconds 00:06:13.205 00:06:13.205 real 0m0.049s 00:06:13.205 user 0m0.014s 00:06:13.205 sys 0m0.035s 00:06:13.205 05:01:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:13.205 05:01:50 -- common/autotest_common.sh@10 -- # set +x 00:06:13.205 ************************************ 00:06:13.205 END TEST env_mem_callbacks 00:06:13.205 
************************************ 00:06:13.205 00:06:13.205 real 0m6.751s 00:06:13.205 user 0m4.489s 00:06:13.205 sys 0m1.243s 00:06:13.205 05:01:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:13.205 05:01:50 -- common/autotest_common.sh@10 -- # set +x 00:06:13.205 ************************************ 00:06:13.205 END TEST env 00:06:13.205 ************************************ 00:06:13.205 05:01:50 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:13.205 05:01:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:13.205 05:01:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:13.205 05:01:50 -- common/autotest_common.sh@10 -- # set +x 00:06:13.480 ************************************ 00:06:13.480 START TEST rpc 00:06:13.480 ************************************ 00:06:13.480 05:01:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:13.480 * Looking for test storage... 00:06:13.480 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:13.480 05:01:50 -- rpc/rpc.sh@65 -- # spdk_pid=1761789 00:06:13.480 05:01:50 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:13.480 05:01:50 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:13.480 05:01:50 -- rpc/rpc.sh@67 -- # waitforlisten 1761789 00:06:13.480 05:01:50 -- common/autotest_common.sh@817 -- # '[' -z 1761789 ']' 00:06:13.480 05:01:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.480 05:01:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:13.480 05:01:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:13.480 05:01:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:13.480 05:01:50 -- common/autotest_common.sh@10 -- # set +x 00:06:13.480 [2024-04-24 05:01:50.602408] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:06:13.481 [2024-04-24 05:01:50.602506] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1761789 ] 00:06:13.481 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.481 [2024-04-24 05:01:50.633370] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:13.481 [2024-04-24 05:01:50.660036] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.481 [2024-04-24 05:01:50.744426] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:13.481 [2024-04-24 05:01:50.744484] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1761789' to capture a snapshot of events at runtime. 00:06:13.481 [2024-04-24 05:01:50.744508] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:13.481 [2024-04-24 05:01:50.744518] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:13.481 [2024-04-24 05:01:50.744528] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1761789 for offline analysis/debug. 
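The rpc test above launches spdk_tgt and then blocks on "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...". The real helper is waitforlisten in autotest_common.sh; the function below is only a simplified sketch of that wait-for-socket pattern, with an illustrative retry count.

```shell
# Minimal sketch of a wait-for-RPC-socket loop (the actual harness uses
# waitforlisten from autotest_common.sh; retry count here is illustrative).
wait_for_sock() {
  local sock=$1 retries=${2:-100}
  while (( retries-- > 0 )); do
    [ -S "$sock" ] && return 0   # socket exists: target is listening
    sleep 0.1
  done
  return 1                       # gave up waiting
}

# Usage (path taken from the log): wait_for_sock /var/tmp/spdk.sock && echo ready
```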
00:06:13.481 [2024-04-24 05:01:50.744571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.739 05:01:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:13.739 05:01:51 -- common/autotest_common.sh@850 -- # return 0 00:06:13.739 05:01:51 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:13.739 05:01:51 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:13.739 05:01:51 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:13.739 05:01:51 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:13.739 05:01:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:13.739 05:01:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:13.739 05:01:51 -- common/autotest_common.sh@10 -- # set +x 00:06:13.997 ************************************ 00:06:13.997 START TEST rpc_integrity 00:06:13.997 ************************************ 00:06:13.997 05:01:51 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:06:13.997 05:01:51 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:13.997 05:01:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:13.997 05:01:51 -- common/autotest_common.sh@10 -- # set +x 00:06:13.997 05:01:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:13.997 05:01:51 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:13.997 05:01:51 -- rpc/rpc.sh@13 -- # jq length 00:06:13.997 05:01:51 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 
00:06:13.997 05:01:51 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:13.997 05:01:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:13.997 05:01:51 -- common/autotest_common.sh@10 -- # set +x 00:06:13.997 05:01:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:13.997 05:01:51 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:13.997 05:01:51 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:13.997 05:01:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:13.997 05:01:51 -- common/autotest_common.sh@10 -- # set +x 00:06:13.997 05:01:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:13.997 05:01:51 -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:13.997 { 00:06:13.997 "name": "Malloc0", 00:06:13.997 "aliases": [ 00:06:13.997 "7c19edc5-d12f-445d-8f63-ec590e64422a" 00:06:13.997 ], 00:06:13.997 "product_name": "Malloc disk", 00:06:13.997 "block_size": 512, 00:06:13.997 "num_blocks": 16384, 00:06:13.997 "uuid": "7c19edc5-d12f-445d-8f63-ec590e64422a", 00:06:13.997 "assigned_rate_limits": { 00:06:13.997 "rw_ios_per_sec": 0, 00:06:13.997 "rw_mbytes_per_sec": 0, 00:06:13.997 "r_mbytes_per_sec": 0, 00:06:13.997 "w_mbytes_per_sec": 0 00:06:13.997 }, 00:06:13.997 "claimed": false, 00:06:13.997 "zoned": false, 00:06:13.997 "supported_io_types": { 00:06:13.997 "read": true, 00:06:13.997 "write": true, 00:06:13.997 "unmap": true, 00:06:13.997 "write_zeroes": true, 00:06:13.997 "flush": true, 00:06:13.997 "reset": true, 00:06:13.997 "compare": false, 00:06:13.997 "compare_and_write": false, 00:06:13.997 "abort": true, 00:06:13.997 "nvme_admin": false, 00:06:13.997 "nvme_io": false 00:06:13.997 }, 00:06:13.997 "memory_domains": [ 00:06:13.997 { 00:06:13.997 "dma_device_id": "system", 00:06:13.997 "dma_device_type": 1 00:06:13.997 }, 00:06:13.997 { 00:06:13.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:13.997 "dma_device_type": 2 00:06:13.997 } 00:06:13.997 ], 00:06:13.997 "driver_specific": {} 00:06:13.997 } 00:06:13.997 ]' 00:06:13.997 
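The rpc_integrity checks above and below validate RPC replies by piping the JSON through `jq length` and comparing the count ('[' 0 == 0 ']', '[' 1 == 1 ']'). A minimal sketch of that pattern, where the one-element list is a trimmed-down stand-in for the bdev_get_bdevs output above and jq is assumed to be installed:

```shell
# Count entries in an RPC JSON reply the way rpc.sh does with "jq length".
# The list is a stand-in for the single-Malloc0 bdev_get_bdevs output above.
bdevs='[{"name": "Malloc0"}]'
count=$(printf '%s' "$bdevs" | jq length)
echo "$count"   # 1
```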
05:01:51 -- rpc/rpc.sh@17 -- # jq length 00:06:13.997 05:01:51 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:13.997 05:01:51 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:13.997 05:01:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:13.997 05:01:51 -- common/autotest_common.sh@10 -- # set +x 00:06:13.997 [2024-04-24 05:01:51.217173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:13.997 [2024-04-24 05:01:51.217222] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:13.997 [2024-04-24 05:01:51.217246] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21ab3c0 00:06:13.997 [2024-04-24 05:01:51.217261] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:13.997 [2024-04-24 05:01:51.218801] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:13.997 [2024-04-24 05:01:51.218827] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:13.997 Passthru0 00:06:13.997 05:01:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:13.997 05:01:51 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:13.997 05:01:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:13.997 05:01:51 -- common/autotest_common.sh@10 -- # set +x 00:06:13.997 05:01:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:13.997 05:01:51 -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:13.997 { 00:06:13.997 "name": "Malloc0", 00:06:13.997 "aliases": [ 00:06:13.997 "7c19edc5-d12f-445d-8f63-ec590e64422a" 00:06:13.997 ], 00:06:13.997 "product_name": "Malloc disk", 00:06:13.997 "block_size": 512, 00:06:13.997 "num_blocks": 16384, 00:06:13.997 "uuid": "7c19edc5-d12f-445d-8f63-ec590e64422a", 00:06:13.997 "assigned_rate_limits": { 00:06:13.997 "rw_ios_per_sec": 0, 00:06:13.997 "rw_mbytes_per_sec": 0, 00:06:13.997 "r_mbytes_per_sec": 0, 00:06:13.997 "w_mbytes_per_sec": 0 00:06:13.997 
}, 00:06:13.997 "claimed": true, 00:06:13.997 "claim_type": "exclusive_write", 00:06:13.997 "zoned": false, 00:06:13.997 "supported_io_types": { 00:06:13.997 "read": true, 00:06:13.997 "write": true, 00:06:13.997 "unmap": true, 00:06:13.997 "write_zeroes": true, 00:06:13.997 "flush": true, 00:06:13.997 "reset": true, 00:06:13.997 "compare": false, 00:06:13.997 "compare_and_write": false, 00:06:13.997 "abort": true, 00:06:13.997 "nvme_admin": false, 00:06:13.997 "nvme_io": false 00:06:13.997 }, 00:06:13.997 "memory_domains": [ 00:06:13.997 { 00:06:13.997 "dma_device_id": "system", 00:06:13.997 "dma_device_type": 1 00:06:13.997 }, 00:06:13.997 { 00:06:13.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:13.997 "dma_device_type": 2 00:06:13.997 } 00:06:13.997 ], 00:06:13.997 "driver_specific": {} 00:06:13.997 }, 00:06:13.997 { 00:06:13.997 "name": "Passthru0", 00:06:13.997 "aliases": [ 00:06:13.997 "fb858742-98ce-5cc2-9a23-38006d83a61f" 00:06:13.997 ], 00:06:13.997 "product_name": "passthru", 00:06:13.998 "block_size": 512, 00:06:13.998 "num_blocks": 16384, 00:06:13.998 "uuid": "fb858742-98ce-5cc2-9a23-38006d83a61f", 00:06:13.998 "assigned_rate_limits": { 00:06:13.998 "rw_ios_per_sec": 0, 00:06:13.998 "rw_mbytes_per_sec": 0, 00:06:13.998 "r_mbytes_per_sec": 0, 00:06:13.998 "w_mbytes_per_sec": 0 00:06:13.998 }, 00:06:13.998 "claimed": false, 00:06:13.998 "zoned": false, 00:06:13.998 "supported_io_types": { 00:06:13.998 "read": true, 00:06:13.998 "write": true, 00:06:13.998 "unmap": true, 00:06:13.998 "write_zeroes": true, 00:06:13.998 "flush": true, 00:06:13.998 "reset": true, 00:06:13.998 "compare": false, 00:06:13.998 "compare_and_write": false, 00:06:13.998 "abort": true, 00:06:13.998 "nvme_admin": false, 00:06:13.998 "nvme_io": false 00:06:13.998 }, 00:06:13.998 "memory_domains": [ 00:06:13.998 { 00:06:13.998 "dma_device_id": "system", 00:06:13.998 "dma_device_type": 1 00:06:13.998 }, 00:06:13.998 { 00:06:13.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:06:13.998 "dma_device_type": 2 00:06:13.998 } 00:06:13.998 ], 00:06:13.998 "driver_specific": { 00:06:13.998 "passthru": { 00:06:13.998 "name": "Passthru0", 00:06:13.998 "base_bdev_name": "Malloc0" 00:06:13.998 } 00:06:13.998 } 00:06:13.998 } 00:06:13.998 ]' 00:06:13.998 05:01:51 -- rpc/rpc.sh@21 -- # jq length 00:06:14.256 05:01:51 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:14.256 05:01:51 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:14.256 05:01:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:14.256 05:01:51 -- common/autotest_common.sh@10 -- # set +x 00:06:14.256 05:01:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:14.256 05:01:51 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:14.256 05:01:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:14.256 05:01:51 -- common/autotest_common.sh@10 -- # set +x 00:06:14.256 05:01:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:14.256 05:01:51 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:14.256 05:01:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:14.256 05:01:51 -- common/autotest_common.sh@10 -- # set +x 00:06:14.256 05:01:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:14.256 05:01:51 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:14.256 05:01:51 -- rpc/rpc.sh@26 -- # jq length 00:06:14.256 05:01:51 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:14.256 00:06:14.256 real 0m0.230s 00:06:14.256 user 0m0.152s 00:06:14.256 sys 0m0.020s 00:06:14.256 05:01:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:14.256 05:01:51 -- common/autotest_common.sh@10 -- # set +x 00:06:14.256 ************************************ 00:06:14.256 END TEST rpc_integrity 00:06:14.256 ************************************ 00:06:14.256 05:01:51 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:14.256 05:01:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:14.256 05:01:51 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:06:14.256 05:01:51 -- common/autotest_common.sh@10 -- # set +x 00:06:14.256 ************************************ 00:06:14.256 START TEST rpc_plugins 00:06:14.256 ************************************ 00:06:14.256 05:01:51 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:06:14.256 05:01:51 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:14.256 05:01:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:14.256 05:01:51 -- common/autotest_common.sh@10 -- # set +x 00:06:14.256 05:01:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:14.256 05:01:51 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:14.256 05:01:51 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:14.256 05:01:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:14.256 05:01:51 -- common/autotest_common.sh@10 -- # set +x 00:06:14.256 05:01:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:14.256 05:01:51 -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:14.256 { 00:06:14.256 "name": "Malloc1", 00:06:14.256 "aliases": [ 00:06:14.256 "b2c9ea09-848a-413f-bbdb-317547e59d0c" 00:06:14.256 ], 00:06:14.256 "product_name": "Malloc disk", 00:06:14.256 "block_size": 4096, 00:06:14.256 "num_blocks": 256, 00:06:14.256 "uuid": "b2c9ea09-848a-413f-bbdb-317547e59d0c", 00:06:14.256 "assigned_rate_limits": { 00:06:14.256 "rw_ios_per_sec": 0, 00:06:14.256 "rw_mbytes_per_sec": 0, 00:06:14.256 "r_mbytes_per_sec": 0, 00:06:14.256 "w_mbytes_per_sec": 0 00:06:14.256 }, 00:06:14.256 "claimed": false, 00:06:14.256 "zoned": false, 00:06:14.256 "supported_io_types": { 00:06:14.256 "read": true, 00:06:14.256 "write": true, 00:06:14.256 "unmap": true, 00:06:14.256 "write_zeroes": true, 00:06:14.256 "flush": true, 00:06:14.256 "reset": true, 00:06:14.256 "compare": false, 00:06:14.256 "compare_and_write": false, 00:06:14.256 "abort": true, 00:06:14.256 "nvme_admin": false, 00:06:14.256 "nvme_io": false 00:06:14.256 }, 00:06:14.256 "memory_domains": [ 00:06:14.256 { 
00:06:14.256 "dma_device_id": "system", 00:06:14.256 "dma_device_type": 1 00:06:14.256 }, 00:06:14.256 { 00:06:14.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:14.256 "dma_device_type": 2 00:06:14.256 } 00:06:14.256 ], 00:06:14.256 "driver_specific": {} 00:06:14.256 } 00:06:14.256 ]' 00:06:14.256 05:01:51 -- rpc/rpc.sh@32 -- # jq length 00:06:14.256 05:01:51 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:14.256 05:01:51 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:14.256 05:01:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:14.256 05:01:51 -- common/autotest_common.sh@10 -- # set +x 00:06:14.256 05:01:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:14.256 05:01:51 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:14.256 05:01:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:14.256 05:01:51 -- common/autotest_common.sh@10 -- # set +x 00:06:14.256 05:01:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:14.256 05:01:51 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:14.256 05:01:51 -- rpc/rpc.sh@36 -- # jq length 00:06:14.515 05:01:51 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:14.515 00:06:14.515 real 0m0.118s 00:06:14.515 user 0m0.072s 00:06:14.515 sys 0m0.014s 00:06:14.515 05:01:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:14.515 05:01:51 -- common/autotest_common.sh@10 -- # set +x 00:06:14.515 ************************************ 00:06:14.515 END TEST rpc_plugins 00:06:14.515 ************************************ 00:06:14.515 05:01:51 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:14.515 05:01:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:14.515 05:01:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.515 05:01:51 -- common/autotest_common.sh@10 -- # set +x 00:06:14.515 ************************************ 00:06:14.515 START TEST rpc_trace_cmd_test 00:06:14.515 ************************************ 00:06:14.515 
05:01:51 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:06:14.515 05:01:51 -- rpc/rpc.sh@40 -- # local info 00:06:14.515 05:01:51 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:14.515 05:01:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:14.515 05:01:51 -- common/autotest_common.sh@10 -- # set +x 00:06:14.515 05:01:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:14.515 05:01:51 -- rpc/rpc.sh@42 -- # info='{ 00:06:14.515 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1761789", 00:06:14.515 "tpoint_group_mask": "0x8", 00:06:14.515 "iscsi_conn": { 00:06:14.515 "mask": "0x2", 00:06:14.515 "tpoint_mask": "0x0" 00:06:14.515 }, 00:06:14.515 "scsi": { 00:06:14.515 "mask": "0x4", 00:06:14.515 "tpoint_mask": "0x0" 00:06:14.515 }, 00:06:14.515 "bdev": { 00:06:14.515 "mask": "0x8", 00:06:14.515 "tpoint_mask": "0xffffffffffffffff" 00:06:14.515 }, 00:06:14.515 "nvmf_rdma": { 00:06:14.515 "mask": "0x10", 00:06:14.515 "tpoint_mask": "0x0" 00:06:14.515 }, 00:06:14.515 "nvmf_tcp": { 00:06:14.515 "mask": "0x20", 00:06:14.515 "tpoint_mask": "0x0" 00:06:14.515 }, 00:06:14.515 "ftl": { 00:06:14.515 "mask": "0x40", 00:06:14.515 "tpoint_mask": "0x0" 00:06:14.515 }, 00:06:14.515 "blobfs": { 00:06:14.515 "mask": "0x80", 00:06:14.515 "tpoint_mask": "0x0" 00:06:14.515 }, 00:06:14.515 "dsa": { 00:06:14.515 "mask": "0x200", 00:06:14.515 "tpoint_mask": "0x0" 00:06:14.515 }, 00:06:14.515 "thread": { 00:06:14.515 "mask": "0x400", 00:06:14.515 "tpoint_mask": "0x0" 00:06:14.515 }, 00:06:14.515 "nvme_pcie": { 00:06:14.515 "mask": "0x800", 00:06:14.515 "tpoint_mask": "0x0" 00:06:14.515 }, 00:06:14.515 "iaa": { 00:06:14.515 "mask": "0x1000", 00:06:14.515 "tpoint_mask": "0x0" 00:06:14.515 }, 00:06:14.515 "nvme_tcp": { 00:06:14.515 "mask": "0x2000", 00:06:14.515 "tpoint_mask": "0x0" 00:06:14.515 }, 00:06:14.515 "bdev_nvme": { 00:06:14.515 "mask": "0x4000", 00:06:14.515 "tpoint_mask": "0x0" 00:06:14.515 }, 00:06:14.515 "sock": { 00:06:14.515 "mask": "0x8000", 
00:06:14.515 "tpoint_mask": "0x0" 00:06:14.515 } 00:06:14.515 }' 00:06:14.515 05:01:51 -- rpc/rpc.sh@43 -- # jq length 00:06:14.515 05:01:51 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:14.515 05:01:51 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:14.515 05:01:51 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:14.515 05:01:51 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:14.772 05:01:51 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:14.772 05:01:51 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:14.772 05:01:51 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:14.772 05:01:51 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:14.772 05:01:51 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:14.772 00:06:14.772 real 0m0.194s 00:06:14.772 user 0m0.172s 00:06:14.772 sys 0m0.016s 00:06:14.772 05:01:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:14.772 05:01:51 -- common/autotest_common.sh@10 -- # set +x 00:06:14.772 ************************************ 00:06:14.772 END TEST rpc_trace_cmd_test 00:06:14.772 ************************************ 00:06:14.772 05:01:51 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:14.772 05:01:51 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:14.772 05:01:51 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:14.772 05:01:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:14.772 05:01:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.772 05:01:51 -- common/autotest_common.sh@10 -- # set +x 00:06:14.772 ************************************ 00:06:14.772 START TEST rpc_daemon_integrity 00:06:14.772 ************************************ 00:06:14.772 05:01:51 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:06:14.772 05:01:51 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:14.772 05:01:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:14.772 05:01:51 -- common/autotest_common.sh@10 -- # set +x 00:06:14.772 05:01:51 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:14.772 05:01:51 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:14.772 05:01:51 -- rpc/rpc.sh@13 -- # jq length 00:06:14.772 05:01:52 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:14.772 05:01:52 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:14.772 05:01:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:14.772 05:01:52 -- common/autotest_common.sh@10 -- # set +x 00:06:15.029 05:01:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:15.029 05:01:52 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:15.029 05:01:52 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:15.029 05:01:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:15.029 05:01:52 -- common/autotest_common.sh@10 -- # set +x 00:06:15.029 05:01:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:15.029 05:01:52 -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:15.029 { 00:06:15.029 "name": "Malloc2", 00:06:15.029 "aliases": [ 00:06:15.029 "c241f3ea-598f-46ed-9a8c-4b42a374f8e5" 00:06:15.029 ], 00:06:15.029 "product_name": "Malloc disk", 00:06:15.029 "block_size": 512, 00:06:15.029 "num_blocks": 16384, 00:06:15.029 "uuid": "c241f3ea-598f-46ed-9a8c-4b42a374f8e5", 00:06:15.029 "assigned_rate_limits": { 00:06:15.029 "rw_ios_per_sec": 0, 00:06:15.029 "rw_mbytes_per_sec": 0, 00:06:15.029 "r_mbytes_per_sec": 0, 00:06:15.029 "w_mbytes_per_sec": 0 00:06:15.029 }, 00:06:15.029 "claimed": false, 00:06:15.029 "zoned": false, 00:06:15.029 "supported_io_types": { 00:06:15.029 "read": true, 00:06:15.029 "write": true, 00:06:15.029 "unmap": true, 00:06:15.029 "write_zeroes": true, 00:06:15.029 "flush": true, 00:06:15.029 "reset": true, 00:06:15.029 "compare": false, 00:06:15.029 "compare_and_write": false, 00:06:15.029 "abort": true, 00:06:15.029 "nvme_admin": false, 00:06:15.029 "nvme_io": false 00:06:15.029 }, 00:06:15.029 "memory_domains": [ 00:06:15.029 { 00:06:15.029 "dma_device_id": "system", 00:06:15.029 "dma_device_type": 1 00:06:15.029 }, 
00:06:15.029 { 00:06:15.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:15.029 "dma_device_type": 2 00:06:15.029 } 00:06:15.029 ], 00:06:15.029 "driver_specific": {} 00:06:15.030 } 00:06:15.030 ]' 00:06:15.030 05:01:52 -- rpc/rpc.sh@17 -- # jq length 00:06:15.030 05:01:52 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:15.030 05:01:52 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:15.030 05:01:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:15.030 05:01:52 -- common/autotest_common.sh@10 -- # set +x 00:06:15.030 [2024-04-24 05:01:52.095709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:15.030 [2024-04-24 05:01:52.095757] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:15.030 [2024-04-24 05:01:52.095780] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21ac770 00:06:15.030 [2024-04-24 05:01:52.095794] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:15.030 [2024-04-24 05:01:52.097155] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:15.030 [2024-04-24 05:01:52.097190] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:15.030 Passthru0 00:06:15.030 05:01:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:15.030 05:01:52 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:15.030 05:01:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:15.030 05:01:52 -- common/autotest_common.sh@10 -- # set +x 00:06:15.030 05:01:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:15.030 05:01:52 -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:15.030 { 00:06:15.030 "name": "Malloc2", 00:06:15.030 "aliases": [ 00:06:15.030 "c241f3ea-598f-46ed-9a8c-4b42a374f8e5" 00:06:15.030 ], 00:06:15.030 "product_name": "Malloc disk", 00:06:15.030 "block_size": 512, 00:06:15.030 "num_blocks": 16384, 00:06:15.030 "uuid": 
"c241f3ea-598f-46ed-9a8c-4b42a374f8e5", 00:06:15.030 "assigned_rate_limits": { 00:06:15.030 "rw_ios_per_sec": 0, 00:06:15.030 "rw_mbytes_per_sec": 0, 00:06:15.030 "r_mbytes_per_sec": 0, 00:06:15.030 "w_mbytes_per_sec": 0 00:06:15.030 }, 00:06:15.030 "claimed": true, 00:06:15.030 "claim_type": "exclusive_write", 00:06:15.030 "zoned": false, 00:06:15.030 "supported_io_types": { 00:06:15.030 "read": true, 00:06:15.030 "write": true, 00:06:15.030 "unmap": true, 00:06:15.030 "write_zeroes": true, 00:06:15.030 "flush": true, 00:06:15.030 "reset": true, 00:06:15.030 "compare": false, 00:06:15.030 "compare_and_write": false, 00:06:15.030 "abort": true, 00:06:15.030 "nvme_admin": false, 00:06:15.030 "nvme_io": false 00:06:15.030 }, 00:06:15.030 "memory_domains": [ 00:06:15.030 { 00:06:15.030 "dma_device_id": "system", 00:06:15.030 "dma_device_type": 1 00:06:15.030 }, 00:06:15.030 { 00:06:15.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:15.030 "dma_device_type": 2 00:06:15.030 } 00:06:15.030 ], 00:06:15.030 "driver_specific": {} 00:06:15.030 }, 00:06:15.030 { 00:06:15.030 "name": "Passthru0", 00:06:15.030 "aliases": [ 00:06:15.030 "8c5cb00e-41bc-572f-8c9a-f40b92d5c276" 00:06:15.030 ], 00:06:15.030 "product_name": "passthru", 00:06:15.030 "block_size": 512, 00:06:15.030 "num_blocks": 16384, 00:06:15.030 "uuid": "8c5cb00e-41bc-572f-8c9a-f40b92d5c276", 00:06:15.030 "assigned_rate_limits": { 00:06:15.030 "rw_ios_per_sec": 0, 00:06:15.030 "rw_mbytes_per_sec": 0, 00:06:15.030 "r_mbytes_per_sec": 0, 00:06:15.030 "w_mbytes_per_sec": 0 00:06:15.030 }, 00:06:15.030 "claimed": false, 00:06:15.030 "zoned": false, 00:06:15.030 "supported_io_types": { 00:06:15.030 "read": true, 00:06:15.030 "write": true, 00:06:15.030 "unmap": true, 00:06:15.030 "write_zeroes": true, 00:06:15.030 "flush": true, 00:06:15.030 "reset": true, 00:06:15.030 "compare": false, 00:06:15.030 "compare_and_write": false, 00:06:15.030 "abort": true, 00:06:15.030 "nvme_admin": false, 00:06:15.030 "nvme_io": false 
00:06:15.030 }, 00:06:15.030 "memory_domains": [ 00:06:15.030 { 00:06:15.030 "dma_device_id": "system", 00:06:15.030 "dma_device_type": 1 00:06:15.030 }, 00:06:15.030 { 00:06:15.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:15.030 "dma_device_type": 2 00:06:15.030 } 00:06:15.030 ], 00:06:15.030 "driver_specific": { 00:06:15.030 "passthru": { 00:06:15.030 "name": "Passthru0", 00:06:15.030 "base_bdev_name": "Malloc2" 00:06:15.030 } 00:06:15.030 } 00:06:15.030 } 00:06:15.030 ]' 00:06:15.030 05:01:52 -- rpc/rpc.sh@21 -- # jq length 00:06:15.030 05:01:52 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:15.030 05:01:52 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:15.030 05:01:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:15.030 05:01:52 -- common/autotest_common.sh@10 -- # set +x 00:06:15.030 05:01:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:15.030 05:01:52 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:15.030 05:01:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:15.030 05:01:52 -- common/autotest_common.sh@10 -- # set +x 00:06:15.030 05:01:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:15.030 05:01:52 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:15.030 05:01:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:15.030 05:01:52 -- common/autotest_common.sh@10 -- # set +x 00:06:15.030 05:01:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:15.030 05:01:52 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:15.030 05:01:52 -- rpc/rpc.sh@26 -- # jq length 00:06:15.030 05:01:52 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:15.030 00:06:15.030 real 0m0.229s 00:06:15.030 user 0m0.148s 00:06:15.030 sys 0m0.020s 00:06:15.030 05:01:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:15.030 05:01:52 -- common/autotest_common.sh@10 -- # set +x 00:06:15.030 ************************************ 00:06:15.030 END TEST rpc_daemon_integrity 00:06:15.030 
************************************ 00:06:15.030 05:01:52 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:15.030 05:01:52 -- rpc/rpc.sh@84 -- # killprocess 1761789 00:06:15.030 05:01:52 -- common/autotest_common.sh@936 -- # '[' -z 1761789 ']' 00:06:15.030 05:01:52 -- common/autotest_common.sh@940 -- # kill -0 1761789 00:06:15.030 05:01:52 -- common/autotest_common.sh@941 -- # uname 00:06:15.030 05:01:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:15.030 05:01:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1761789 00:06:15.030 05:01:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:15.030 05:01:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:15.030 05:01:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1761789' 00:06:15.030 killing process with pid 1761789 00:06:15.030 05:01:52 -- common/autotest_common.sh@955 -- # kill 1761789 00:06:15.030 05:01:52 -- common/autotest_common.sh@960 -- # wait 1761789 00:06:15.596 00:06:15.596 real 0m2.150s 00:06:15.596 user 0m2.725s 00:06:15.596 sys 0m0.702s 00:06:15.596 05:01:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:15.596 05:01:52 -- common/autotest_common.sh@10 -- # set +x 00:06:15.596 ************************************ 00:06:15.596 END TEST rpc 00:06:15.596 ************************************ 00:06:15.596 05:01:52 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:15.596 05:01:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:15.596 05:01:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.596 05:01:52 -- common/autotest_common.sh@10 -- # set +x 00:06:15.596 ************************************ 00:06:15.596 START TEST skip_rpc 00:06:15.596 ************************************ 00:06:15.596 05:01:52 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:15.596 * Looking for test storage... 00:06:15.596 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:15.596 05:01:52 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:15.596 05:01:52 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:15.596 05:01:52 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:15.596 05:01:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:15.596 05:01:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.596 05:01:52 -- common/autotest_common.sh@10 -- # set +x 00:06:15.854 ************************************ 00:06:15.854 START TEST skip_rpc 00:06:15.854 ************************************ 00:06:15.854 05:01:52 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:06:15.854 05:01:52 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1762274 00:06:15.854 05:01:52 -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:15.854 05:01:52 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:15.854 05:01:52 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:15.854 [2024-04-24 05:01:52.973429] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:06:15.854 [2024-04-24 05:01:52.973492] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1762274 ] 00:06:15.854 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.854 [2024-04-24 05:01:53.004131] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:06:15.854 [2024-04-24 05:01:53.034138] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.854 [2024-04-24 05:01:53.124109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.125 05:01:57 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:21.125 05:01:57 -- common/autotest_common.sh@638 -- # local es=0 00:06:21.125 05:01:57 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:21.125 05:01:57 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:06:21.125 05:01:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:21.125 05:01:57 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:06:21.125 05:01:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:21.125 05:01:57 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:06:21.125 05:01:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:21.125 05:01:57 -- common/autotest_common.sh@10 -- # set +x 00:06:21.125 05:01:57 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:06:21.125 05:01:57 -- common/autotest_common.sh@641 -- # es=1 00:06:21.125 05:01:57 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:21.125 05:01:57 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:21.125 05:01:57 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:21.125 05:01:57 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:21.125 05:01:57 -- rpc/skip_rpc.sh@23 -- # killprocess 1762274 00:06:21.125 05:01:57 -- common/autotest_common.sh@936 -- # '[' -z 1762274 ']' 00:06:21.125 05:01:57 -- common/autotest_common.sh@940 -- # kill -0 1762274 00:06:21.125 05:01:57 -- common/autotest_common.sh@941 -- # uname 00:06:21.125 05:01:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:21.125 05:01:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1762274 00:06:21.125 05:01:57 -- common/autotest_common.sh@942 
-- # process_name=reactor_0 00:06:21.125 05:01:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:21.125 05:01:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1762274' 00:06:21.125 killing process with pid 1762274 00:06:21.125 05:01:57 -- common/autotest_common.sh@955 -- # kill 1762274 00:06:21.125 05:01:57 -- common/autotest_common.sh@960 -- # wait 1762274 00:06:21.125 00:06:21.125 real 0m5.409s 00:06:21.125 user 0m5.087s 00:06:21.125 sys 0m0.327s 00:06:21.125 05:01:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:21.125 05:01:58 -- common/autotest_common.sh@10 -- # set +x 00:06:21.125 ************************************ 00:06:21.125 END TEST skip_rpc 00:06:21.125 ************************************ 00:06:21.125 05:01:58 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:21.125 05:01:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:21.125 05:01:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:21.125 05:01:58 -- common/autotest_common.sh@10 -- # set +x 00:06:21.383 ************************************ 00:06:21.383 START TEST skip_rpc_with_json 00:06:21.383 ************************************ 00:06:21.383 05:01:58 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:06:21.383 05:01:58 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:21.383 05:01:58 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1762966 00:06:21.383 05:01:58 -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:21.383 05:01:58 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:21.383 05:01:58 -- rpc/skip_rpc.sh@31 -- # waitforlisten 1762966 00:06:21.383 05:01:58 -- common/autotest_common.sh@817 -- # '[' -z 1762966 ']' 00:06:21.383 05:01:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.383 05:01:58 -- common/autotest_common.sh@822 -- # local 
max_retries=100 00:06:21.383 05:01:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.383 05:01:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:21.383 05:01:58 -- common/autotest_common.sh@10 -- # set +x 00:06:21.383 [2024-04-24 05:01:58.509296] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:06:21.383 [2024-04-24 05:01:58.509395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1762966 ] 00:06:21.383 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.383 [2024-04-24 05:01:58.541641] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:21.383 [2024-04-24 05:01:58.573901] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.642 [2024-04-24 05:01:58.660794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.899 05:01:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:21.899 05:01:58 -- common/autotest_common.sh@850 -- # return 0 00:06:21.899 05:01:58 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:21.899 05:01:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:21.899 05:01:58 -- common/autotest_common.sh@10 -- # set +x 00:06:21.899 [2024-04-24 05:01:58.920588] nvmf_rpc.c:2513:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:21.899 request: 00:06:21.899 { 00:06:21.899 "trtype": "tcp", 00:06:21.899 "method": "nvmf_get_transports", 00:06:21.899 "req_id": 1 00:06:21.899 } 00:06:21.899 Got JSON-RPC error response 00:06:21.899 response: 00:06:21.899 { 00:06:21.899 "code": -19, 00:06:21.899 "message": "No such device" 00:06:21.899 } 00:06:21.899 05:01:58 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:06:21.899 05:01:58 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:21.899 05:01:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:21.899 05:01:58 -- common/autotest_common.sh@10 -- # set +x 00:06:21.899 [2024-04-24 05:01:58.928717] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:21.899 05:01:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:21.899 05:01:58 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:21.899 05:01:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:21.899 05:01:58 -- common/autotest_common.sh@10 -- # set +x 00:06:21.899 05:01:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:21.899 05:01:59 -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:21.899 { 00:06:21.899 "subsystems": [ 00:06:21.899 { 00:06:21.899 "subsystem": 
"vfio_user_target", 00:06:21.899 "config": null 00:06:21.899 }, 00:06:21.899 { 00:06:21.899 "subsystem": "keyring", 00:06:21.899 "config": [] 00:06:21.899 }, 00:06:21.899 { 00:06:21.899 "subsystem": "iobuf", 00:06:21.899 "config": [ 00:06:21.899 { 00:06:21.899 "method": "iobuf_set_options", 00:06:21.899 "params": { 00:06:21.899 "small_pool_count": 8192, 00:06:21.899 "large_pool_count": 1024, 00:06:21.899 "small_bufsize": 8192, 00:06:21.899 "large_bufsize": 135168 00:06:21.899 } 00:06:21.899 } 00:06:21.899 ] 00:06:21.899 }, 00:06:21.899 { 00:06:21.899 "subsystem": "sock", 00:06:21.899 "config": [ 00:06:21.899 { 00:06:21.899 "method": "sock_impl_set_options", 00:06:21.899 "params": { 00:06:21.899 "impl_name": "posix", 00:06:21.899 "recv_buf_size": 2097152, 00:06:21.899 "send_buf_size": 2097152, 00:06:21.899 "enable_recv_pipe": true, 00:06:21.899 "enable_quickack": false, 00:06:21.899 "enable_placement_id": 0, 00:06:21.899 "enable_zerocopy_send_server": true, 00:06:21.899 "enable_zerocopy_send_client": false, 00:06:21.899 "zerocopy_threshold": 0, 00:06:21.899 "tls_version": 0, 00:06:21.899 "enable_ktls": false 00:06:21.899 } 00:06:21.899 }, 00:06:21.899 { 00:06:21.899 "method": "sock_impl_set_options", 00:06:21.899 "params": { 00:06:21.899 "impl_name": "ssl", 00:06:21.899 "recv_buf_size": 4096, 00:06:21.899 "send_buf_size": 4096, 00:06:21.899 "enable_recv_pipe": true, 00:06:21.899 "enable_quickack": false, 00:06:21.899 "enable_placement_id": 0, 00:06:21.899 "enable_zerocopy_send_server": true, 00:06:21.899 "enable_zerocopy_send_client": false, 00:06:21.899 "zerocopy_threshold": 0, 00:06:21.899 "tls_version": 0, 00:06:21.899 "enable_ktls": false 00:06:21.899 } 00:06:21.899 } 00:06:21.899 ] 00:06:21.899 }, 00:06:21.899 { 00:06:21.899 "subsystem": "vmd", 00:06:21.899 "config": [] 00:06:21.899 }, 00:06:21.899 { 00:06:21.899 "subsystem": "accel", 00:06:21.899 "config": [ 00:06:21.899 { 00:06:21.899 "method": "accel_set_options", 00:06:21.899 "params": { 00:06:21.900 
"small_cache_size": 128, 00:06:21.900 "large_cache_size": 16, 00:06:21.900 "task_count": 2048, 00:06:21.900 "sequence_count": 2048, 00:06:21.900 "buf_count": 2048 00:06:21.900 } 00:06:21.900 } 00:06:21.900 ] 00:06:21.900 }, 00:06:21.900 { 00:06:21.900 "subsystem": "bdev", 00:06:21.900 "config": [ 00:06:21.900 { 00:06:21.900 "method": "bdev_set_options", 00:06:21.900 "params": { 00:06:21.900 "bdev_io_pool_size": 65535, 00:06:21.900 "bdev_io_cache_size": 256, 00:06:21.900 "bdev_auto_examine": true, 00:06:21.900 "iobuf_small_cache_size": 128, 00:06:21.900 "iobuf_large_cache_size": 16 00:06:21.900 } 00:06:21.900 }, 00:06:21.900 { 00:06:21.900 "method": "bdev_raid_set_options", 00:06:21.900 "params": { 00:06:21.900 "process_window_size_kb": 1024 00:06:21.900 } 00:06:21.900 }, 00:06:21.900 { 00:06:21.900 "method": "bdev_iscsi_set_options", 00:06:21.900 "params": { 00:06:21.900 "timeout_sec": 30 00:06:21.900 } 00:06:21.900 }, 00:06:21.900 { 00:06:21.900 "method": "bdev_nvme_set_options", 00:06:21.900 "params": { 00:06:21.900 "action_on_timeout": "none", 00:06:21.900 "timeout_us": 0, 00:06:21.900 "timeout_admin_us": 0, 00:06:21.900 "keep_alive_timeout_ms": 10000, 00:06:21.900 "arbitration_burst": 0, 00:06:21.900 "low_priority_weight": 0, 00:06:21.900 "medium_priority_weight": 0, 00:06:21.900 "high_priority_weight": 0, 00:06:21.900 "nvme_adminq_poll_period_us": 10000, 00:06:21.900 "nvme_ioq_poll_period_us": 0, 00:06:21.900 "io_queue_requests": 0, 00:06:21.900 "delay_cmd_submit": true, 00:06:21.900 "transport_retry_count": 4, 00:06:21.900 "bdev_retry_count": 3, 00:06:21.900 "transport_ack_timeout": 0, 00:06:21.900 "ctrlr_loss_timeout_sec": 0, 00:06:21.900 "reconnect_delay_sec": 0, 00:06:21.900 "fast_io_fail_timeout_sec": 0, 00:06:21.900 "disable_auto_failback": false, 00:06:21.900 "generate_uuids": false, 00:06:21.900 "transport_tos": 0, 00:06:21.900 "nvme_error_stat": false, 00:06:21.900 "rdma_srq_size": 0, 00:06:21.900 "io_path_stat": false, 00:06:21.900 
"allow_accel_sequence": false, 00:06:21.900 "rdma_max_cq_size": 0, 00:06:21.900 "rdma_cm_event_timeout_ms": 0, 00:06:21.900 "dhchap_digests": [ 00:06:21.900 "sha256", 00:06:21.900 "sha384", 00:06:21.900 "sha512" 00:06:21.900 ], 00:06:21.900 "dhchap_dhgroups": [ 00:06:21.900 "null", 00:06:21.900 "ffdhe2048", 00:06:21.900 "ffdhe3072", 00:06:21.900 "ffdhe4096", 00:06:21.900 "ffdhe6144", 00:06:21.900 "ffdhe8192" 00:06:21.900 ] 00:06:21.900 } 00:06:21.900 }, 00:06:21.900 { 00:06:21.900 "method": "bdev_nvme_set_hotplug", 00:06:21.900 "params": { 00:06:21.900 "period_us": 100000, 00:06:21.900 "enable": false 00:06:21.900 } 00:06:21.900 }, 00:06:21.900 { 00:06:21.900 "method": "bdev_wait_for_examine" 00:06:21.900 } 00:06:21.900 ] 00:06:21.900 }, 00:06:21.900 { 00:06:21.900 "subsystem": "scsi", 00:06:21.900 "config": null 00:06:21.900 }, 00:06:21.900 { 00:06:21.900 "subsystem": "scheduler", 00:06:21.900 "config": [ 00:06:21.900 { 00:06:21.900 "method": "framework_set_scheduler", 00:06:21.900 "params": { 00:06:21.900 "name": "static" 00:06:21.900 } 00:06:21.900 } 00:06:21.900 ] 00:06:21.900 }, 00:06:21.900 { 00:06:21.900 "subsystem": "vhost_scsi", 00:06:21.900 "config": [] 00:06:21.900 }, 00:06:21.900 { 00:06:21.900 "subsystem": "vhost_blk", 00:06:21.900 "config": [] 00:06:21.900 }, 00:06:21.900 { 00:06:21.900 "subsystem": "ublk", 00:06:21.900 "config": [] 00:06:21.900 }, 00:06:21.900 { 00:06:21.900 "subsystem": "nbd", 00:06:21.900 "config": [] 00:06:21.900 }, 00:06:21.900 { 00:06:21.900 "subsystem": "nvmf", 00:06:21.900 "config": [ 00:06:21.900 { 00:06:21.900 "method": "nvmf_set_config", 00:06:21.900 "params": { 00:06:21.900 "discovery_filter": "match_any", 00:06:21.900 "admin_cmd_passthru": { 00:06:21.900 "identify_ctrlr": false 00:06:21.900 } 00:06:21.900 } 00:06:21.900 }, 00:06:21.900 { 00:06:21.900 "method": "nvmf_set_max_subsystems", 00:06:21.900 "params": { 00:06:21.900 "max_subsystems": 1024 00:06:21.900 } 00:06:21.900 }, 00:06:21.900 { 00:06:21.900 "method": 
"nvmf_set_crdt", 00:06:21.900 "params": { 00:06:21.900 "crdt1": 0, 00:06:21.900 "crdt2": 0, 00:06:21.900 "crdt3": 0 00:06:21.900 } 00:06:21.900 }, 00:06:21.900 { 00:06:21.900 "method": "nvmf_create_transport", 00:06:21.900 "params": { 00:06:21.900 "trtype": "TCP", 00:06:21.900 "max_queue_depth": 128, 00:06:21.900 "max_io_qpairs_per_ctrlr": 127, 00:06:21.900 "in_capsule_data_size": 4096, 00:06:21.900 "max_io_size": 131072, 00:06:21.900 "io_unit_size": 131072, 00:06:21.900 "max_aq_depth": 128, 00:06:21.900 "num_shared_buffers": 511, 00:06:21.900 "buf_cache_size": 4294967295, 00:06:21.900 "dif_insert_or_strip": false, 00:06:21.900 "zcopy": false, 00:06:21.900 "c2h_success": true, 00:06:21.900 "sock_priority": 0, 00:06:21.900 "abort_timeout_sec": 1, 00:06:21.900 "ack_timeout": 0, 00:06:21.900 "data_wr_pool_size": 0 00:06:21.900 } 00:06:21.900 } 00:06:21.900 ] 00:06:21.900 }, 00:06:21.900 { 00:06:21.900 "subsystem": "iscsi", 00:06:21.900 "config": [ 00:06:21.900 { 00:06:21.900 "method": "iscsi_set_options", 00:06:21.900 "params": { 00:06:21.900 "node_base": "iqn.2016-06.io.spdk", 00:06:21.900 "max_sessions": 128, 00:06:21.900 "max_connections_per_session": 2, 00:06:21.900 "max_queue_depth": 64, 00:06:21.900 "default_time2wait": 2, 00:06:21.900 "default_time2retain": 20, 00:06:21.900 "first_burst_length": 8192, 00:06:21.900 "immediate_data": true, 00:06:21.900 "allow_duplicated_isid": false, 00:06:21.900 "error_recovery_level": 0, 00:06:21.900 "nop_timeout": 60, 00:06:21.900 "nop_in_interval": 30, 00:06:21.900 "disable_chap": false, 00:06:21.900 "require_chap": false, 00:06:21.900 "mutual_chap": false, 00:06:21.900 "chap_group": 0, 00:06:21.900 "max_large_datain_per_connection": 64, 00:06:21.900 "max_r2t_per_connection": 4, 00:06:21.900 "pdu_pool_size": 36864, 00:06:21.900 "immediate_data_pool_size": 16384, 00:06:21.900 "data_out_pool_size": 2048 00:06:21.900 } 00:06:21.900 } 00:06:21.900 ] 00:06:21.900 } 00:06:21.900 ] 00:06:21.900 } 00:06:21.900 05:01:59 -- 
rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:21.900 05:01:59 -- rpc/skip_rpc.sh@40 -- # killprocess 1762966 00:06:21.900 05:01:59 -- common/autotest_common.sh@936 -- # '[' -z 1762966 ']' 00:06:21.900 05:01:59 -- common/autotest_common.sh@940 -- # kill -0 1762966 00:06:21.900 05:01:59 -- common/autotest_common.sh@941 -- # uname 00:06:21.900 05:01:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:21.900 05:01:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1762966 00:06:21.900 05:01:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:21.900 05:01:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:21.900 05:01:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1762966' 00:06:21.900 killing process with pid 1762966 00:06:21.900 05:01:59 -- common/autotest_common.sh@955 -- # kill 1762966 00:06:21.900 05:01:59 -- common/autotest_common.sh@960 -- # wait 1762966 00:06:22.465 05:01:59 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1763108 00:06:22.465 05:01:59 -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:22.465 05:01:59 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:27.725 05:02:04 -- rpc/skip_rpc.sh@50 -- # killprocess 1763108 00:06:27.725 05:02:04 -- common/autotest_common.sh@936 -- # '[' -z 1763108 ']' 00:06:27.725 05:02:04 -- common/autotest_common.sh@940 -- # kill -0 1763108 00:06:27.725 05:02:04 -- common/autotest_common.sh@941 -- # uname 00:06:27.725 05:02:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:27.725 05:02:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1763108 00:06:27.725 05:02:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:27.725 05:02:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:27.725 05:02:04 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 1763108' 00:06:27.725 killing process with pid 1763108 00:06:27.725 05:02:04 -- common/autotest_common.sh@955 -- # kill 1763108 00:06:27.725 05:02:04 -- common/autotest_common.sh@960 -- # wait 1763108 00:06:27.725 05:02:04 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:27.725 05:02:04 -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:27.725 00:06:27.726 real 0m6.460s 00:06:27.726 user 0m6.029s 00:06:27.726 sys 0m0.693s 00:06:27.726 05:02:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:27.726 05:02:04 -- common/autotest_common.sh@10 -- # set +x 00:06:27.726 ************************************ 00:06:27.726 END TEST skip_rpc_with_json 00:06:27.726 ************************************ 00:06:27.726 05:02:04 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:27.726 05:02:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:27.726 05:02:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.726 05:02:04 -- common/autotest_common.sh@10 -- # set +x 00:06:27.984 ************************************ 00:06:27.984 START TEST skip_rpc_with_delay 00:06:27.984 ************************************ 00:06:27.985 05:02:05 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:06:27.985 05:02:05 -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:27.985 05:02:05 -- common/autotest_common.sh@638 -- # local es=0 00:06:27.985 05:02:05 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:27.985 05:02:05 -- common/autotest_common.sh@626 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:27.985 05:02:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:27.985 05:02:05 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:27.985 05:02:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:27.985 05:02:05 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:27.985 05:02:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:27.985 05:02:05 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:27.985 05:02:05 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:27.985 05:02:05 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:27.985 [2024-04-24 05:02:05.089213] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
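The trace above shows the test harness running spdk_tgt through a negation wrapper and mapping any non-zero exit status back to `es=1`: the test passes only because the invalid flag combination (`--no-rpc-server` with `--wait-for-rpc`) makes the target fail. A minimal sketch of that exit-status inversion pattern, assuming a hypothetical helper name (`expect_failure`, not the harness's actual `NOT` function):

```shell
# Sketch (not SPDK's helper itself) of the exit-status inversion traced
# above: run a command that is EXPECTED to fail, and succeed only if it
# really did fail.
expect_failure() {
    local es=0
    "$@" || es=$?        # capture the command's exit status without aborting
    (( es != 0 ))        # success only when the command failed
}

expect_failure false && echo "failure detected as expected"
```

The `|| es=$?` capture is what keeps the wrapper usable under `set -e`-style scripts: the failing command never propagates its status directly.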
00:06:27.985 [2024-04-24 05:02:05.089306] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:27.985 05:02:05 -- common/autotest_common.sh@641 -- # es=1 00:06:27.985 05:02:05 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:27.985 05:02:05 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:27.985 05:02:05 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:27.985 00:06:27.985 real 0m0.065s 00:06:27.985 user 0m0.046s 00:06:27.985 sys 0m0.019s 00:06:27.985 05:02:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:27.985 05:02:05 -- common/autotest_common.sh@10 -- # set +x 00:06:27.985 ************************************ 00:06:27.985 END TEST skip_rpc_with_delay 00:06:27.985 ************************************ 00:06:27.985 05:02:05 -- rpc/skip_rpc.sh@77 -- # uname 00:06:27.985 05:02:05 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:27.985 05:02:05 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:27.985 05:02:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:27.985 05:02:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.985 05:02:05 -- common/autotest_common.sh@10 -- # set +x 00:06:27.985 ************************************ 00:06:27.985 START TEST exit_on_failed_rpc_init 00:06:27.985 ************************************ 00:06:27.985 05:02:05 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:06:27.985 05:02:05 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1763840 00:06:27.985 05:02:05 -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:27.985 05:02:05 -- rpc/skip_rpc.sh@63 -- # waitforlisten 1763840 00:06:27.985 05:02:05 -- common/autotest_common.sh@817 -- # '[' -z 1763840 ']' 00:06:27.985 05:02:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.985 05:02:05 -- common/autotest_common.sh@822 -- # local 
max_retries=100 00:06:27.985 05:02:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.985 05:02:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:27.985 05:02:05 -- common/autotest_common.sh@10 -- # set +x 00:06:28.243 [2024-04-24 05:02:05.279298] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:06:28.243 [2024-04-24 05:02:05.279386] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1763840 ] 00:06:28.243 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.243 [2024-04-24 05:02:05.311281] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
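The `waitforlisten` call traced here polls, with a bounded retry count, until the target process is up and its UNIX domain socket path (`/var/tmp/spdk.sock`) accepts connections. A hedged sketch of that polling shape, using a plain file as a stand-in for the socket and illustrative retry/delay values (not SPDK's actual ones):

```shell
# Poll with a retry cap until a path appears; a simplified stand-in for
# the waitforlisten helper above (which additionally probes the RPC socket).
wait_for_path() {
    local path=$1 max_retries=${2:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        [ -e "$path" ] && return 0
        sleep 0.01
    done
    return 1                      # gave up: the "server" never came up
}

sock=$(mktemp -u)                 # stand-in for /var/tmp/spdk.sock
(sleep 0.05; touch "$sock") &     # background "server" creates it shortly
wait_for_path "$sock" && echo "listening"
wait
```

The retry cap is what lets the test fail fast instead of hanging when the target dies during startup, which is exactly the condition the exit_on_failed_rpc_init test provokes below.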
00:06:28.243 [2024-04-24 05:02:05.337147] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.243 [2024-04-24 05:02:05.422921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.502 05:02:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:28.502 05:02:05 -- common/autotest_common.sh@850 -- # return 0 00:06:28.502 05:02:05 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:28.502 05:02:05 -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:28.502 05:02:05 -- common/autotest_common.sh@638 -- # local es=0 00:06:28.502 05:02:05 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:28.502 05:02:05 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:28.502 05:02:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:28.502 05:02:05 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:28.502 05:02:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:28.502 05:02:05 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:28.502 05:02:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:28.502 05:02:05 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:28.502 05:02:05 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:28.502 05:02:05 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:28.502 [2024-04-24 05:02:05.717906] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 
initialization... 00:06:28.502 [2024-04-24 05:02:05.718012] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1763850 ] 00:06:28.502 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.502 [2024-04-24 05:02:05.749158] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:28.761 [2024-04-24 05:02:05.779125] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.761 [2024-04-24 05:02:05.871796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.761 [2024-04-24 05:02:05.871904] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:28.761 [2024-04-24 05:02:05.871922] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:28.761 [2024-04-24 05:02:05.871934] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:28.761 05:02:05 -- common/autotest_common.sh@641 -- # es=234 00:06:28.761 05:02:05 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:28.761 05:02:05 -- common/autotest_common.sh@650 -- # es=106 00:06:28.761 05:02:05 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:28.761 05:02:05 -- common/autotest_common.sh@658 -- # es=1 00:06:28.761 05:02:05 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:28.761 05:02:05 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:28.761 05:02:05 -- rpc/skip_rpc.sh@70 -- # killprocess 1763840 00:06:28.761 05:02:05 -- common/autotest_common.sh@936 -- # '[' -z 1763840 ']' 00:06:28.761 05:02:05 -- common/autotest_common.sh@940 -- # kill -0 1763840 00:06:28.761 05:02:05 -- common/autotest_common.sh@941 -- # uname 00:06:28.761 05:02:05 -- common/autotest_common.sh@941 -- # '[' 
Linux = Linux ']' 00:06:28.761 05:02:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1763840 00:06:28.761 05:02:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:28.761 05:02:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:28.761 05:02:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1763840' 00:06:28.761 killing process with pid 1763840 00:06:28.761 05:02:05 -- common/autotest_common.sh@955 -- # kill 1763840 00:06:28.761 05:02:05 -- common/autotest_common.sh@960 -- # wait 1763840 00:06:29.329 00:06:29.329 real 0m1.161s 00:06:29.329 user 0m1.252s 00:06:29.329 sys 0m0.450s 00:06:29.329 05:02:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:29.329 05:02:06 -- common/autotest_common.sh@10 -- # set +x 00:06:29.329 ************************************ 00:06:29.329 END TEST exit_on_failed_rpc_init 00:06:29.329 ************************************ 00:06:29.329 05:02:06 -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:29.329 00:06:29.329 real 0m13.634s 00:06:29.329 user 0m12.603s 00:06:29.329 sys 0m1.809s 00:06:29.329 05:02:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:29.329 05:02:06 -- common/autotest_common.sh@10 -- # set +x 00:06:29.329 ************************************ 00:06:29.329 END TEST skip_rpc 00:06:29.329 ************************************ 00:06:29.329 05:02:06 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:29.329 05:02:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:29.329 05:02:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.329 05:02:06 -- common/autotest_common.sh@10 -- # set +x 00:06:29.329 ************************************ 00:06:29.329 START TEST rpc_client 00:06:29.329 ************************************ 00:06:29.329 05:02:06 -- 
common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:29.329 * Looking for test storage... 00:06:29.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:29.329 05:02:06 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:29.596 OK 00:06:29.596 05:02:06 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:29.596 00:06:29.596 real 0m0.068s 00:06:29.596 user 0m0.028s 00:06:29.596 sys 0m0.045s 00:06:29.596 05:02:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:29.596 05:02:06 -- common/autotest_common.sh@10 -- # set +x 00:06:29.596 ************************************ 00:06:29.596 END TEST rpc_client 00:06:29.596 ************************************ 00:06:29.596 05:02:06 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:29.596 05:02:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:29.596 05:02:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.596 05:02:06 -- common/autotest_common.sh@10 -- # set +x 00:06:29.596 ************************************ 00:06:29.596 START TEST json_config 00:06:29.596 ************************************ 00:06:29.596 05:02:06 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:29.596 05:02:06 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:29.596 05:02:06 -- nvmf/common.sh@7 -- # uname -s 00:06:29.596 05:02:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:29.596 05:02:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:29.596 05:02:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:29.596 05:02:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:29.596 
05:02:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:29.596 05:02:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:29.596 05:02:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:29.596 05:02:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:29.596 05:02:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:29.596 05:02:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:29.596 05:02:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:29.596 05:02:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:29.596 05:02:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:29.596 05:02:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:29.596 05:02:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:29.596 05:02:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:29.596 05:02:06 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:29.596 05:02:06 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:29.596 05:02:06 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:29.596 05:02:06 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:29.596 05:02:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.596 05:02:06 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.596 05:02:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.596 05:02:06 -- paths/export.sh@5 -- # export PATH 00:06:29.596 05:02:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.596 05:02:06 -- nvmf/common.sh@47 -- # : 0 00:06:29.596 05:02:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:29.596 05:02:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:29.596 05:02:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:29.596 05:02:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:29.596 05:02:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:29.596 05:02:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:29.596 05:02:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:29.596 05:02:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:29.596 
05:02:06 -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:29.596 05:02:06 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:29.596 05:02:06 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:29.596 05:02:06 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:29.596 05:02:06 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:29.596 05:02:06 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:29.596 05:02:06 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:29.596 05:02:06 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:29.596 05:02:06 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:29.596 05:02:06 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:29.596 05:02:06 -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:29.596 05:02:06 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:29.596 05:02:06 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:29.596 05:02:06 -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:29.596 05:02:06 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:29.596 05:02:06 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:06:29.596 INFO: JSON configuration test init 00:06:29.596 05:02:06 -- json_config/json_config.sh@357 -- # json_config_test_init 00:06:29.596 05:02:06 -- json_config/json_config.sh@262 -- # 
timing_enter json_config_test_init 00:06:29.596 05:02:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:29.596 05:02:06 -- common/autotest_common.sh@10 -- # set +x 00:06:29.596 05:02:06 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:06:29.596 05:02:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:29.596 05:02:06 -- common/autotest_common.sh@10 -- # set +x 00:06:29.596 05:02:06 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:06:29.596 05:02:06 -- json_config/common.sh@9 -- # local app=target 00:06:29.596 05:02:06 -- json_config/common.sh@10 -- # shift 00:06:29.596 05:02:06 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:29.596 05:02:06 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:29.596 05:02:06 -- json_config/common.sh@15 -- # local app_extra_params= 00:06:29.596 05:02:06 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:29.596 05:02:06 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:29.596 05:02:06 -- json_config/common.sh@22 -- # app_pid["$app"]=1764106 00:06:29.596 05:02:06 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:29.596 05:02:06 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:29.597 Waiting for target to run... 00:06:29.597 05:02:06 -- json_config/common.sh@25 -- # waitforlisten 1764106 /var/tmp/spdk_tgt.sock 00:06:29.597 05:02:06 -- common/autotest_common.sh@817 -- # '[' -z 1764106 ']' 00:06:29.597 05:02:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:29.597 05:02:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:29.597 05:02:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:06:29.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:29.597 05:02:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:29.597 05:02:06 -- common/autotest_common.sh@10 -- # set +x 00:06:29.597 [2024-04-24 05:02:06.819421] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:06:29.597 [2024-04-24 05:02:06.819501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1764106 ] 00:06:29.597 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.165 [2024-04-24 05:02:07.141409] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:30.165 [2024-04-24 05:02:07.174512] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.165 [2024-04-24 05:02:07.235785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.731 05:02:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:30.731 05:02:07 -- common/autotest_common.sh@850 -- # return 0 00:06:30.731 05:02:07 -- json_config/common.sh@26 -- # echo '' 00:06:30.731 00:06:30.731 05:02:07 -- json_config/json_config.sh@269 -- # create_accel_config 00:06:30.731 05:02:07 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:06:30.731 05:02:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:30.731 05:02:07 -- common/autotest_common.sh@10 -- # set +x 00:06:30.731 05:02:07 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:06:30.731 05:02:07 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:06:30.731 05:02:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:30.731 05:02:07 -- common/autotest_common.sh@10 -- # set +x 00:06:30.731 05:02:07 -- 
json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:30.731 05:02:07 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:06:30.731 05:02:07 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:34.015 05:02:10 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:06:34.015 05:02:10 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:34.015 05:02:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:34.015 05:02:10 -- common/autotest_common.sh@10 -- # set +x 00:06:34.015 05:02:10 -- json_config/json_config.sh@45 -- # local ret=0 00:06:34.015 05:02:10 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:34.015 05:02:10 -- json_config/json_config.sh@46 -- # local enabled_types 00:06:34.015 05:02:10 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:34.015 05:02:10 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:34.015 05:02:10 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:34.015 05:02:11 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:34.015 05:02:11 -- json_config/json_config.sh@48 -- # local get_types 00:06:34.015 05:02:11 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:34.015 05:02:11 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:06:34.015 05:02:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:34.015 05:02:11 -- common/autotest_common.sh@10 -- # set +x 00:06:34.015 05:02:11 -- json_config/json_config.sh@55 -- # return 0 00:06:34.015 05:02:11 -- json_config/json_config.sh@278 -- # [[ 0 
-eq 1 ]] 00:06:34.015 05:02:11 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:34.015 05:02:11 -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:34.015 05:02:11 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:06:34.015 05:02:11 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:06:34.015 05:02:11 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:06:34.015 05:02:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:34.015 05:02:11 -- common/autotest_common.sh@10 -- # set +x 00:06:34.015 05:02:11 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:34.015 05:02:11 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:06:34.015 05:02:11 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:06:34.015 05:02:11 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:34.015 05:02:11 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:34.273 MallocForNvmf0 00:06:34.273 05:02:11 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:34.273 05:02:11 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:34.530 MallocForNvmf1 00:06:34.530 05:02:11 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:34.530 05:02:11 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:34.788 [2024-04-24 05:02:11.925323] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:34.788 05:02:11 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:34.788 05:02:11 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:35.046 05:02:12 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:35.046 05:02:12 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:35.306 05:02:12 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:35.306 05:02:12 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:35.564 05:02:12 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:35.564 05:02:12 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:35.821 [2024-04-24 05:02:12.872404] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:35.821 05:02:12 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:06:35.821 05:02:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:35.821 05:02:12 -- common/autotest_common.sh@10 -- # set +x 00:06:35.821 05:02:12 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:35.821 05:02:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:35.821 05:02:12 -- common/autotest_common.sh@10 -- # set +x 00:06:35.821 05:02:12 -- 
json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:35.821 05:02:12 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:35.821 05:02:12 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:36.079 MallocBdevForConfigChangeCheck 00:06:36.079 05:02:13 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:36.079 05:02:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:36.079 05:02:13 -- common/autotest_common.sh@10 -- # set +x 00:06:36.079 05:02:13 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:36.079 05:02:13 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:36.336 05:02:13 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:06:36.337 INFO: shutting down applications... 
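The RPC sequence above (two malloc bdevs, a TCP transport, an NVMe-oF subsystem with two namespaces and a listener) is what `save_config` serializes into `spdk_tgt_config.json`. The fragment below is a hedged sketch of that document's shape — the method/params layout follows the usual SPDK JSON config format, but the exact fields and values here are illustrative, not copied from this run:

```json
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "MallocForNvmf0", "num_blocks": 16384, "block_size": 512 }
        }
      ]
    },
    {
      "subsystem": "nvmf",
      "config": [
        { "method": "nvmf_create_transport",
          "params": { "trtype": "TCP", "io_unit_size": 8192 } },
        { "method": "nvmf_create_subsystem",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "allow_any_host": true,
                      "serial_number": "SPDK00000000000001" } },
        { "method": "nvmf_subsystem_add_ns",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "namespace": { "bdev_name": "MallocForNvmf0" } } },
        { "method": "nvmf_subsystem_add_listener",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "listen_address": { "trtype": "TCP",
                                          "traddr": "127.0.0.1",
                                          "trsvcid": "4420" } } }
      ]
    }
  ]
}
```

On relaunch, `spdk_tgt --json` replays each `method`/`params` pair in order, which is why the later "Checking if target configuration is the same" step can round-trip the file through `save_config` and diff it.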
00:06:36.337 05:02:13 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:36.337 05:02:13 -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:36.337 05:02:13 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:36.337 05:02:13 -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:38.232 Calling clear_iscsi_subsystem 00:06:38.232 Calling clear_nvmf_subsystem 00:06:38.232 Calling clear_nbd_subsystem 00:06:38.232 Calling clear_ublk_subsystem 00:06:38.232 Calling clear_vhost_blk_subsystem 00:06:38.232 Calling clear_vhost_scsi_subsystem 00:06:38.232 Calling clear_bdev_subsystem 00:06:38.232 05:02:15 -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:38.232 05:02:15 -- json_config/json_config.sh@343 -- # count=100 00:06:38.232 05:02:15 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:38.232 05:02:15 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:38.232 05:02:15 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:38.232 05:02:15 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:38.490 05:02:15 -- json_config/json_config.sh@345 -- # break 00:06:38.490 05:02:15 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:38.490 05:02:15 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:38.490 05:02:15 -- json_config/common.sh@31 -- # local app=target 00:06:38.490 05:02:15 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:38.490 05:02:15 -- json_config/common.sh@35 -- # [[ -n 1764106 ]] 
00:06:38.490 05:02:15 -- json_config/common.sh@38 -- # kill -SIGINT 1764106 00:06:38.490 05:02:15 -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:38.490 05:02:15 -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:38.490 05:02:15 -- json_config/common.sh@41 -- # kill -0 1764106 00:06:38.490 05:02:15 -- json_config/common.sh@45 -- # sleep 0.5 00:06:39.056 05:02:16 -- json_config/common.sh@40 -- # (( i++ )) 00:06:39.056 05:02:16 -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:39.056 05:02:16 -- json_config/common.sh@41 -- # kill -0 1764106 00:06:39.056 05:02:16 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:39.057 05:02:16 -- json_config/common.sh@43 -- # break 00:06:39.057 05:02:16 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:39.057 05:02:16 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:39.057 SPDK target shutdown done 00:06:39.057 05:02:16 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:06:39.057 INFO: relaunching applications... 
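The shutdown above follows the pattern in `json_config/common.sh`: send `SIGINT` to the target pid, then poll with `kill -0` up to 30 times at 0.5 s intervals until the process is gone. A minimal self-contained Python sketch of the same polling logic (the `sleep 60` child stands in for `spdk_tgt`; names and iteration counts mirror the script but are otherwise assumptions):

```python
import signal
import subprocess
import time


def shutdown(proc, max_iters=30, interval=0.5):
    """Send SIGINT, then poll (the kill -0 analogue) until the
    process exits or the retry budget is exhausted."""
    proc.send_signal(signal.SIGINT)
    for _ in range(max_iters):
        if proc.poll() is not None:   # process has exited and been reaped
            return True
        time.sleep(interval)
    return False                      # still alive -> shutdown failed


# Stand-in for the SPDK target process:
target = subprocess.Popen(["sleep", "60"])
print("clean shutdown:", shutdown(target))
```

The retry loop matters because `SIGINT` triggers an orderly SPDK app teardown (subsystem fini, reactor exit) rather than an immediate kill, so the process may linger briefly before exiting.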
00:06:39.057 05:02:16 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:39.057 05:02:16 -- json_config/common.sh@9 -- # local app=target 00:06:39.057 05:02:16 -- json_config/common.sh@10 -- # shift 00:06:39.057 05:02:16 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:39.057 05:02:16 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:39.057 05:02:16 -- json_config/common.sh@15 -- # local app_extra_params= 00:06:39.057 05:02:16 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:39.057 05:02:16 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:39.057 05:02:16 -- json_config/common.sh@22 -- # app_pid["$app"]=1765336 00:06:39.057 05:02:16 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:39.057 05:02:16 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:39.057 Waiting for target to run... 00:06:39.057 05:02:16 -- json_config/common.sh@25 -- # waitforlisten 1765336 /var/tmp/spdk_tgt.sock 00:06:39.057 05:02:16 -- common/autotest_common.sh@817 -- # '[' -z 1765336 ']' 00:06:39.057 05:02:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:39.057 05:02:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:39.057 05:02:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:39.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:39.057 05:02:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:39.057 05:02:16 -- common/autotest_common.sh@10 -- # set +x 00:06:39.057 [2024-04-24 05:02:16.159773] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 
00:06:39.057 [2024-04-24 05:02:16.159871] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1765336 ] 00:06:39.057 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.623 [2024-04-24 05:02:16.645754] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:39.623 [2024-04-24 05:02:16.679230] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.623 [2024-04-24 05:02:16.759037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.915 [2024-04-24 05:02:19.782477] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:42.916 [2024-04-24 05:02:19.814925] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:42.916 05:02:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:42.916 05:02:19 -- common/autotest_common.sh@850 -- # return 0 00:06:42.916 05:02:19 -- json_config/common.sh@26 -- # echo '' 00:06:42.916 00:06:42.916 05:02:19 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:42.916 05:02:19 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:42.916 INFO: Checking if target configuration is the same... 
00:06:42.916 05:02:19 -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:42.916 05:02:19 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:42.916 05:02:19 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:42.916 + '[' 2 -ne 2 ']' 00:06:42.916 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:42.916 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:42.916 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:42.916 +++ basename /dev/fd/62 00:06:42.916 ++ mktemp /tmp/62.XXX 00:06:42.916 + tmp_file_1=/tmp/62.pgv 00:06:42.916 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:42.916 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:42.916 + tmp_file_2=/tmp/spdk_tgt_config.json.Gdh 00:06:42.916 + ret=0 00:06:42.916 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:43.178 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:43.178 + diff -u /tmp/62.pgv /tmp/spdk_tgt_config.json.Gdh 00:06:43.178 + echo 'INFO: JSON config files are the same' 00:06:43.179 INFO: JSON config files are the same 00:06:43.179 + rm /tmp/62.pgv /tmp/spdk_tgt_config.json.Gdh 00:06:43.179 + exit 0 00:06:43.179 05:02:20 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:43.179 05:02:20 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:43.179 INFO: changing configuration and checking if this can be detected... 
00:06:43.179 05:02:20 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:43.179 05:02:20 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:43.447 05:02:20 -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:43.447 05:02:20 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:43.447 05:02:20 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:43.447 + '[' 2 -ne 2 ']' 00:06:43.447 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:43.447 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:43.447 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:43.447 +++ basename /dev/fd/62 00:06:43.447 ++ mktemp /tmp/62.XXX 00:06:43.447 + tmp_file_1=/tmp/62.Bux 00:06:43.447 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:43.447 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:43.447 + tmp_file_2=/tmp/spdk_tgt_config.json.6Vg 00:06:43.447 + ret=0 00:06:43.447 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:43.705 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:43.963 + diff -u /tmp/62.Bux /tmp/spdk_tgt_config.json.6Vg 00:06:43.963 + ret=1 00:06:43.963 + echo '=== Start of file: /tmp/62.Bux ===' 00:06:43.963 + cat /tmp/62.Bux 00:06:43.963 + echo '=== End of file: /tmp/62.Bux ===' 00:06:43.963 + echo '' 00:06:43.963 + echo '=== Start of file: /tmp/spdk_tgt_config.json.6Vg ===' 00:06:43.963 + cat /tmp/spdk_tgt_config.json.6Vg 00:06:43.963 + echo '=== End of file: /tmp/spdk_tgt_config.json.6Vg ===' 00:06:43.963 + echo '' 00:06:43.963 + rm /tmp/62.Bux /tmp/spdk_tgt_config.json.6Vg 00:06:43.963 + exit 1 00:06:43.963 05:02:21 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:43.963 INFO: configuration change detected. 
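The `json_diff.sh` flow above pipes both configs through `config_filter.py -method sort` before `diff -u`, so that two semantically identical configs with different key or entry ordering compare equal, while a real change (here, deleting `MallocBdevForConfigChangeCheck`) yields `ret=1`. A small Python sketch of that normalize-then-compare idea — this is not SPDK's actual `config_filter.py`, just an illustration of the technique, and the embedded configs are hypothetical:

```python
import json


def normalize(node):
    """Recursively sort dict keys and list entries so ordering
    differences do not show up as config changes."""
    if isinstance(node, dict):
        return {k: normalize(v) for k, v in sorted(node.items())}
    if isinstance(node, list):
        return sorted((normalize(v) for v in node),
                      key=lambda v: json.dumps(v, sort_keys=True))
    return node


def configs_match(a, b):
    return json.dumps(normalize(a), sort_keys=True) == \
           json.dumps(normalize(b), sort_keys=True)


saved = {"subsystems": [{"subsystem": "bdev", "config": [
    {"method": "bdev_malloc_create",
     "params": {"name": "MallocForNvmf0", "num_blocks": 16384}}]}]}

# Same content, different ordering -> treated as identical:
reordered = {"subsystems": [{"config": [
    {"params": {"num_blocks": 16384, "name": "MallocForNvmf0"},
     "method": "bdev_malloc_create"}], "subsystem": "bdev"}]}

# A bdev removed -> a real configuration change:
changed = {"subsystems": [{"subsystem": "bdev", "config": []}]}

print(configs_match(saved, reordered))  # True
print(configs_match(saved, changed))    # False
```

This is why the first comparison in the log prints "INFO: JSON config files are the same" and the second, after `bdev_malloc_delete`, prints "INFO: configuration change detected."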
00:06:43.963 05:02:21 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:43.963 05:02:21 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:43.963 05:02:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:43.963 05:02:21 -- common/autotest_common.sh@10 -- # set +x 00:06:43.963 05:02:21 -- json_config/json_config.sh@307 -- # local ret=0 00:06:43.963 05:02:21 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:43.963 05:02:21 -- json_config/json_config.sh@317 -- # [[ -n 1765336 ]] 00:06:43.963 05:02:21 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:43.963 05:02:21 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:43.963 05:02:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:43.963 05:02:21 -- common/autotest_common.sh@10 -- # set +x 00:06:43.963 05:02:21 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:43.963 05:02:21 -- json_config/json_config.sh@193 -- # uname -s 00:06:43.963 05:02:21 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:43.963 05:02:21 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:43.963 05:02:21 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:43.963 05:02:21 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:43.963 05:02:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:43.963 05:02:21 -- common/autotest_common.sh@10 -- # set +x 00:06:43.963 05:02:21 -- json_config/json_config.sh@323 -- # killprocess 1765336 00:06:43.963 05:02:21 -- common/autotest_common.sh@936 -- # '[' -z 1765336 ']' 00:06:43.963 05:02:21 -- common/autotest_common.sh@940 -- # kill -0 1765336 00:06:43.963 05:02:21 -- common/autotest_common.sh@941 -- # uname 00:06:43.963 05:02:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:43.963 05:02:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1765336 00:06:43.963 
05:02:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:43.963 05:02:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:43.963 05:02:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1765336' 00:06:43.963 killing process with pid 1765336 00:06:43.963 05:02:21 -- common/autotest_common.sh@955 -- # kill 1765336 00:06:43.963 05:02:21 -- common/autotest_common.sh@960 -- # wait 1765336 00:06:45.863 05:02:22 -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:45.863 05:02:22 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:45.863 05:02:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:45.863 05:02:22 -- common/autotest_common.sh@10 -- # set +x 00:06:45.863 05:02:22 -- json_config/json_config.sh@328 -- # return 0 00:06:45.863 05:02:22 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:45.863 INFO: Success 00:06:45.863 00:06:45.863 real 0m15.990s 00:06:45.863 user 0m17.782s 00:06:45.863 sys 0m2.000s 00:06:45.863 05:02:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:45.863 05:02:22 -- common/autotest_common.sh@10 -- # set +x 00:06:45.863 ************************************ 00:06:45.863 END TEST json_config 00:06:45.863 ************************************ 00:06:45.864 05:02:22 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:45.864 05:02:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:45.864 05:02:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:45.864 05:02:22 -- common/autotest_common.sh@10 -- # set +x 00:06:45.864 ************************************ 00:06:45.864 START TEST json_config_extra_key 00:06:45.864 ************************************ 00:06:45.864 
05:02:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:45.864 05:02:22 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:45.864 05:02:22 -- nvmf/common.sh@7 -- # uname -s 00:06:45.864 05:02:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:45.864 05:02:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:45.864 05:02:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:45.864 05:02:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:45.864 05:02:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:45.864 05:02:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:45.864 05:02:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:45.864 05:02:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:45.864 05:02:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:45.864 05:02:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:45.864 05:02:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:45.864 05:02:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:45.864 05:02:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:45.864 05:02:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:45.864 05:02:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:45.864 05:02:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:45.864 05:02:22 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:45.864 05:02:22 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:45.864 05:02:22 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:45.864 05:02:22 -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:06:45.864 05:02:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.864 05:02:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.864 05:02:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.864 05:02:22 -- paths/export.sh@5 -- # export PATH 00:06:45.864 05:02:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.864 05:02:22 -- nvmf/common.sh@47 -- # : 0 00:06:45.864 05:02:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:06:45.864 05:02:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:45.864 05:02:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:45.864 05:02:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:45.864 05:02:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:45.864 05:02:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:45.864 05:02:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:45.864 05:02:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:45.864 05:02:22 -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:45.864 05:02:22 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:45.864 05:02:22 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:45.864 05:02:22 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:45.864 05:02:22 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:45.864 05:02:22 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:45.864 05:02:22 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:45.864 05:02:22 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:45.864 05:02:22 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:45.864 05:02:22 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:45.864 05:02:22 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:45.864 INFO: launching applications... 
00:06:45.864 05:02:22 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:45.864 05:02:22 -- json_config/common.sh@9 -- # local app=target 00:06:45.864 05:02:22 -- json_config/common.sh@10 -- # shift 00:06:45.864 05:02:22 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:45.864 05:02:22 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:45.864 05:02:22 -- json_config/common.sh@15 -- # local app_extra_params= 00:06:45.864 05:02:22 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:45.864 05:02:22 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:45.864 05:02:22 -- json_config/common.sh@22 -- # app_pid["$app"]=1766265 00:06:45.864 05:02:22 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:45.864 05:02:22 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:45.864 Waiting for target to run... 00:06:45.864 05:02:22 -- json_config/common.sh@25 -- # waitforlisten 1766265 /var/tmp/spdk_tgt.sock 00:06:45.864 05:02:22 -- common/autotest_common.sh@817 -- # '[' -z 1766265 ']' 00:06:45.864 05:02:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:45.864 05:02:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:45.864 05:02:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:45.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:06:45.864 05:02:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:45.864 05:02:22 -- common/autotest_common.sh@10 -- # set +x 00:06:45.864 [2024-04-24 05:02:22.928112] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:06:45.864 [2024-04-24 05:02:22.928211] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1766265 ] 00:06:45.864 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.431 [2024-04-24 05:02:23.398022] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:46.431 [2024-04-24 05:02:23.431350] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.431 [2024-04-24 05:02:23.511178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.688 05:02:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:46.688 05:02:23 -- common/autotest_common.sh@850 -- # return 0 00:06:46.689 05:02:23 -- json_config/common.sh@26 -- # echo '' 00:06:46.689 00:06:46.689 05:02:23 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:46.689 INFO: shutting down applications... 
00:06:46.689 05:02:23 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:46.689 05:02:23 -- json_config/common.sh@31 -- # local app=target 00:06:46.689 05:02:23 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:46.689 05:02:23 -- json_config/common.sh@35 -- # [[ -n 1766265 ]] 00:06:46.689 05:02:23 -- json_config/common.sh@38 -- # kill -SIGINT 1766265 00:06:46.689 05:02:23 -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:46.689 05:02:23 -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:46.689 05:02:23 -- json_config/common.sh@41 -- # kill -0 1766265 00:06:46.689 05:02:23 -- json_config/common.sh@45 -- # sleep 0.5 00:06:47.255 05:02:24 -- json_config/common.sh@40 -- # (( i++ )) 00:06:47.255 05:02:24 -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:47.255 05:02:24 -- json_config/common.sh@41 -- # kill -0 1766265 00:06:47.255 05:02:24 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:47.255 05:02:24 -- json_config/common.sh@43 -- # break 00:06:47.255 05:02:24 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:47.255 05:02:24 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:47.255 SPDK target shutdown done 00:06:47.255 05:02:24 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:47.255 Success 00:06:47.255 00:06:47.255 real 0m1.540s 00:06:47.255 user 0m1.314s 00:06:47.255 sys 0m0.601s 00:06:47.255 05:02:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:47.255 05:02:24 -- common/autotest_common.sh@10 -- # set +x 00:06:47.255 ************************************ 00:06:47.255 END TEST json_config_extra_key 00:06:47.255 ************************************ 00:06:47.255 05:02:24 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:47.255 05:02:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:47.255 05:02:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:06:47.255 05:02:24 -- common/autotest_common.sh@10 -- # set +x 00:06:47.255 ************************************ 00:06:47.255 START TEST alias_rpc 00:06:47.255 ************************************ 00:06:47.255 05:02:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:47.514 * Looking for test storage... 00:06:47.514 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:47.514 05:02:24 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:47.514 05:02:24 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1766539 00:06:47.514 05:02:24 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:47.514 05:02:24 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1766539 00:06:47.514 05:02:24 -- common/autotest_common.sh@817 -- # '[' -z 1766539 ']' 00:06:47.514 05:02:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.514 05:02:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:47.514 05:02:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.514 05:02:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:47.514 05:02:24 -- common/autotest_common.sh@10 -- # set +x 00:06:47.514 [2024-04-24 05:02:24.583593] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 
00:06:47.514 [2024-04-24 05:02:24.583693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1766539 ] 00:06:47.514 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.514 [2024-04-24 05:02:24.613991] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:47.514 [2024-04-24 05:02:24.643928] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.514 [2024-04-24 05:02:24.731784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.772 05:02:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:47.772 05:02:24 -- common/autotest_common.sh@850 -- # return 0 00:06:47.772 05:02:24 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:48.030 05:02:25 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1766539 00:06:48.030 05:02:25 -- common/autotest_common.sh@936 -- # '[' -z 1766539 ']' 00:06:48.030 05:02:25 -- common/autotest_common.sh@940 -- # kill -0 1766539 00:06:48.030 05:02:25 -- common/autotest_common.sh@941 -- # uname 00:06:48.030 05:02:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:48.289 05:02:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1766539 00:06:48.289 05:02:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:48.289 05:02:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:48.289 05:02:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1766539' 00:06:48.289 killing process with pid 1766539 00:06:48.289 05:02:25 -- common/autotest_common.sh@955 -- # kill 1766539 00:06:48.289 05:02:25 -- common/autotest_common.sh@960 -- # wait 1766539 00:06:48.548 00:06:48.548 real 0m1.237s 00:06:48.548 
user 0m1.334s 00:06:48.548 sys 0m0.426s 00:06:48.548 05:02:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:48.548 05:02:25 -- common/autotest_common.sh@10 -- # set +x 00:06:48.548 ************************************ 00:06:48.548 END TEST alias_rpc 00:06:48.548 ************************************ 00:06:48.548 05:02:25 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:06:48.548 05:02:25 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:48.548 05:02:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:48.548 05:02:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:48.548 05:02:25 -- common/autotest_common.sh@10 -- # set +x 00:06:48.806 ************************************ 00:06:48.806 START TEST spdkcli_tcp 00:06:48.806 ************************************ 00:06:48.806 05:02:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:48.806 * Looking for test storage... 
00:06:48.806 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:48.806 05:02:25 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:48.806 05:02:25 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:48.806 05:02:25 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:48.806 05:02:25 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:48.806 05:02:25 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:48.806 05:02:25 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:48.806 05:02:25 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:48.806 05:02:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:48.806 05:02:25 -- common/autotest_common.sh@10 -- # set +x 00:06:48.806 05:02:25 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1766736 00:06:48.806 05:02:25 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:48.806 05:02:25 -- spdkcli/tcp.sh@27 -- # waitforlisten 1766736 00:06:48.806 05:02:25 -- common/autotest_common.sh@817 -- # '[' -z 1766736 ']' 00:06:48.806 05:02:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.806 05:02:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:48.806 05:02:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:48.806 05:02:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:48.806 05:02:25 -- common/autotest_common.sh@10 -- # set +x 00:06:48.806 [2024-04-24 05:02:25.944922] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:06:48.806 [2024-04-24 05:02:25.945016] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1766736 ] 00:06:48.806 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.806 [2024-04-24 05:02:25.976671] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:48.806 [2024-04-24 05:02:26.002545] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:49.064 [2024-04-24 05:02:26.087445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.064 [2024-04-24 05:02:26.087449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.322 05:02:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:49.322 05:02:26 -- common/autotest_common.sh@850 -- # return 0 00:06:49.322 05:02:26 -- spdkcli/tcp.sh@31 -- # socat_pid=1766836 00:06:49.322 05:02:26 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:49.322 05:02:26 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:49.322 [ 00:06:49.322 "bdev_malloc_delete", 00:06:49.322 "bdev_malloc_create", 00:06:49.322 "bdev_null_resize", 00:06:49.322 "bdev_null_delete", 00:06:49.322 "bdev_null_create", 00:06:49.322 "bdev_nvme_cuse_unregister", 00:06:49.322 "bdev_nvme_cuse_register", 00:06:49.322 "bdev_opal_new_user", 00:06:49.322 "bdev_opal_set_lock_state", 00:06:49.322 "bdev_opal_delete", 00:06:49.322 "bdev_opal_get_info", 
00:06:49.322 "bdev_opal_create", 00:06:49.322 "bdev_nvme_opal_revert", 00:06:49.322 "bdev_nvme_opal_init", 00:06:49.322 "bdev_nvme_send_cmd", 00:06:49.322 "bdev_nvme_get_path_iostat", 00:06:49.322 "bdev_nvme_get_mdns_discovery_info", 00:06:49.322 "bdev_nvme_stop_mdns_discovery", 00:06:49.322 "bdev_nvme_start_mdns_discovery", 00:06:49.322 "bdev_nvme_set_multipath_policy", 00:06:49.322 "bdev_nvme_set_preferred_path", 00:06:49.322 "bdev_nvme_get_io_paths", 00:06:49.322 "bdev_nvme_remove_error_injection", 00:06:49.322 "bdev_nvme_add_error_injection", 00:06:49.322 "bdev_nvme_get_discovery_info", 00:06:49.322 "bdev_nvme_stop_discovery", 00:06:49.322 "bdev_nvme_start_discovery", 00:06:49.322 "bdev_nvme_get_controller_health_info", 00:06:49.322 "bdev_nvme_disable_controller", 00:06:49.322 "bdev_nvme_enable_controller", 00:06:49.322 "bdev_nvme_reset_controller", 00:06:49.322 "bdev_nvme_get_transport_statistics", 00:06:49.322 "bdev_nvme_apply_firmware", 00:06:49.322 "bdev_nvme_detach_controller", 00:06:49.323 "bdev_nvme_get_controllers", 00:06:49.323 "bdev_nvme_attach_controller", 00:06:49.323 "bdev_nvme_set_hotplug", 00:06:49.323 "bdev_nvme_set_options", 00:06:49.323 "bdev_passthru_delete", 00:06:49.323 "bdev_passthru_create", 00:06:49.323 "bdev_lvol_grow_lvstore", 00:06:49.323 "bdev_lvol_get_lvols", 00:06:49.323 "bdev_lvol_get_lvstores", 00:06:49.323 "bdev_lvol_delete", 00:06:49.323 "bdev_lvol_set_read_only", 00:06:49.323 "bdev_lvol_resize", 00:06:49.323 "bdev_lvol_decouple_parent", 00:06:49.323 "bdev_lvol_inflate", 00:06:49.323 "bdev_lvol_rename", 00:06:49.323 "bdev_lvol_clone_bdev", 00:06:49.323 "bdev_lvol_clone", 00:06:49.323 "bdev_lvol_snapshot", 00:06:49.323 "bdev_lvol_create", 00:06:49.323 "bdev_lvol_delete_lvstore", 00:06:49.323 "bdev_lvol_rename_lvstore", 00:06:49.323 "bdev_lvol_create_lvstore", 00:06:49.323 "bdev_raid_set_options", 00:06:49.323 "bdev_raid_remove_base_bdev", 00:06:49.323 "bdev_raid_add_base_bdev", 00:06:49.323 "bdev_raid_delete", 00:06:49.323 
"bdev_raid_create", 00:06:49.323 "bdev_raid_get_bdevs", 00:06:49.323 "bdev_error_inject_error", 00:06:49.323 "bdev_error_delete", 00:06:49.323 "bdev_error_create", 00:06:49.323 "bdev_split_delete", 00:06:49.323 "bdev_split_create", 00:06:49.323 "bdev_delay_delete", 00:06:49.323 "bdev_delay_create", 00:06:49.323 "bdev_delay_update_latency", 00:06:49.323 "bdev_zone_block_delete", 00:06:49.323 "bdev_zone_block_create", 00:06:49.323 "blobfs_create", 00:06:49.323 "blobfs_detect", 00:06:49.323 "blobfs_set_cache_size", 00:06:49.323 "bdev_aio_delete", 00:06:49.323 "bdev_aio_rescan", 00:06:49.323 "bdev_aio_create", 00:06:49.323 "bdev_ftl_set_property", 00:06:49.323 "bdev_ftl_get_properties", 00:06:49.323 "bdev_ftl_get_stats", 00:06:49.323 "bdev_ftl_unmap", 00:06:49.323 "bdev_ftl_unload", 00:06:49.323 "bdev_ftl_delete", 00:06:49.323 "bdev_ftl_load", 00:06:49.323 "bdev_ftl_create", 00:06:49.323 "bdev_virtio_attach_controller", 00:06:49.323 "bdev_virtio_scsi_get_devices", 00:06:49.323 "bdev_virtio_detach_controller", 00:06:49.323 "bdev_virtio_blk_set_hotplug", 00:06:49.323 "bdev_iscsi_delete", 00:06:49.323 "bdev_iscsi_create", 00:06:49.323 "bdev_iscsi_set_options", 00:06:49.323 "accel_error_inject_error", 00:06:49.323 "ioat_scan_accel_module", 00:06:49.323 "dsa_scan_accel_module", 00:06:49.323 "iaa_scan_accel_module", 00:06:49.323 "vfu_virtio_create_scsi_endpoint", 00:06:49.323 "vfu_virtio_scsi_remove_target", 00:06:49.323 "vfu_virtio_scsi_add_target", 00:06:49.323 "vfu_virtio_create_blk_endpoint", 00:06:49.323 "vfu_virtio_delete_endpoint", 00:06:49.323 "keyring_file_remove_key", 00:06:49.323 "keyring_file_add_key", 00:06:49.323 "iscsi_get_histogram", 00:06:49.323 "iscsi_enable_histogram", 00:06:49.323 "iscsi_set_options", 00:06:49.323 "iscsi_get_auth_groups", 00:06:49.323 "iscsi_auth_group_remove_secret", 00:06:49.323 "iscsi_auth_group_add_secret", 00:06:49.323 "iscsi_delete_auth_group", 00:06:49.323 "iscsi_create_auth_group", 00:06:49.323 "iscsi_set_discovery_auth", 
00:06:49.323 "iscsi_get_options", 00:06:49.323 "iscsi_target_node_request_logout", 00:06:49.323 "iscsi_target_node_set_redirect", 00:06:49.323 "iscsi_target_node_set_auth", 00:06:49.323 "iscsi_target_node_add_lun", 00:06:49.323 "iscsi_get_stats", 00:06:49.323 "iscsi_get_connections", 00:06:49.323 "iscsi_portal_group_set_auth", 00:06:49.323 "iscsi_start_portal_group", 00:06:49.323 "iscsi_delete_portal_group", 00:06:49.323 "iscsi_create_portal_group", 00:06:49.323 "iscsi_get_portal_groups", 00:06:49.323 "iscsi_delete_target_node", 00:06:49.323 "iscsi_target_node_remove_pg_ig_maps", 00:06:49.323 "iscsi_target_node_add_pg_ig_maps", 00:06:49.323 "iscsi_create_target_node", 00:06:49.323 "iscsi_get_target_nodes", 00:06:49.323 "iscsi_delete_initiator_group", 00:06:49.323 "iscsi_initiator_group_remove_initiators", 00:06:49.323 "iscsi_initiator_group_add_initiators", 00:06:49.323 "iscsi_create_initiator_group", 00:06:49.323 "iscsi_get_initiator_groups", 00:06:49.323 "nvmf_set_crdt", 00:06:49.323 "nvmf_set_config", 00:06:49.323 "nvmf_set_max_subsystems", 00:06:49.323 "nvmf_subsystem_get_listeners", 00:06:49.323 "nvmf_subsystem_get_qpairs", 00:06:49.323 "nvmf_subsystem_get_controllers", 00:06:49.323 "nvmf_get_stats", 00:06:49.323 "nvmf_get_transports", 00:06:49.323 "nvmf_create_transport", 00:06:49.323 "nvmf_get_targets", 00:06:49.323 "nvmf_delete_target", 00:06:49.323 "nvmf_create_target", 00:06:49.323 "nvmf_subsystem_allow_any_host", 00:06:49.323 "nvmf_subsystem_remove_host", 00:06:49.323 "nvmf_subsystem_add_host", 00:06:49.323 "nvmf_ns_remove_host", 00:06:49.323 "nvmf_ns_add_host", 00:06:49.323 "nvmf_subsystem_remove_ns", 00:06:49.323 "nvmf_subsystem_add_ns", 00:06:49.323 "nvmf_subsystem_listener_set_ana_state", 00:06:49.323 "nvmf_discovery_get_referrals", 00:06:49.323 "nvmf_discovery_remove_referral", 00:06:49.323 "nvmf_discovery_add_referral", 00:06:49.323 "nvmf_subsystem_remove_listener", 00:06:49.323 "nvmf_subsystem_add_listener", 00:06:49.323 "nvmf_delete_subsystem", 
00:06:49.323 "nvmf_create_subsystem", 00:06:49.323 "nvmf_get_subsystems", 00:06:49.323 "env_dpdk_get_mem_stats", 00:06:49.323 "nbd_get_disks", 00:06:49.323 "nbd_stop_disk", 00:06:49.323 "nbd_start_disk", 00:06:49.323 "ublk_recover_disk", 00:06:49.323 "ublk_get_disks", 00:06:49.323 "ublk_stop_disk", 00:06:49.323 "ublk_start_disk", 00:06:49.323 "ublk_destroy_target", 00:06:49.323 "ublk_create_target", 00:06:49.323 "virtio_blk_create_transport", 00:06:49.323 "virtio_blk_get_transports", 00:06:49.323 "vhost_controller_set_coalescing", 00:06:49.323 "vhost_get_controllers", 00:06:49.323 "vhost_delete_controller", 00:06:49.323 "vhost_create_blk_controller", 00:06:49.323 "vhost_scsi_controller_remove_target", 00:06:49.323 "vhost_scsi_controller_add_target", 00:06:49.323 "vhost_start_scsi_controller", 00:06:49.323 "vhost_create_scsi_controller", 00:06:49.323 "thread_set_cpumask", 00:06:49.323 "framework_get_scheduler", 00:06:49.323 "framework_set_scheduler", 00:06:49.323 "framework_get_reactors", 00:06:49.323 "thread_get_io_channels", 00:06:49.323 "thread_get_pollers", 00:06:49.323 "thread_get_stats", 00:06:49.323 "framework_monitor_context_switch", 00:06:49.323 "spdk_kill_instance", 00:06:49.323 "log_enable_timestamps", 00:06:49.323 "log_get_flags", 00:06:49.323 "log_clear_flag", 00:06:49.323 "log_set_flag", 00:06:49.323 "log_get_level", 00:06:49.323 "log_set_level", 00:06:49.323 "log_get_print_level", 00:06:49.323 "log_set_print_level", 00:06:49.323 "framework_enable_cpumask_locks", 00:06:49.323 "framework_disable_cpumask_locks", 00:06:49.323 "framework_wait_init", 00:06:49.323 "framework_start_init", 00:06:49.323 "scsi_get_devices", 00:06:49.323 "bdev_get_histogram", 00:06:49.323 "bdev_enable_histogram", 00:06:49.323 "bdev_set_qos_limit", 00:06:49.323 "bdev_set_qd_sampling_period", 00:06:49.323 "bdev_get_bdevs", 00:06:49.323 "bdev_reset_iostat", 00:06:49.323 "bdev_get_iostat", 00:06:49.323 "bdev_examine", 00:06:49.323 "bdev_wait_for_examine", 00:06:49.323 
"bdev_set_options", 00:06:49.323 "notify_get_notifications", 00:06:49.323 "notify_get_types", 00:06:49.323 "accel_get_stats", 00:06:49.323 "accel_set_options", 00:06:49.323 "accel_set_driver", 00:06:49.323 "accel_crypto_key_destroy", 00:06:49.323 "accel_crypto_keys_get", 00:06:49.323 "accel_crypto_key_create", 00:06:49.324 "accel_assign_opc", 00:06:49.324 "accel_get_module_info", 00:06:49.324 "accel_get_opc_assignments", 00:06:49.324 "vmd_rescan", 00:06:49.324 "vmd_remove_device", 00:06:49.324 "vmd_enable", 00:06:49.324 "sock_set_default_impl", 00:06:49.324 "sock_impl_set_options", 00:06:49.324 "sock_impl_get_options", 00:06:49.324 "iobuf_get_stats", 00:06:49.324 "iobuf_set_options", 00:06:49.324 "keyring_get_keys", 00:06:49.324 "framework_get_pci_devices", 00:06:49.324 "framework_get_config", 00:06:49.324 "framework_get_subsystems", 00:06:49.324 "vfu_tgt_set_base_path", 00:06:49.324 "trace_get_info", 00:06:49.324 "trace_get_tpoint_group_mask", 00:06:49.324 "trace_disable_tpoint_group", 00:06:49.324 "trace_enable_tpoint_group", 00:06:49.324 "trace_clear_tpoint_mask", 00:06:49.324 "trace_set_tpoint_mask", 00:06:49.324 "spdk_get_version", 00:06:49.324 "rpc_get_methods" 00:06:49.324 ] 00:06:49.324 05:02:26 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:49.324 05:02:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:49.324 05:02:26 -- common/autotest_common.sh@10 -- # set +x 00:06:49.582 05:02:26 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:49.582 05:02:26 -- spdkcli/tcp.sh@38 -- # killprocess 1766736 00:06:49.582 05:02:26 -- common/autotest_common.sh@936 -- # '[' -z 1766736 ']' 00:06:49.582 05:02:26 -- common/autotest_common.sh@940 -- # kill -0 1766736 00:06:49.582 05:02:26 -- common/autotest_common.sh@941 -- # uname 00:06:49.582 05:02:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:49.582 05:02:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1766736 00:06:49.582 05:02:26 -- 
common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:49.582 05:02:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:49.582 05:02:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1766736' 00:06:49.582 killing process with pid 1766736 00:06:49.582 05:02:26 -- common/autotest_common.sh@955 -- # kill 1766736 00:06:49.582 05:02:26 -- common/autotest_common.sh@960 -- # wait 1766736 00:06:49.840 00:06:49.840 real 0m1.188s 00:06:49.840 user 0m2.105s 00:06:49.840 sys 0m0.437s 00:06:49.840 05:02:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:49.840 05:02:27 -- common/autotest_common.sh@10 -- # set +x 00:06:49.840 ************************************ 00:06:49.840 END TEST spdkcli_tcp 00:06:49.840 ************************************ 00:06:49.840 05:02:27 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:49.840 05:02:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:49.840 05:02:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:49.840 05:02:27 -- common/autotest_common.sh@10 -- # set +x 00:06:50.098 ************************************ 00:06:50.098 START TEST dpdk_mem_utility 00:06:50.098 ************************************ 00:06:50.098 05:02:27 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:50.098 * Looking for test storage... 
00:06:50.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:50.098 05:02:27 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:50.098 05:02:27 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1766949 00:06:50.098 05:02:27 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:50.099 05:02:27 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1766949 00:06:50.099 05:02:27 -- common/autotest_common.sh@817 -- # '[' -z 1766949 ']' 00:06:50.099 05:02:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.099 05:02:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:50.099 05:02:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.099 05:02:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:50.099 05:02:27 -- common/autotest_common.sh@10 -- # set +x 00:06:50.099 [2024-04-24 05:02:27.256543] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:06:50.099 [2024-04-24 05:02:27.256658] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1766949 ] 00:06:50.099 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.099 [2024-04-24 05:02:27.287360] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:50.099 [2024-04-24 05:02:27.317421] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.356 [2024-04-24 05:02:27.407417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.615 05:02:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:50.615 05:02:27 -- common/autotest_common.sh@850 -- # return 0 00:06:50.615 05:02:27 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:50.615 05:02:27 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:50.615 05:02:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:50.615 05:02:27 -- common/autotest_common.sh@10 -- # set +x 00:06:50.615 { 00:06:50.615 "filename": "/tmp/spdk_mem_dump.txt" 00:06:50.615 } 00:06:50.615 05:02:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:50.615 05:02:27 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:50.615 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:50.615 1 heaps totaling size 814.000000 MiB 00:06:50.615 size: 814.000000 MiB heap id: 0 00:06:50.615 end heaps---------- 00:06:50.615 8 mempools totaling size 598.116089 MiB 00:06:50.615 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:50.615 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:50.615 size: 84.521057 MiB name: bdev_io_1766949 00:06:50.615 size: 51.011292 MiB name: evtpool_1766949 00:06:50.615 size: 50.003479 MiB name: msgpool_1766949 00:06:50.615 size: 21.763794 MiB name: PDU_Pool 00:06:50.615 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:50.615 size: 0.026123 MiB name: Session_Pool 00:06:50.615 end mempools------- 00:06:50.615 6 memzones totaling size 4.142822 MiB 00:06:50.615 size: 1.000366 MiB name: RG_ring_0_1766949 00:06:50.615 size: 1.000366 MiB name: RG_ring_1_1766949 00:06:50.615 size: 1.000366 MiB name: RG_ring_4_1766949 00:06:50.615 size: 1.000366 MiB name: 
RG_ring_5_1766949 00:06:50.615 size: 0.125366 MiB name: RG_ring_2_1766949 00:06:50.615 size: 0.015991 MiB name: RG_ring_3_1766949 00:06:50.615 end memzones------- 00:06:50.615 05:02:27 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:50.615 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:50.615 list of free elements. size: 12.519348 MiB 00:06:50.615 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:50.615 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:50.615 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:50.615 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:50.615 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:50.615 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:50.615 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:50.615 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:50.615 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:50.615 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:50.615 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:50.615 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:50.615 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:50.615 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:50.615 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:50.615 list of standard malloc elements. 
size: 199.218079 MiB 00:06:50.615 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:50.615 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:50.615 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:50.615 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:50.615 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:50.615 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:50.615 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:50.615 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:50.615 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:50.615 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:50.615 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:50.615 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:50.615 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:50.615 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:50.615 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:50.615 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:50.615 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:50.615 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:50.615 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:50.615 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:50.615 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:50.615 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:50.615 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:50.615 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:50.615 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:50.615 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:50.615 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:50.615 element at 
address: 0x20000b27da00 with size: 0.000183 MiB 00:06:50.615 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:50.615 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:50.615 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:50.615 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:50.615 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:50.615 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:50.615 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:50.615 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:50.615 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:50.615 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:50.615 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:50.615 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:50.615 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:50.615 list of memzone associated elements. 
size: 602.262573 MiB 00:06:50.616 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:50.616 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:50.616 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:50.616 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:50.616 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:50.616 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1766949_0 00:06:50.616 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:50.616 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1766949_0 00:06:50.616 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:50.616 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1766949_0 00:06:50.616 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:50.616 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:50.616 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:50.616 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:50.616 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:50.616 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1766949 00:06:50.616 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:50.616 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1766949 00:06:50.616 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:50.616 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1766949 00:06:50.616 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:50.616 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:50.616 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:50.616 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:50.616 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:50.616 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:50.616 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:50.616 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:50.616 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:50.616 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1766949 00:06:50.616 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:50.616 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1766949 00:06:50.616 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:50.616 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1766949 00:06:50.616 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:50.616 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1766949 00:06:50.616 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:50.616 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1766949 00:06:50.616 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:50.616 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:50.616 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:50.616 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:50.616 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:50.616 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:50.616 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:50.616 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1766949 00:06:50.616 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:50.616 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:50.616 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:50.616 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:50.616 element at address: 0x200003adb5c0 with size: 0.016113 
MiB 00:06:50.616 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1766949 00:06:50.616 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:50.616 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:50.616 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:50.616 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1766949 00:06:50.616 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:50.616 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1766949 00:06:50.616 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:50.616 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:50.616 05:02:27 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:50.616 05:02:27 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1766949 00:06:50.616 05:02:27 -- common/autotest_common.sh@936 -- # '[' -z 1766949 ']' 00:06:50.616 05:02:27 -- common/autotest_common.sh@940 -- # kill -0 1766949 00:06:50.616 05:02:27 -- common/autotest_common.sh@941 -- # uname 00:06:50.616 05:02:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:50.616 05:02:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1766949 00:06:50.616 05:02:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:50.616 05:02:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:50.616 05:02:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1766949' 00:06:50.616 killing process with pid 1766949 00:06:50.616 05:02:27 -- common/autotest_common.sh@955 -- # kill 1766949 00:06:50.616 05:02:27 -- common/autotest_common.sh@960 -- # wait 1766949 00:06:51.182 00:06:51.182 real 0m1.056s 00:06:51.182 user 0m1.037s 00:06:51.182 sys 0m0.418s 00:06:51.182 05:02:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:51.182 05:02:28 -- common/autotest_common.sh@10 -- # set +x 00:06:51.182 
************************************ 00:06:51.182 END TEST dpdk_mem_utility 00:06:51.182 ************************************ 00:06:51.182 05:02:28 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:51.182 05:02:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:51.182 05:02:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:51.182 05:02:28 -- common/autotest_common.sh@10 -- # set +x 00:06:51.182 ************************************ 00:06:51.182 START TEST event 00:06:51.182 ************************************ 00:06:51.182 05:02:28 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:51.182 * Looking for test storage... 00:06:51.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:51.182 05:02:28 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:51.182 05:02:28 -- bdev/nbd_common.sh@6 -- # set -e 00:06:51.182 05:02:28 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:51.182 05:02:28 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:51.182 05:02:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:51.182 05:02:28 -- common/autotest_common.sh@10 -- # set +x 00:06:51.442 ************************************ 00:06:51.442 START TEST event_perf 00:06:51.442 ************************************ 00:06:51.442 05:02:28 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:51.442 Running I/O for 1 seconds...[2024-04-24 05:02:28.499304] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 
00:06:51.442 [2024-04-24 05:02:28.499383] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1767158 ] 00:06:51.442 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.442 [2024-04-24 05:02:28.531885] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:51.442 [2024-04-24 05:02:28.564293] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:51.442 [2024-04-24 05:02:28.658487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.442 [2024-04-24 05:02:28.658556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.442 [2024-04-24 05:02:28.658655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:51.442 [2024-04-24 05:02:28.658658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.817 Running I/O for 1 seconds... 00:06:52.817 lcore 0: 235151 00:06:52.817 lcore 1: 235151 00:06:52.817 lcore 2: 235151 00:06:52.817 lcore 3: 235152 00:06:52.817 done. 
00:06:52.817 00:06:52.817 real 0m1.251s 00:06:52.817 user 0m4.156s 00:06:52.817 sys 0m0.091s 00:06:52.817 05:02:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:52.817 05:02:29 -- common/autotest_common.sh@10 -- # set +x 00:06:52.817 ************************************ 00:06:52.817 END TEST event_perf 00:06:52.817 ************************************ 00:06:52.817 05:02:29 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:52.817 05:02:29 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:52.817 05:02:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.817 05:02:29 -- common/autotest_common.sh@10 -- # set +x 00:06:52.817 ************************************ 00:06:52.817 START TEST event_reactor 00:06:52.817 ************************************ 00:06:52.817 05:02:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:52.817 [2024-04-24 05:02:29.871656] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:06:52.817 [2024-04-24 05:02:29.871732] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1767433 ] 00:06:52.817 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.817 [2024-04-24 05:02:29.904176] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:52.817 [2024-04-24 05:02:29.935155] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.817 [2024-04-24 05:02:30.029880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.190 test_start 00:06:54.190 oneshot 00:06:54.190 tick 100 00:06:54.190 tick 100 00:06:54.190 tick 250 00:06:54.190 tick 100 00:06:54.190 tick 100 00:06:54.190 tick 100 00:06:54.190 tick 250 00:06:54.190 tick 500 00:06:54.190 tick 100 00:06:54.190 tick 100 00:06:54.190 tick 250 00:06:54.190 tick 100 00:06:54.190 tick 100 00:06:54.190 test_end 00:06:54.190 00:06:54.190 real 0m1.248s 00:06:54.190 user 0m1.159s 00:06:54.190 sys 0m0.084s 00:06:54.190 05:02:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:54.190 05:02:31 -- common/autotest_common.sh@10 -- # set +x 00:06:54.190 ************************************ 00:06:54.190 END TEST event_reactor 00:06:54.190 ************************************ 00:06:54.190 05:02:31 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:54.190 05:02:31 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:54.190 05:02:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:54.190 05:02:31 -- common/autotest_common.sh@10 -- # set +x 00:06:54.190 ************************************ 00:06:54.190 START TEST event_reactor_perf 00:06:54.190 ************************************ 00:06:54.191 05:02:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:54.191 [2024-04-24 05:02:31.240448] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 
00:06:54.191 [2024-04-24 05:02:31.240514] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1767600 ] 00:06:54.191 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.191 [2024-04-24 05:02:31.273000] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:54.191 [2024-04-24 05:02:31.302858] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.191 [2024-04-24 05:02:31.392674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.592 test_start 00:06:55.592 test_end 00:06:55.592 Performance: 352865 events per second 00:06:55.592 00:06:55.592 real 0m1.244s 00:06:55.592 user 0m1.157s 00:06:55.592 sys 0m0.082s 00:06:55.592 05:02:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:55.592 05:02:32 -- common/autotest_common.sh@10 -- # set +x 00:06:55.592 ************************************ 00:06:55.592 END TEST event_reactor_perf 00:06:55.592 ************************************ 00:06:55.592 05:02:32 -- event/event.sh@49 -- # uname -s 00:06:55.592 05:02:32 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:55.592 05:02:32 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:55.592 05:02:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:55.592 05:02:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:55.592 05:02:32 -- common/autotest_common.sh@10 -- # set +x 00:06:55.592 ************************************ 00:06:55.592 START TEST event_scheduler 00:06:55.592 ************************************ 00:06:55.592 05:02:32 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:55.592 * Looking for test storage... 00:06:55.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:55.592 05:02:32 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:55.592 05:02:32 -- scheduler/scheduler.sh@35 -- # scheduler_pid=1767793 00:06:55.592 05:02:32 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:55.592 05:02:32 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:55.592 05:02:32 -- scheduler/scheduler.sh@37 -- # waitforlisten 1767793 00:06:55.592 05:02:32 -- common/autotest_common.sh@817 -- # '[' -z 1767793 ']' 00:06:55.592 05:02:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.592 05:02:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:55.592 05:02:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.592 05:02:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:55.592 05:02:32 -- common/autotest_common.sh@10 -- # set +x 00:06:55.592 [2024-04-24 05:02:32.684236] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:06:55.592 [2024-04-24 05:02:32.684308] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1767793 ] 00:06:55.592 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.592 [2024-04-24 05:02:32.714857] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:06:55.592 [2024-04-24 05:02:32.741842] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:55.592 [2024-04-24 05:02:32.826578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.592 [2024-04-24 05:02:32.826657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.592 [2024-04-24 05:02:32.826707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:55.592 [2024-04-24 05:02:32.826710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.850 05:02:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:55.850 05:02:32 -- common/autotest_common.sh@850 -- # return 0 00:06:55.850 05:02:32 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:55.850 05:02:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:55.850 05:02:32 -- common/autotest_common.sh@10 -- # set +x 00:06:55.850 POWER: Env isn't set yet! 00:06:55.850 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:55.850 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:06:55.850 POWER: Cannot get available frequencies of lcore 0 00:06:55.850 POWER: Attempting to initialise PSTAT power management... 
00:06:55.850 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:55.850 POWER: Initialized successfully for lcore 0 power management 00:06:55.850 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:55.850 POWER: Initialized successfully for lcore 1 power management 00:06:55.850 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:55.850 POWER: Initialized successfully for lcore 2 power management 00:06:55.850 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:55.850 POWER: Initialized successfully for lcore 3 power management 00:06:55.850 05:02:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:55.850 05:02:32 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:55.850 05:02:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:55.850 05:02:32 -- common/autotest_common.sh@10 -- # set +x 00:06:55.850 [2024-04-24 05:02:33.027751] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
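The `POWER:` lines above show EAL saving each lcore's cpufreq governor, switching it to `performance`, and (at teardown, later in this log) setting it back to the original. A minimal save/set/restore sketch of that pattern follows; it is illustrative only, using a temp file in place of the real `/sys/devices/system/cpu/cpuN/cpufreq/scaling_governor` node (which requires root to write), and the function names and messages are not from the SPDK/DPDK sources.

```shell
#!/usr/bin/env bash
# Stand-in for /sys/devices/system/cpu/cpuN/cpufreq/scaling_governor;
# the real sysfs node requires root, so a temp file is used for this sketch.
gov_node=$(mktemp)
echo schedutil > "$gov_node"          # pretend the original governor is schedutil

set_governor() {
    orig_governor=$(cat "$1")         # save the original before overwriting it
    echo "$2" > "$1"
    echo "Power management governor has been set to '$2' successfully"
}

restore_governor() {
    echo "$orig_governor" > "$1"      # set back to the original, as at teardown
}

set_governor "$gov_node" performance
restore_governor "$gov_node"
```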
00:06:55.850 05:02:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:55.850 05:02:33 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:55.850 05:02:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:55.850 05:02:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:55.850 05:02:33 -- common/autotest_common.sh@10 -- # set +x 00:06:56.108 ************************************ 00:06:56.108 START TEST scheduler_create_thread 00:06:56.108 ************************************ 00:06:56.108 05:02:33 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:06:56.108 05:02:33 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:56.108 05:02:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:56.108 05:02:33 -- common/autotest_common.sh@10 -- # set +x 00:06:56.108 2 00:06:56.108 05:02:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:56.108 05:02:33 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:56.108 05:02:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:56.108 05:02:33 -- common/autotest_common.sh@10 -- # set +x 00:06:56.108 3 00:06:56.108 05:02:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:56.108 05:02:33 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:56.108 05:02:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:56.108 05:02:33 -- common/autotest_common.sh@10 -- # set +x 00:06:56.108 4 00:06:56.108 05:02:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:56.108 05:02:33 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:56.108 05:02:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:56.108 
05:02:33 -- common/autotest_common.sh@10 -- # set +x 00:06:56.108 5 00:06:56.108 05:02:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:56.108 05:02:33 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:56.108 05:02:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:56.108 05:02:33 -- common/autotest_common.sh@10 -- # set +x 00:06:56.108 6 00:06:56.108 05:02:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:56.108 05:02:33 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:56.108 05:02:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:56.108 05:02:33 -- common/autotest_common.sh@10 -- # set +x 00:06:56.108 7 00:06:56.108 05:02:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:56.108 05:02:33 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:56.108 05:02:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:56.108 05:02:33 -- common/autotest_common.sh@10 -- # set +x 00:06:56.108 8 00:06:56.108 05:02:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:56.108 05:02:33 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:56.108 05:02:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:56.108 05:02:33 -- common/autotest_common.sh@10 -- # set +x 00:06:56.108 9 00:06:56.108 05:02:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:56.108 05:02:33 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:56.108 05:02:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:56.108 05:02:33 -- common/autotest_common.sh@10 -- # set +x 00:06:56.108 10 00:06:56.108 05:02:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:06:56.108 05:02:33 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:56.108 05:02:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:56.108 05:02:33 -- common/autotest_common.sh@10 -- # set +x 00:06:56.108 05:02:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:56.108 05:02:33 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:56.108 05:02:33 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:56.108 05:02:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:56.108 05:02:33 -- common/autotest_common.sh@10 -- # set +x 00:06:56.108 05:02:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:56.108 05:02:33 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:56.108 05:02:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:56.108 05:02:33 -- common/autotest_common.sh@10 -- # set +x 00:06:57.481 05:02:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:57.481 05:02:34 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:57.481 05:02:34 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:57.481 05:02:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:57.481 05:02:34 -- common/autotest_common.sh@10 -- # set +x 00:06:58.855 05:02:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:58.855 00:06:58.855 real 0m2.618s 00:06:58.855 user 0m0.011s 00:06:58.855 sys 0m0.003s 00:06:58.855 05:02:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:58.855 05:02:35 -- common/autotest_common.sh@10 -- # set +x 00:06:58.855 ************************************ 00:06:58.855 END TEST scheduler_create_thread 00:06:58.855 ************************************ 00:06:58.855 05:02:35 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:58.855 05:02:35 -- 
scheduler/scheduler.sh@46 -- # killprocess 1767793 00:06:58.855 05:02:35 -- common/autotest_common.sh@936 -- # '[' -z 1767793 ']' 00:06:58.855 05:02:35 -- common/autotest_common.sh@940 -- # kill -0 1767793 00:06:58.855 05:02:35 -- common/autotest_common.sh@941 -- # uname 00:06:58.855 05:02:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:58.855 05:02:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1767793 00:06:58.855 05:02:35 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:58.855 05:02:35 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:58.855 05:02:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1767793' 00:06:58.855 killing process with pid 1767793 00:06:58.855 05:02:35 -- common/autotest_common.sh@955 -- # kill 1767793 00:06:58.855 05:02:35 -- common/autotest_common.sh@960 -- # wait 1767793 00:06:59.114 [2024-04-24 05:02:36.226939] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
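The `killprocess 1767793` teardown above follows a guard pattern visible in the xtrace: check the pid is non-empty, probe it with `kill -0`, read the process name with `ps --no-headers -o comm=`, refuse to signal a `sudo` wrapper, then kill and wait. A self-contained sketch of that pattern, demonstrated against a background `sleep` (the real helper in `autotest_common.sh` may differ in details):

```shell
#!/usr/bin/env bash
# Minimal sketch of the killprocess guard pattern seen in the log above.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                     # the '[' -z "$pid" ']' guard
    kill -0 "$pid" 2>/dev/null || return 1        # is the pid still alive?
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" != "sudo" ] || return 1     # never signal a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true               # reap, ignoring SIGTERM status
}

sleep 60 &
demo_pid=$!
killprocess "$demo_pid"
```

The final `wait` matters: it reaps the child so the pid cannot linger as a zombie before the next test phase starts.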
00:06:59.114 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:06:59.114 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:59.114 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:06:59.114 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:59.114 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:06:59.114 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:59.114 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:06:59.114 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:59.373 00:06:59.373 real 0m3.872s 00:06:59.373 user 0m5.917s 00:06:59.373 sys 0m0.397s 00:06:59.373 05:02:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:59.373 05:02:36 -- common/autotest_common.sh@10 -- # set +x 00:06:59.373 ************************************ 00:06:59.373 END TEST event_scheduler 00:06:59.373 ************************************ 00:06:59.373 05:02:36 -- event/event.sh@51 -- # modprobe -n nbd 00:06:59.373 05:02:36 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:59.373 05:02:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:59.373 05:02:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:59.373 05:02:36 -- common/autotest_common.sh@10 -- # set +x 00:06:59.373 ************************************ 00:06:59.373 START TEST app_repeat 00:06:59.373 ************************************ 00:06:59.373 05:02:36 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:06:59.373 05:02:36 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.373 05:02:36 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.373 
05:02:36 -- event/event.sh@13 -- # local nbd_list 00:06:59.373 05:02:36 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:59.373 05:02:36 -- event/event.sh@14 -- # local bdev_list 00:06:59.373 05:02:36 -- event/event.sh@15 -- # local repeat_times=4 00:06:59.373 05:02:36 -- event/event.sh@17 -- # modprobe nbd 00:06:59.373 05:02:36 -- event/event.sh@19 -- # repeat_pid=1768377 00:06:59.373 05:02:36 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:59.373 05:02:36 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:59.373 05:02:36 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1768377' 00:06:59.373 Process app_repeat pid: 1768377 00:06:59.373 05:02:36 -- event/event.sh@23 -- # for i in {0..2} 00:06:59.373 05:02:36 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:59.373 spdk_app_start Round 0 00:06:59.373 05:02:36 -- event/event.sh@25 -- # waitforlisten 1768377 /var/tmp/spdk-nbd.sock 00:06:59.373 05:02:36 -- common/autotest_common.sh@817 -- # '[' -z 1768377 ']' 00:06:59.373 05:02:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:59.373 05:02:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:59.373 05:02:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:59.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:59.373 05:02:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:59.373 05:02:36 -- common/autotest_common.sh@10 -- # set +x 00:06:59.373 [2024-04-24 05:02:36.616643] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 
00:06:59.373 [2024-04-24 05:02:36.616708] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1768377 ] 00:06:59.631 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.631 [2024-04-24 05:02:36.649080] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:59.631 [2024-04-24 05:02:36.675060] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:59.631 [2024-04-24 05:02:36.761394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.631 [2024-04-24 05:02:36.761397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.631 05:02:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:59.631 05:02:36 -- common/autotest_common.sh@850 -- # return 0 00:06:59.631 05:02:36 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:59.890 Malloc0 00:06:59.890 05:02:37 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:00.149 Malloc1 00:07:00.149 05:02:37 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:00.149 05:02:37 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.149 05:02:37 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:00.149 05:02:37 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:00.149 05:02:37 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:00.149 05:02:37 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:00.149 05:02:37 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' 
'/dev/nbd0 /dev/nbd1' 00:07:00.149 05:02:37 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.149 05:02:37 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:00.149 05:02:37 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:00.149 05:02:37 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:00.149 05:02:37 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:00.149 05:02:37 -- bdev/nbd_common.sh@12 -- # local i 00:07:00.149 05:02:37 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:00.149 05:02:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:00.149 05:02:37 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:00.407 /dev/nbd0 00:07:00.407 05:02:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:00.407 05:02:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:00.407 05:02:37 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:07:00.407 05:02:37 -- common/autotest_common.sh@855 -- # local i 00:07:00.407 05:02:37 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:07:00.407 05:02:37 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:07:00.407 05:02:37 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:07:00.407 05:02:37 -- common/autotest_common.sh@859 -- # break 00:07:00.407 05:02:37 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:07:00.407 05:02:37 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:07:00.407 05:02:37 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:00.407 1+0 records in 00:07:00.407 1+0 records out 00:07:00.407 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000163665 s, 25.0 MB/s 00:07:00.407 05:02:37 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
00:07:00.407 05:02:37 -- common/autotest_common.sh@872 -- # size=4096 00:07:00.407 05:02:37 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:00.407 05:02:37 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:07:00.407 05:02:37 -- common/autotest_common.sh@875 -- # return 0 00:07:00.407 05:02:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:00.407 05:02:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:00.407 05:02:37 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:00.667 /dev/nbd1 00:07:00.667 05:02:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:00.667 05:02:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:00.667 05:02:37 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:07:00.667 05:02:37 -- common/autotest_common.sh@855 -- # local i 00:07:00.667 05:02:37 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:07:00.667 05:02:37 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:07:00.667 05:02:37 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:07:00.667 05:02:37 -- common/autotest_common.sh@859 -- # break 00:07:00.667 05:02:37 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:07:00.667 05:02:37 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:07:00.667 05:02:37 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:00.925 1+0 records in 00:07:00.925 1+0 records out 00:07:00.925 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199619 s, 20.5 MB/s 00:07:00.925 05:02:37 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:00.925 05:02:37 -- common/autotest_common.sh@872 -- # size=4096 00:07:00.925 05:02:37 -- common/autotest_common.sh@873 -- # rm 
-f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:00.925 05:02:37 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:07:00.925 05:02:37 -- common/autotest_common.sh@875 -- # return 0 00:07:00.925 05:02:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:00.925 05:02:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:00.925 05:02:37 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:00.925 05:02:37 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.925 05:02:37 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:00.925 05:02:38 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:00.926 { 00:07:00.926 "nbd_device": "/dev/nbd0", 00:07:00.926 "bdev_name": "Malloc0" 00:07:00.926 }, 00:07:00.926 { 00:07:00.926 "nbd_device": "/dev/nbd1", 00:07:00.926 "bdev_name": "Malloc1" 00:07:00.926 } 00:07:00.926 ]' 00:07:00.926 05:02:38 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:00.926 { 00:07:00.926 "nbd_device": "/dev/nbd0", 00:07:00.926 "bdev_name": "Malloc0" 00:07:00.926 }, 00:07:00.926 { 00:07:00.926 "nbd_device": "/dev/nbd1", 00:07:00.926 "bdev_name": "Malloc1" 00:07:00.926 } 00:07:00.926 ]' 00:07:00.926 05:02:38 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:01.184 05:02:38 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:01.184 /dev/nbd1' 00:07:01.184 05:02:38 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:01.184 /dev/nbd1' 00:07:01.184 05:02:38 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:01.184 05:02:38 -- bdev/nbd_common.sh@65 -- # count=2 00:07:01.184 05:02:38 -- bdev/nbd_common.sh@66 -- # echo 2 00:07:01.184 05:02:38 -- bdev/nbd_common.sh@95 -- # count=2 00:07:01.184 05:02:38 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:01.184 05:02:38 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:01.184 05:02:38 -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.184 05:02:38 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:01.184 05:02:38 -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:01.184 05:02:38 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:01.184 05:02:38 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:01.184 05:02:38 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:01.184 256+0 records in 00:07:01.184 256+0 records out 00:07:01.184 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00483426 s, 217 MB/s 00:07:01.184 05:02:38 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:01.184 05:02:38 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:01.184 256+0 records in 00:07:01.184 256+0 records out 00:07:01.184 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0205179 s, 51.1 MB/s 00:07:01.184 05:02:38 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:01.184 05:02:38 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:01.184 256+0 records in 00:07:01.184 256+0 records out 00:07:01.184 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243402 s, 43.1 MB/s 00:07:01.184 05:02:38 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:01.184 05:02:38 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.184 05:02:38 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:01.184 05:02:38 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:01.184 05:02:38 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:01.184 05:02:38 -- 
bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:01.184 05:02:38 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:01.184 05:02:38 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:01.184 05:02:38 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:01.184 05:02:38 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:01.184 05:02:38 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:01.184 05:02:38 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:01.184 05:02:38 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:01.184 05:02:38 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.184 05:02:38 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.184 05:02:38 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:01.184 05:02:38 -- bdev/nbd_common.sh@51 -- # local i 00:07:01.184 05:02:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.184 05:02:38 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:01.443 05:02:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:01.443 05:02:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:01.443 05:02:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:01.443 05:02:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.443 05:02:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.443 05:02:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:01.443 05:02:38 -- bdev/nbd_common.sh@41 -- # break 00:07:01.443 05:02:38 -- bdev/nbd_common.sh@45 -- # return 0 00:07:01.443 05:02:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:07:01.443 05:02:38 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:01.701 05:02:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:01.701 05:02:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:01.701 05:02:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:01.701 05:02:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.701 05:02:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.701 05:02:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:01.701 05:02:38 -- bdev/nbd_common.sh@41 -- # break 00:07:01.701 05:02:38 -- bdev/nbd_common.sh@45 -- # return 0 00:07:01.701 05:02:38 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:01.701 05:02:38 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.701 05:02:38 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:01.959 05:02:39 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:01.959 05:02:39 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:01.959 05:02:39 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:01.959 05:02:39 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:01.959 05:02:39 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:01.959 05:02:39 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:01.959 05:02:39 -- bdev/nbd_common.sh@65 -- # true 00:07:01.959 05:02:39 -- bdev/nbd_common.sh@65 -- # count=0 00:07:01.959 05:02:39 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:01.959 05:02:39 -- bdev/nbd_common.sh@104 -- # count=0 00:07:01.959 05:02:39 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:01.959 05:02:39 -- bdev/nbd_common.sh@109 -- # return 0 00:07:01.959 05:02:39 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 
00:07:02.217 05:02:39 -- event/event.sh@35 -- # sleep 3 00:07:02.475 [2024-04-24 05:02:39.603110] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:02.475 [2024-04-24 05:02:39.690792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.475 [2024-04-24 05:02:39.690797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.734 [2024-04-24 05:02:39.752741] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:02.734 [2024-04-24 05:02:39.752798] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:05.261 05:02:42 -- event/event.sh@23 -- # for i in {0..2} 00:07:05.261 05:02:42 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:05.261 spdk_app_start Round 1 00:07:05.261 05:02:42 -- event/event.sh@25 -- # waitforlisten 1768377 /var/tmp/spdk-nbd.sock 00:07:05.261 05:02:42 -- common/autotest_common.sh@817 -- # '[' -z 1768377 ']' 00:07:05.261 05:02:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:05.261 05:02:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:05.261 05:02:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:05.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:05.261 05:02:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:05.261 05:02:42 -- common/autotest_common.sh@10 -- # set +x 00:07:05.519 05:02:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:05.519 05:02:42 -- common/autotest_common.sh@850 -- # return 0 00:07:05.519 05:02:42 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:05.777 Malloc0 00:07:05.777 05:02:42 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:06.035 Malloc1 00:07:06.035 05:02:43 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:06.035 05:02:43 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.035 05:02:43 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:06.035 05:02:43 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:06.035 05:02:43 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:06.035 05:02:43 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:06.035 05:02:43 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:06.035 05:02:43 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.035 05:02:43 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:06.035 05:02:43 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:06.035 05:02:43 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:06.035 05:02:43 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:06.035 05:02:43 -- bdev/nbd_common.sh@12 -- # local i 00:07:06.035 05:02:43 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:06.035 05:02:43 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:06.035 05:02:43 -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:06.293 /dev/nbd0 00:07:06.293 05:02:43 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:06.293 05:02:43 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:06.293 05:02:43 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:07:06.293 05:02:43 -- common/autotest_common.sh@855 -- # local i 00:07:06.293 05:02:43 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:07:06.293 05:02:43 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:07:06.293 05:02:43 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:07:06.293 05:02:43 -- common/autotest_common.sh@859 -- # break 00:07:06.293 05:02:43 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:07:06.293 05:02:43 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:07:06.293 05:02:43 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:06.293 1+0 records in 00:07:06.293 1+0 records out 00:07:06.293 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00021881 s, 18.7 MB/s 00:07:06.293 05:02:43 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:06.293 05:02:43 -- common/autotest_common.sh@872 -- # size=4096 00:07:06.293 05:02:43 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:06.293 05:02:43 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:07:06.293 05:02:43 -- common/autotest_common.sh@875 -- # return 0 00:07:06.293 05:02:43 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:06.293 05:02:43 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:06.293 05:02:43 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 
00:07:06.551 /dev/nbd1 00:07:06.551 05:02:43 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:06.551 05:02:43 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:06.551 05:02:43 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:07:06.551 05:02:43 -- common/autotest_common.sh@855 -- # local i 00:07:06.551 05:02:43 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:07:06.551 05:02:43 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:07:06.551 05:02:43 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:07:06.551 05:02:43 -- common/autotest_common.sh@859 -- # break 00:07:06.551 05:02:43 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:07:06.551 05:02:43 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:07:06.551 05:02:43 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:06.551 1+0 records in 00:07:06.551 1+0 records out 00:07:06.551 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198904 s, 20.6 MB/s 00:07:06.551 05:02:43 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:06.551 05:02:43 -- common/autotest_common.sh@872 -- # size=4096 00:07:06.551 05:02:43 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:06.551 05:02:43 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:07:06.551 05:02:43 -- common/autotest_common.sh@875 -- # return 0 00:07:06.551 05:02:43 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:06.551 05:02:43 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:06.551 05:02:43 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:06.551 05:02:43 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.551 05:02:43 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_get_disks 00:07:06.809 05:02:43 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:06.809 { 00:07:06.809 "nbd_device": "/dev/nbd0", 00:07:06.809 "bdev_name": "Malloc0" 00:07:06.809 }, 00:07:06.809 { 00:07:06.809 "nbd_device": "/dev/nbd1", 00:07:06.809 "bdev_name": "Malloc1" 00:07:06.809 } 00:07:06.809 ]' 00:07:06.809 05:02:43 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:06.809 { 00:07:06.809 "nbd_device": "/dev/nbd0", 00:07:06.809 "bdev_name": "Malloc0" 00:07:06.809 }, 00:07:06.809 { 00:07:06.809 "nbd_device": "/dev/nbd1", 00:07:06.809 "bdev_name": "Malloc1" 00:07:06.809 } 00:07:06.809 ]' 00:07:06.809 05:02:43 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:06.809 05:02:43 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:06.809 /dev/nbd1' 00:07:06.809 05:02:43 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:06.809 /dev/nbd1' 00:07:06.809 05:02:43 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:06.809 05:02:43 -- bdev/nbd_common.sh@65 -- # count=2 00:07:06.809 05:02:43 -- bdev/nbd_common.sh@66 -- # echo 2 00:07:06.809 05:02:43 -- bdev/nbd_common.sh@95 -- # count=2 00:07:06.809 05:02:43 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:06.809 05:02:43 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:06.809 05:02:43 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:06.809 05:02:43 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:06.809 05:02:43 -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:06.809 05:02:43 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:06.809 05:02:43 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:06.809 05:02:43 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:06.809 256+0 records in 00:07:06.809 256+0 records out 00:07:06.809 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00499128 s, 210 MB/s 00:07:06.809 05:02:43 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.809 05:02:43 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:06.809 256+0 records in 00:07:06.809 256+0 records out 00:07:06.809 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0240057 s, 43.7 MB/s 00:07:06.809 05:02:44 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.809 05:02:44 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:06.809 256+0 records in 00:07:06.809 256+0 records out 00:07:06.809 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244105 s, 43.0 MB/s 00:07:06.809 05:02:44 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:06.809 05:02:44 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:06.809 05:02:44 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:06.809 05:02:44 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:06.809 05:02:44 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:06.809 05:02:44 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:06.809 05:02:44 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:06.809 05:02:44 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:06.809 05:02:44 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:06.809 05:02:44 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:06.809 05:02:44 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:06.809 05:02:44 -- bdev/nbd_common.sh@85 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:06.809 05:02:44 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:06.809 05:02:44 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.809 05:02:44 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:06.809 05:02:44 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:06.809 05:02:44 -- bdev/nbd_common.sh@51 -- # local i 00:07:06.809 05:02:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.809 05:02:44 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:07.101 05:02:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:07.101 05:02:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:07.101 05:02:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:07.101 05:02:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.101 05:02:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.101 05:02:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:07.101 05:02:44 -- bdev/nbd_common.sh@41 -- # break 00:07:07.101 05:02:44 -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.101 05:02:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.101 05:02:44 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:07.358 05:02:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:07.358 05:02:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:07.358 05:02:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:07.358 05:02:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.358 05:02:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.358 05:02:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:07.359 05:02:44 -- 
bdev/nbd_common.sh@41 -- # break 00:07:07.359 05:02:44 -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.359 05:02:44 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:07.359 05:02:44 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.359 05:02:44 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:07.616 05:02:44 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:07.616 05:02:44 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:07.616 05:02:44 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:07.616 05:02:44 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:07.616 05:02:44 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:07.616 05:02:44 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:07.616 05:02:44 -- bdev/nbd_common.sh@65 -- # true 00:07:07.616 05:02:44 -- bdev/nbd_common.sh@65 -- # count=0 00:07:07.616 05:02:44 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:07.616 05:02:44 -- bdev/nbd_common.sh@104 -- # count=0 00:07:07.616 05:02:44 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:07.616 05:02:44 -- bdev/nbd_common.sh@109 -- # return 0 00:07:07.616 05:02:44 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:07.875 05:02:45 -- event/event.sh@35 -- # sleep 3 00:07:08.133 [2024-04-24 05:02:45.333741] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:08.392 [2024-04-24 05:02:45.421120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.392 [2024-04-24 05:02:45.421124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.392 [2024-04-24 05:02:45.479494] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
00:07:08.392 [2024-04-24 05:02:45.479568] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:10.921 05:02:48 -- event/event.sh@23 -- # for i in {0..2} 00:07:10.921 05:02:48 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:10.921 spdk_app_start Round 2 00:07:10.921 05:02:48 -- event/event.sh@25 -- # waitforlisten 1768377 /var/tmp/spdk-nbd.sock 00:07:10.921 05:02:48 -- common/autotest_common.sh@817 -- # '[' -z 1768377 ']' 00:07:10.921 05:02:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:10.921 05:02:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:10.921 05:02:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:10.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:10.921 05:02:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:10.921 05:02:48 -- common/autotest_common.sh@10 -- # set +x 00:07:11.179 05:02:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:11.179 05:02:48 -- common/autotest_common.sh@850 -- # return 0 00:07:11.179 05:02:48 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:11.437 Malloc0 00:07:11.437 05:02:48 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:11.695 Malloc1 00:07:11.695 05:02:48 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:11.695 05:02:48 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.695 05:02:48 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:11.695 05:02:48 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:11.695 05:02:48 -- 
bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.695 05:02:48 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:11.695 05:02:48 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:11.695 05:02:48 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.695 05:02:48 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:11.695 05:02:48 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:11.695 05:02:48 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.695 05:02:48 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:11.695 05:02:48 -- bdev/nbd_common.sh@12 -- # local i 00:07:11.695 05:02:48 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:11.695 05:02:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:11.695 05:02:48 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:11.953 /dev/nbd0 00:07:11.953 05:02:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:11.953 05:02:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:11.953 05:02:49 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:07:11.953 05:02:49 -- common/autotest_common.sh@855 -- # local i 00:07:11.953 05:02:49 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:07:11.953 05:02:49 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:07:11.953 05:02:49 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:07:11.953 05:02:49 -- common/autotest_common.sh@859 -- # break 00:07:11.953 05:02:49 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:07:11.953 05:02:49 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:07:11.953 05:02:49 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:11.953 1+0 records in 00:07:11.953 
1+0 records out 00:07:11.953 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000181617 s, 22.6 MB/s 00:07:11.953 05:02:49 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:11.953 05:02:49 -- common/autotest_common.sh@872 -- # size=4096 00:07:11.953 05:02:49 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:11.953 05:02:49 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:07:11.953 05:02:49 -- common/autotest_common.sh@875 -- # return 0 00:07:11.953 05:02:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:11.953 05:02:49 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:11.953 05:02:49 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:12.211 /dev/nbd1 00:07:12.211 05:02:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:12.211 05:02:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:12.211 05:02:49 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:07:12.211 05:02:49 -- common/autotest_common.sh@855 -- # local i 00:07:12.211 05:02:49 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:07:12.211 05:02:49 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:07:12.211 05:02:49 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:07:12.211 05:02:49 -- common/autotest_common.sh@859 -- # break 00:07:12.211 05:02:49 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:07:12.211 05:02:49 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:07:12.211 05:02:49 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:12.211 1+0 records in 00:07:12.211 1+0 records out 00:07:12.211 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0001968 s, 20.8 MB/s 00:07:12.211 05:02:49 -- 
common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:12.211 05:02:49 -- common/autotest_common.sh@872 -- # size=4096 00:07:12.211 05:02:49 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:12.211 05:02:49 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:07:12.211 05:02:49 -- common/autotest_common.sh@875 -- # return 0 00:07:12.211 05:02:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:12.211 05:02:49 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:12.211 05:02:49 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:12.211 05:02:49 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.211 05:02:49 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:12.469 05:02:49 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:12.469 { 00:07:12.469 "nbd_device": "/dev/nbd0", 00:07:12.469 "bdev_name": "Malloc0" 00:07:12.469 }, 00:07:12.469 { 00:07:12.469 "nbd_device": "/dev/nbd1", 00:07:12.469 "bdev_name": "Malloc1" 00:07:12.469 } 00:07:12.469 ]' 00:07:12.469 05:02:49 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:12.469 { 00:07:12.469 "nbd_device": "/dev/nbd0", 00:07:12.469 "bdev_name": "Malloc0" 00:07:12.469 }, 00:07:12.469 { 00:07:12.469 "nbd_device": "/dev/nbd1", 00:07:12.469 "bdev_name": "Malloc1" 00:07:12.469 } 00:07:12.469 ]' 00:07:12.469 05:02:49 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:12.469 05:02:49 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:12.469 /dev/nbd1' 00:07:12.469 05:02:49 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:12.469 /dev/nbd1' 00:07:12.469 05:02:49 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:12.469 05:02:49 -- bdev/nbd_common.sh@65 -- # count=2 00:07:12.469 05:02:49 -- bdev/nbd_common.sh@66 -- # echo 2 00:07:12.469 
05:02:49 -- bdev/nbd_common.sh@95 -- # count=2 00:07:12.469 05:02:49 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:12.469 05:02:49 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:12.469 05:02:49 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.469 05:02:49 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:12.469 05:02:49 -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:12.469 05:02:49 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:12.469 05:02:49 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:12.469 05:02:49 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:12.469 256+0 records in 00:07:12.469 256+0 records out 00:07:12.469 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00500761 s, 209 MB/s 00:07:12.469 05:02:49 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:12.469 05:02:49 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:12.469 256+0 records in 00:07:12.469 256+0 records out 00:07:12.469 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206142 s, 50.9 MB/s 00:07:12.469 05:02:49 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:12.469 05:02:49 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:12.727 256+0 records in 00:07:12.727 256+0 records out 00:07:12.727 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0248954 s, 42.1 MB/s 00:07:12.727 05:02:49 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:12.727 05:02:49 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.727 05:02:49 -- bdev/nbd_common.sh@70 -- # local nbd_list 
00:07:12.727 05:02:49 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:12.727 05:02:49 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:12.727 05:02:49 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:12.727 05:02:49 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:12.727 05:02:49 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:12.727 05:02:49 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:12.727 05:02:49 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:12.727 05:02:49 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:12.727 05:02:49 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:12.727 05:02:49 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:12.727 05:02:49 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.727 05:02:49 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.727 05:02:49 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:12.727 05:02:49 -- bdev/nbd_common.sh@51 -- # local i 00:07:12.727 05:02:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.727 05:02:49 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:12.986 05:02:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:12.986 05:02:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:12.986 05:02:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:12.986 05:02:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.986 05:02:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.986 05:02:50 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:12.986 05:02:50 -- bdev/nbd_common.sh@41 -- # break 00:07:12.986 05:02:50 -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.986 05:02:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.986 05:02:50 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:13.243 05:02:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:13.243 05:02:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:13.243 05:02:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:13.243 05:02:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.243 05:02:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.243 05:02:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:13.243 05:02:50 -- bdev/nbd_common.sh@41 -- # break 00:07:13.243 05:02:50 -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.243 05:02:50 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:13.243 05:02:50 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.243 05:02:50 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:13.501 05:02:50 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:13.501 05:02:50 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:13.501 05:02:50 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:13.501 05:02:50 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:13.501 05:02:50 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:13.501 05:02:50 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:13.501 05:02:50 -- bdev/nbd_common.sh@65 -- # true 00:07:13.501 05:02:50 -- bdev/nbd_common.sh@65 -- # count=0 00:07:13.501 05:02:50 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:13.501 05:02:50 -- bdev/nbd_common.sh@104 -- # count=0 00:07:13.501 05:02:50 -- 
bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:13.501 05:02:50 -- bdev/nbd_common.sh@109 -- # return 0 00:07:13.501 05:02:50 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:13.759 05:02:50 -- event/event.sh@35 -- # sleep 3 00:07:14.017 [2024-04-24 05:02:51.086955] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:14.017 [2024-04-24 05:02:51.173720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.017 [2024-04-24 05:02:51.173725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.017 [2024-04-24 05:02:51.236487] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:14.017 [2024-04-24 05:02:51.236577] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:17.296 05:02:53 -- event/event.sh@38 -- # waitforlisten 1768377 /var/tmp/spdk-nbd.sock 00:07:17.296 05:02:53 -- common/autotest_common.sh@817 -- # '[' -z 1768377 ']' 00:07:17.296 05:02:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:17.296 05:02:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:17.296 05:02:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:17.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:17.296 05:02:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:17.296 05:02:53 -- common/autotest_common.sh@10 -- # set +x 00:07:17.296 05:02:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:17.296 05:02:54 -- common/autotest_common.sh@850 -- # return 0 00:07:17.296 05:02:54 -- event/event.sh@39 -- # killprocess 1768377 00:07:17.296 05:02:54 -- common/autotest_common.sh@936 -- # '[' -z 1768377 ']' 00:07:17.296 05:02:54 -- common/autotest_common.sh@940 -- # kill -0 1768377 00:07:17.296 05:02:54 -- common/autotest_common.sh@941 -- # uname 00:07:17.296 05:02:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:17.296 05:02:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1768377 00:07:17.296 05:02:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:17.296 05:02:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:17.296 05:02:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1768377' 00:07:17.296 killing process with pid 1768377 00:07:17.296 05:02:54 -- common/autotest_common.sh@955 -- # kill 1768377 00:07:17.296 05:02:54 -- common/autotest_common.sh@960 -- # wait 1768377 00:07:17.296 spdk_app_start is called in Round 0. 00:07:17.296 Shutdown signal received, stop current app iteration 00:07:17.296 Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 reinitialization... 00:07:17.296 spdk_app_start is called in Round 1. 00:07:17.296 Shutdown signal received, stop current app iteration 00:07:17.296 Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 reinitialization... 00:07:17.296 spdk_app_start is called in Round 2. 00:07:17.296 Shutdown signal received, stop current app iteration 00:07:17.296 Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 reinitialization... 00:07:17.296 spdk_app_start is called in Round 3. 
00:07:17.296 Shutdown signal received, stop current app iteration 00:07:17.296 05:02:54 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:17.296 05:02:54 -- event/event.sh@42 -- # return 0 00:07:17.296 00:07:17.296 real 0m17.743s 00:07:17.296 user 0m38.514s 00:07:17.296 sys 0m3.198s 00:07:17.296 05:02:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:17.296 05:02:54 -- common/autotest_common.sh@10 -- # set +x 00:07:17.296 ************************************ 00:07:17.296 END TEST app_repeat 00:07:17.296 ************************************ 00:07:17.296 05:02:54 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:17.296 05:02:54 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:17.296 05:02:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:17.296 05:02:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.296 05:02:54 -- common/autotest_common.sh@10 -- # set +x 00:07:17.296 ************************************ 00:07:17.296 START TEST cpu_locks 00:07:17.296 ************************************ 00:07:17.296 05:02:54 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:17.296 * Looking for test storage... 
00:07:17.296 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:17.296 05:02:54 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:17.296 05:02:54 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:17.296 05:02:54 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:17.296 05:02:54 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:17.296 05:02:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:17.296 05:02:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.296 05:02:54 -- common/autotest_common.sh@10 -- # set +x 00:07:17.555 ************************************ 00:07:17.555 START TEST default_locks 00:07:17.555 ************************************ 00:07:17.555 05:02:54 -- common/autotest_common.sh@1111 -- # default_locks 00:07:17.555 05:02:54 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1770742 00:07:17.555 05:02:54 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:17.555 05:02:54 -- event/cpu_locks.sh@47 -- # waitforlisten 1770742 00:07:17.555 05:02:54 -- common/autotest_common.sh@817 -- # '[' -z 1770742 ']' 00:07:17.555 05:02:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.555 05:02:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:17.555 05:02:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.555 05:02:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:17.555 05:02:54 -- common/autotest_common.sh@10 -- # set +x 00:07:17.555 [2024-04-24 05:02:54.645502] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 
00:07:17.555 [2024-04-24 05:02:54.645579] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1770742 ] 00:07:17.555 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.555 [2024-04-24 05:02:54.680354] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:17.555 [2024-04-24 05:02:54.707413] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.555 [2024-04-24 05:02:54.792124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.813 05:02:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:17.813 05:02:55 -- common/autotest_common.sh@850 -- # return 0 00:07:17.813 05:02:55 -- event/cpu_locks.sh@49 -- # locks_exist 1770742 00:07:17.813 05:02:55 -- event/cpu_locks.sh@22 -- # lslocks -p 1770742 00:07:17.813 05:02:55 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:18.379 lslocks: write error 00:07:18.379 05:02:55 -- event/cpu_locks.sh@50 -- # killprocess 1770742 00:07:18.379 05:02:55 -- common/autotest_common.sh@936 -- # '[' -z 1770742 ']' 00:07:18.379 05:02:55 -- common/autotest_common.sh@940 -- # kill -0 1770742 00:07:18.379 05:02:55 -- common/autotest_common.sh@941 -- # uname 00:07:18.379 05:02:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:18.379 05:02:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1770742 00:07:18.379 05:02:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:18.379 05:02:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:18.379 05:02:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1770742' 00:07:18.379 killing process with pid 1770742 00:07:18.379 05:02:55 -- common/autotest_common.sh@955 -- # kill 1770742 00:07:18.379 
05:02:55 -- common/autotest_common.sh@960 -- # wait 1770742 00:07:18.637 05:02:55 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1770742 00:07:18.637 05:02:55 -- common/autotest_common.sh@638 -- # local es=0 00:07:18.637 05:02:55 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 1770742 00:07:18.637 05:02:55 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:07:18.637 05:02:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:18.637 05:02:55 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:07:18.637 05:02:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:18.637 05:02:55 -- common/autotest_common.sh@641 -- # waitforlisten 1770742 00:07:18.637 05:02:55 -- common/autotest_common.sh@817 -- # '[' -z 1770742 ']' 00:07:18.637 05:02:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.637 05:02:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:18.637 05:02:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:18.638 05:02:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:18.638 05:02:55 -- common/autotest_common.sh@10 -- # set +x 00:07:18.638 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (1770742) - No such process 00:07:18.638 ERROR: process (pid: 1770742) is no longer running 00:07:18.638 05:02:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:18.638 05:02:55 -- common/autotest_common.sh@850 -- # return 1 00:07:18.638 05:02:55 -- common/autotest_common.sh@641 -- # es=1 00:07:18.638 05:02:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:18.638 05:02:55 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:18.638 05:02:55 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:18.638 05:02:55 -- event/cpu_locks.sh@54 -- # no_locks 00:07:18.638 05:02:55 -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:18.638 05:02:55 -- event/cpu_locks.sh@26 -- # local lock_files 00:07:18.638 05:02:55 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:18.638 00:07:18.638 real 0m1.298s 00:07:18.638 user 0m1.231s 00:07:18.638 sys 0m0.559s 00:07:18.638 05:02:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:18.638 05:02:55 -- common/autotest_common.sh@10 -- # set +x 00:07:18.638 ************************************ 00:07:18.638 END TEST default_locks 00:07:18.638 ************************************ 00:07:18.896 05:02:55 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:18.896 05:02:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:18.896 05:02:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:18.896 05:02:55 -- common/autotest_common.sh@10 -- # set +x 00:07:18.896 ************************************ 00:07:18.896 START TEST default_locks_via_rpc 00:07:18.896 ************************************ 00:07:18.896 05:02:56 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:07:18.896 05:02:56 -- 
event/cpu_locks.sh@62 -- # spdk_tgt_pid=1770915 00:07:18.896 05:02:56 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:18.896 05:02:56 -- event/cpu_locks.sh@63 -- # waitforlisten 1770915 00:07:18.896 05:02:56 -- common/autotest_common.sh@817 -- # '[' -z 1770915 ']' 00:07:18.896 05:02:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.896 05:02:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:18.896 05:02:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.896 05:02:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:18.896 05:02:56 -- common/autotest_common.sh@10 -- # set +x 00:07:18.896 [2024-04-24 05:02:56.064814] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:07:18.896 [2024-04-24 05:02:56.064891] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1770915 ] 00:07:18.896 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.896 [2024-04-24 05:02:56.095926] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:18.896 [2024-04-24 05:02:56.127917] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.155 [2024-04-24 05:02:56.214326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.441 05:02:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:19.441 05:02:56 -- common/autotest_common.sh@850 -- # return 0 00:07:19.441 05:02:56 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:19.441 05:02:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:19.441 05:02:56 -- common/autotest_common.sh@10 -- # set +x 00:07:19.441 05:02:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:19.441 05:02:56 -- event/cpu_locks.sh@67 -- # no_locks 00:07:19.441 05:02:56 -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:19.441 05:02:56 -- event/cpu_locks.sh@26 -- # local lock_files 00:07:19.441 05:02:56 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:19.441 05:02:56 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:19.441 05:02:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:19.441 05:02:56 -- common/autotest_common.sh@10 -- # set +x 00:07:19.441 05:02:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:19.441 05:02:56 -- event/cpu_locks.sh@71 -- # locks_exist 1770915 00:07:19.441 05:02:56 -- event/cpu_locks.sh@22 -- # lslocks -p 1770915 00:07:19.441 05:02:56 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:19.699 05:02:56 -- event/cpu_locks.sh@73 -- # killprocess 1770915 00:07:19.699 05:02:56 -- common/autotest_common.sh@936 -- # '[' -z 1770915 ']' 00:07:19.699 05:02:56 -- common/autotest_common.sh@940 -- # kill -0 1770915 00:07:19.699 05:02:56 -- common/autotest_common.sh@941 -- # uname 00:07:19.699 05:02:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:19.699 05:02:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1770915 00:07:19.699 05:02:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 
00:07:19.699 05:02:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:19.699 05:02:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1770915' 00:07:19.699 killing process with pid 1770915 00:07:19.699 05:02:56 -- common/autotest_common.sh@955 -- # kill 1770915 00:07:19.699 05:02:56 -- common/autotest_common.sh@960 -- # wait 1770915 00:07:20.003 00:07:20.003 real 0m1.143s 00:07:20.003 user 0m1.048s 00:07:20.003 sys 0m0.563s 00:07:20.003 05:02:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:20.003 05:02:57 -- common/autotest_common.sh@10 -- # set +x 00:07:20.003 ************************************ 00:07:20.003 END TEST default_locks_via_rpc 00:07:20.003 ************************************ 00:07:20.003 05:02:57 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:20.003 05:02:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:20.003 05:02:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:20.003 05:02:57 -- common/autotest_common.sh@10 -- # set +x 00:07:20.267 ************************************ 00:07:20.267 START TEST non_locking_app_on_locked_coremask 00:07:20.267 ************************************ 00:07:20.267 05:02:57 -- common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:07:20.267 05:02:57 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1771083 00:07:20.267 05:02:57 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:20.267 05:02:57 -- event/cpu_locks.sh@81 -- # waitforlisten 1771083 /var/tmp/spdk.sock 00:07:20.267 05:02:57 -- common/autotest_common.sh@817 -- # '[' -z 1771083 ']' 00:07:20.267 05:02:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.267 05:02:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:20.267 05:02:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.267 05:02:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:20.267 05:02:57 -- common/autotest_common.sh@10 -- # set +x 00:07:20.267 [2024-04-24 05:02:57.336161] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:07:20.267 [2024-04-24 05:02:57.336245] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1771083 ] 00:07:20.267 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.267 [2024-04-24 05:02:57.369422] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:20.267 [2024-04-24 05:02:57.399504] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.267 [2024-04-24 05:02:57.493338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.525 05:02:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:20.525 05:02:57 -- common/autotest_common.sh@850 -- # return 0 00:07:20.525 05:02:57 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1771092 00:07:20.525 05:02:57 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:20.525 05:02:57 -- event/cpu_locks.sh@85 -- # waitforlisten 1771092 /var/tmp/spdk2.sock 00:07:20.525 05:02:57 -- common/autotest_common.sh@817 -- # '[' -z 1771092 ']' 00:07:20.525 05:02:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:20.525 05:02:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:20.525 05:02:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:20.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:20.525 05:02:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:20.525 05:02:57 -- common/autotest_common.sh@10 -- # set +x 00:07:20.783 [2024-04-24 05:02:57.799415] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:07:20.783 [2024-04-24 05:02:57.799489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1771092 ] 00:07:20.783 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.783 [2024-04-24 05:02:57.835368] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:20.783 [2024-04-24 05:02:57.898990] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:20.783 [2024-04-24 05:02:57.899032] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.042 [2024-04-24 05:02:58.078285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.608 05:02:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:21.608 05:02:58 -- common/autotest_common.sh@850 -- # return 0 00:07:21.608 05:02:58 -- event/cpu_locks.sh@87 -- # locks_exist 1771083 00:07:21.608 05:02:58 -- event/cpu_locks.sh@22 -- # lslocks -p 1771083 00:07:21.608 05:02:58 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:22.175 lslocks: write error 00:07:22.175 05:02:59 -- event/cpu_locks.sh@89 -- # killprocess 1771083 00:07:22.175 05:02:59 -- common/autotest_common.sh@936 -- # '[' -z 1771083 ']' 00:07:22.175 05:02:59 -- common/autotest_common.sh@940 -- # kill -0 1771083 00:07:22.175 05:02:59 -- common/autotest_common.sh@941 -- # uname 00:07:22.175 05:02:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:22.175 05:02:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1771083 00:07:22.175 05:02:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:22.175 05:02:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:22.175 05:02:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1771083' 00:07:22.175 killing process with pid 1771083 00:07:22.175 05:02:59 -- common/autotest_common.sh@955 -- # kill 1771083 00:07:22.175 05:02:59 -- common/autotest_common.sh@960 -- # wait 1771083 00:07:23.109 05:03:00 -- event/cpu_locks.sh@90 -- # killprocess 1771092 00:07:23.109 05:03:00 -- common/autotest_common.sh@936 -- # '[' -z 1771092 ']' 00:07:23.109 05:03:00 -- common/autotest_common.sh@940 -- # kill -0 1771092 00:07:23.109 05:03:00 -- common/autotest_common.sh@941 -- # uname 00:07:23.109 05:03:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:23.109 05:03:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1771092 
00:07:23.109 05:03:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:23.109 05:03:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:23.109 05:03:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1771092' 00:07:23.109 killing process with pid 1771092 00:07:23.109 05:03:00 -- common/autotest_common.sh@955 -- # kill 1771092 00:07:23.109 05:03:00 -- common/autotest_common.sh@960 -- # wait 1771092 00:07:23.368 00:07:23.368 real 0m3.183s 00:07:23.368 user 0m3.328s 00:07:23.368 sys 0m1.080s 00:07:23.368 05:03:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:23.368 05:03:00 -- common/autotest_common.sh@10 -- # set +x 00:07:23.368 ************************************ 00:07:23.368 END TEST non_locking_app_on_locked_coremask 00:07:23.368 ************************************ 00:07:23.368 05:03:00 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:23.368 05:03:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:23.368 05:03:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:23.368 05:03:00 -- common/autotest_common.sh@10 -- # set +x 00:07:23.368 ************************************ 00:07:23.368 START TEST locking_app_on_unlocked_coremask 00:07:23.368 ************************************ 00:07:23.368 05:03:00 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:07:23.368 05:03:00 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1771573 00:07:23.368 05:03:00 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:23.368 05:03:00 -- event/cpu_locks.sh@99 -- # waitforlisten 1771573 /var/tmp/spdk.sock 00:07:23.368 05:03:00 -- common/autotest_common.sh@817 -- # '[' -z 1771573 ']' 00:07:23.368 05:03:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.368 05:03:00 -- common/autotest_common.sh@822 -- 
# local max_retries=100 00:07:23.368 05:03:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.368 05:03:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:23.368 05:03:00 -- common/autotest_common.sh@10 -- # set +x 00:07:23.627 [2024-04-24 05:03:00.641188] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:07:23.627 [2024-04-24 05:03:00.641300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1771573 ] 00:07:23.627 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.627 [2024-04-24 05:03:00.675365] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:23.627 [2024-04-24 05:03:00.707307] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:23.627 [2024-04-24 05:03:00.707339] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.627 [2024-04-24 05:03:00.795961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.885 05:03:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:23.885 05:03:01 -- common/autotest_common.sh@850 -- # return 0 00:07:23.885 05:03:01 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1771600 00:07:23.885 05:03:01 -- event/cpu_locks.sh@103 -- # waitforlisten 1771600 /var/tmp/spdk2.sock 00:07:23.885 05:03:01 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:23.885 05:03:01 -- common/autotest_common.sh@817 -- # '[' -z 1771600 ']' 00:07:23.885 05:03:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:23.885 05:03:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:23.885 05:03:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:23.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:23.885 05:03:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:23.885 05:03:01 -- common/autotest_common.sh@10 -- # set +x 00:07:23.885 [2024-04-24 05:03:01.100268] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:07:23.885 [2024-04-24 05:03:01.100354] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1771600 ] 00:07:23.885 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.885 [2024-04-24 05:03:01.139342] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:24.143 [2024-04-24 05:03:01.198364] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.143 [2024-04-24 05:03:01.384593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.077 05:03:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:25.077 05:03:02 -- common/autotest_common.sh@850 -- # return 0 00:07:25.077 05:03:02 -- event/cpu_locks.sh@105 -- # locks_exist 1771600 00:07:25.077 05:03:02 -- event/cpu_locks.sh@22 -- # lslocks -p 1771600 00:07:25.077 05:03:02 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:25.334 lslocks: write error 00:07:25.334 05:03:02 -- event/cpu_locks.sh@107 -- # killprocess 1771573 00:07:25.334 05:03:02 -- common/autotest_common.sh@936 -- # '[' -z 1771573 ']' 00:07:25.334 05:03:02 -- common/autotest_common.sh@940 -- # kill -0 1771573 00:07:25.334 05:03:02 -- common/autotest_common.sh@941 -- # uname 00:07:25.334 05:03:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:25.334 05:03:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1771573 00:07:25.334 05:03:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:25.334 05:03:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:25.334 05:03:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1771573' 00:07:25.334 killing process with pid 1771573 00:07:25.334 05:03:02 -- common/autotest_common.sh@955 -- # kill 1771573 00:07:25.334 05:03:02 -- common/autotest_common.sh@960 -- # wait 1771573 00:07:26.267 05:03:03 -- event/cpu_locks.sh@108 -- # killprocess 1771600 00:07:26.268 05:03:03 -- common/autotest_common.sh@936 -- # '[' -z 1771600 ']' 00:07:26.268 05:03:03 -- common/autotest_common.sh@940 -- # kill -0 1771600 00:07:26.268 05:03:03 -- common/autotest_common.sh@941 -- # uname 00:07:26.268 05:03:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:26.268 05:03:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 
1771600 00:07:26.268 05:03:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:26.268 05:03:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:26.268 05:03:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1771600' 00:07:26.268 killing process with pid 1771600 00:07:26.268 05:03:03 -- common/autotest_common.sh@955 -- # kill 1771600 00:07:26.268 05:03:03 -- common/autotest_common.sh@960 -- # wait 1771600 00:07:26.526 00:07:26.526 real 0m3.110s 00:07:26.526 user 0m3.300s 00:07:26.526 sys 0m1.009s 00:07:26.526 05:03:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:26.526 05:03:03 -- common/autotest_common.sh@10 -- # set +x 00:07:26.526 ************************************ 00:07:26.526 END TEST locking_app_on_unlocked_coremask 00:07:26.526 ************************************ 00:07:26.526 05:03:03 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:26.526 05:03:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:26.526 05:03:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:26.526 05:03:03 -- common/autotest_common.sh@10 -- # set +x 00:07:26.785 ************************************ 00:07:26.785 START TEST locking_app_on_locked_coremask 00:07:26.785 ************************************ 00:07:26.785 05:03:03 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:07:26.785 05:03:03 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1772083 00:07:26.785 05:03:03 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:26.785 05:03:03 -- event/cpu_locks.sh@116 -- # waitforlisten 1772083 /var/tmp/spdk.sock 00:07:26.785 05:03:03 -- common/autotest_common.sh@817 -- # '[' -z 1772083 ']' 00:07:26.785 05:03:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.785 05:03:03 -- common/autotest_common.sh@822 -- # local max_retries=100 
00:07:26.785 05:03:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.785 05:03:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:26.785 05:03:03 -- common/autotest_common.sh@10 -- # set +x 00:07:26.785 [2024-04-24 05:03:03.874934] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:07:26.785 [2024-04-24 05:03:03.875039] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1772083 ] 00:07:26.785 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.785 [2024-04-24 05:03:03.907298] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:26.785 [2024-04-24 05:03:03.938841] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.785 [2024-04-24 05:03:04.026648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.043 05:03:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:27.043 05:03:04 -- common/autotest_common.sh@850 -- # return 0 00:07:27.043 05:03:04 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1772086 00:07:27.043 05:03:04 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:27.043 05:03:04 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1772086 /var/tmp/spdk2.sock 00:07:27.043 05:03:04 -- common/autotest_common.sh@638 -- # local es=0 00:07:27.043 05:03:04 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 1772086 /var/tmp/spdk2.sock 00:07:27.043 05:03:04 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:07:27.043 05:03:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:27.043 05:03:04 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:07:27.043 05:03:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:27.043 05:03:04 -- common/autotest_common.sh@641 -- # waitforlisten 1772086 /var/tmp/spdk2.sock 00:07:27.043 05:03:04 -- common/autotest_common.sh@817 -- # '[' -z 1772086 ']' 00:07:27.043 05:03:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:27.043 05:03:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:27.043 05:03:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:27.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:27.043 05:03:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:27.043 05:03:04 -- common/autotest_common.sh@10 -- # set +x 00:07:27.302 [2024-04-24 05:03:04.337813] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:07:27.302 [2024-04-24 05:03:04.337908] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1772086 ] 00:07:27.302 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.302 [2024-04-24 05:03:04.371410] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:27.302 [2024-04-24 05:03:04.435080] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1772083 has claimed it. 00:07:27.302 [2024-04-24 05:03:04.435134] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:07:27.867 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (1772086) - No such process 00:07:27.867 ERROR: process (pid: 1772086) is no longer running 00:07:27.867 05:03:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:27.867 05:03:05 -- common/autotest_common.sh@850 -- # return 1 00:07:27.867 05:03:05 -- common/autotest_common.sh@641 -- # es=1 00:07:27.867 05:03:05 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:27.867 05:03:05 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:27.867 05:03:05 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:27.867 05:03:05 -- event/cpu_locks.sh@122 -- # locks_exist 1772083 00:07:27.867 05:03:05 -- event/cpu_locks.sh@22 -- # lslocks -p 1772083 00:07:27.867 05:03:05 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:28.126 lslocks: write error 00:07:28.126 05:03:05 -- event/cpu_locks.sh@124 -- # killprocess 1772083 00:07:28.126 05:03:05 -- common/autotest_common.sh@936 -- # '[' -z 1772083 ']' 00:07:28.126 05:03:05 -- common/autotest_common.sh@940 -- # kill -0 1772083 00:07:28.126 05:03:05 -- common/autotest_common.sh@941 -- # uname 00:07:28.126 05:03:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:28.126 05:03:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1772083 00:07:28.384 05:03:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:28.384 05:03:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:28.384 05:03:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1772083' 00:07:28.384 killing process with pid 1772083 00:07:28.384 05:03:05 -- common/autotest_common.sh@955 -- # kill 1772083 00:07:28.384 05:03:05 -- common/autotest_common.sh@960 -- # wait 1772083 00:07:28.642 00:07:28.642 real 0m1.979s 00:07:28.642 user 0m2.093s 00:07:28.642 sys 0m0.669s 00:07:28.642 05:03:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:28.642 
05:03:05 -- common/autotest_common.sh@10 -- # set +x 00:07:28.642 ************************************ 00:07:28.642 END TEST locking_app_on_locked_coremask 00:07:28.642 ************************************ 00:07:28.642 05:03:05 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:28.642 05:03:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:28.642 05:03:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:28.642 05:03:05 -- common/autotest_common.sh@10 -- # set +x 00:07:28.901 ************************************ 00:07:28.901 START TEST locking_overlapped_coremask 00:07:28.901 ************************************ 00:07:28.901 05:03:05 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:07:28.901 05:03:05 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1772377 00:07:28.901 05:03:05 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:28.901 05:03:05 -- event/cpu_locks.sh@133 -- # waitforlisten 1772377 /var/tmp/spdk.sock 00:07:28.901 05:03:05 -- common/autotest_common.sh@817 -- # '[' -z 1772377 ']' 00:07:28.901 05:03:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.901 05:03:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:28.901 05:03:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.901 05:03:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:28.901 05:03:05 -- common/autotest_common.sh@10 -- # set +x 00:07:28.901 [2024-04-24 05:03:05.973243] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 
00:07:28.901 [2024-04-24 05:03:05.973362] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1772377 ] 00:07:28.901 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.901 [2024-04-24 05:03:06.006202] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:28.901 [2024-04-24 05:03:06.033625] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:28.901 [2024-04-24 05:03:06.121514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.901 [2024-04-24 05:03:06.121567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.901 [2024-04-24 05:03:06.121584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.160 05:03:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:29.160 05:03:06 -- common/autotest_common.sh@850 -- # return 0 00:07:29.160 05:03:06 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1772392 00:07:29.160 05:03:06 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:29.160 05:03:06 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1772392 /var/tmp/spdk2.sock 00:07:29.160 05:03:06 -- common/autotest_common.sh@638 -- # local es=0 00:07:29.160 05:03:06 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 1772392 /var/tmp/spdk2.sock 00:07:29.160 05:03:06 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:07:29.160 05:03:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:29.160 05:03:06 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:07:29.160 05:03:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:29.160 05:03:06 -- 
common/autotest_common.sh@641 -- # waitforlisten 1772392 /var/tmp/spdk2.sock 00:07:29.160 05:03:06 -- common/autotest_common.sh@817 -- # '[' -z 1772392 ']' 00:07:29.160 05:03:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:29.160 05:03:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:29.160 05:03:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:29.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:29.161 05:03:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:29.161 05:03:06 -- common/autotest_common.sh@10 -- # set +x 00:07:29.161 [2024-04-24 05:03:06.420742] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:07:29.161 [2024-04-24 05:03:06.420823] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1772392 ] 00:07:29.419 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.419 [2024-04-24 05:03:06.454719] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:29.419 [2024-04-24 05:03:06.508909] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1772377 has claimed it. 00:07:29.420 [2024-04-24 05:03:06.508965] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:07:29.986 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (1772392) - No such process 00:07:29.986 ERROR: process (pid: 1772392) is no longer running 00:07:29.986 05:03:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:29.986 05:03:07 -- common/autotest_common.sh@850 -- # return 1 00:07:29.986 05:03:07 -- common/autotest_common.sh@641 -- # es=1 00:07:29.986 05:03:07 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:29.986 05:03:07 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:29.986 05:03:07 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:29.986 05:03:07 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:29.986 05:03:07 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:29.986 05:03:07 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:29.986 05:03:07 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:29.986 05:03:07 -- event/cpu_locks.sh@141 -- # killprocess 1772377 00:07:29.986 05:03:07 -- common/autotest_common.sh@936 -- # '[' -z 1772377 ']' 00:07:29.986 05:03:07 -- common/autotest_common.sh@940 -- # kill -0 1772377 00:07:29.986 05:03:07 -- common/autotest_common.sh@941 -- # uname 00:07:29.986 05:03:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:29.986 05:03:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1772377 00:07:29.986 05:03:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:29.986 05:03:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:29.986 05:03:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1772377' 00:07:29.986 killing process with pid 1772377 00:07:29.986 
05:03:07 -- common/autotest_common.sh@955 -- # kill 1772377 00:07:29.986 05:03:07 -- common/autotest_common.sh@960 -- # wait 1772377 00:07:30.553 00:07:30.553 real 0m1.601s 00:07:30.553 user 0m4.331s 00:07:30.553 sys 0m0.460s 00:07:30.553 05:03:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:30.553 05:03:07 -- common/autotest_common.sh@10 -- # set +x 00:07:30.553 ************************************ 00:07:30.553 END TEST locking_overlapped_coremask 00:07:30.553 ************************************ 00:07:30.553 05:03:07 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:30.553 05:03:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:30.553 05:03:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:30.553 05:03:07 -- common/autotest_common.sh@10 -- # set +x 00:07:30.553 ************************************ 00:07:30.553 START TEST locking_overlapped_coremask_via_rpc 00:07:30.553 ************************************ 00:07:30.553 05:03:07 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:07:30.553 05:03:07 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1772816 00:07:30.553 05:03:07 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:30.553 05:03:07 -- event/cpu_locks.sh@149 -- # waitforlisten 1772816 /var/tmp/spdk.sock 00:07:30.553 05:03:07 -- common/autotest_common.sh@817 -- # '[' -z 1772816 ']' 00:07:30.553 05:03:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.553 05:03:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:30.553 05:03:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:30.553 05:03:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:30.553 05:03:07 -- common/autotest_common.sh@10 -- # set +x 00:07:30.553 [2024-04-24 05:03:07.695818] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:07:30.553 [2024-04-24 05:03:07.695904] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1772816 ] 00:07:30.553 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.553 [2024-04-24 05:03:07.728034] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:30.553 [2024-04-24 05:03:07.754169] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:30.553 [2024-04-24 05:03:07.754204] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:30.812 [2024-04-24 05:03:07.843828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.812 [2024-04-24 05:03:07.843886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:30.812 [2024-04-24 05:03:07.843890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.072 05:03:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:31.072 05:03:08 -- common/autotest_common.sh@850 -- # return 0 00:07:31.072 05:03:08 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1773051 00:07:31.072 05:03:08 -- event/cpu_locks.sh@153 -- # waitforlisten 1773051 /var/tmp/spdk2.sock 00:07:31.072 05:03:08 -- common/autotest_common.sh@817 -- # '[' -z 1773051 ']' 00:07:31.072 05:03:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:31.072 05:03:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:31.072 05:03:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:31.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:31.072 05:03:08 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:31.072 05:03:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:31.072 05:03:08 -- common/autotest_common.sh@10 -- # set +x 00:07:31.072 [2024-04-24 05:03:08.135072] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:07:31.072 [2024-04-24 05:03:08.135177] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1773051 ] 00:07:31.072 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.072 [2024-04-24 05:03:08.170694] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:31.072 [2024-04-24 05:03:08.224804] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:31.072 [2024-04-24 05:03:08.224830] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:31.330 [2024-04-24 05:03:08.396136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:31.330 [2024-04-24 05:03:08.399723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:31.330 [2024-04-24 05:03:08.399726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.896 05:03:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:31.896 05:03:09 -- common/autotest_common.sh@850 -- # return 0 00:07:31.896 05:03:09 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:31.896 05:03:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:31.896 05:03:09 -- common/autotest_common.sh@10 -- # set +x 00:07:31.896 05:03:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:31.896 05:03:09 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:31.896 05:03:09 -- common/autotest_common.sh@638 -- # local es=0 00:07:31.896 05:03:09 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:31.896 05:03:09 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:07:31.896 05:03:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:31.896 05:03:09 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:07:31.896 05:03:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:31.896 05:03:09 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:31.896 05:03:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:31.896 05:03:09 -- common/autotest_common.sh@10 -- # set +x 00:07:31.896 [2024-04-24 05:03:09.103727] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1772816 has claimed it. 
00:07:31.896 request: 00:07:31.896 { 00:07:31.896 "method": "framework_enable_cpumask_locks", 00:07:31.896 "req_id": 1 00:07:31.896 } 00:07:31.896 Got JSON-RPC error response 00:07:31.896 response: 00:07:31.896 { 00:07:31.896 "code": -32603, 00:07:31.896 "message": "Failed to claim CPU core: 2" 00:07:31.896 } 00:07:31.896 05:03:09 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:07:31.896 05:03:09 -- common/autotest_common.sh@641 -- # es=1 00:07:31.896 05:03:09 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:31.896 05:03:09 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:31.896 05:03:09 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:31.896 05:03:09 -- event/cpu_locks.sh@158 -- # waitforlisten 1772816 /var/tmp/spdk.sock 00:07:31.896 05:03:09 -- common/autotest_common.sh@817 -- # '[' -z 1772816 ']' 00:07:31.896 05:03:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.896 05:03:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:31.896 05:03:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:31.896 05:03:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:31.896 05:03:09 -- common/autotest_common.sh@10 -- # set +x 00:07:32.154 05:03:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:32.154 05:03:09 -- common/autotest_common.sh@850 -- # return 0 00:07:32.154 05:03:09 -- event/cpu_locks.sh@159 -- # waitforlisten 1773051 /var/tmp/spdk2.sock 00:07:32.154 05:03:09 -- common/autotest_common.sh@817 -- # '[' -z 1773051 ']' 00:07:32.154 05:03:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:32.154 05:03:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:32.154 05:03:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:32.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:32.154 05:03:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:32.154 05:03:09 -- common/autotest_common.sh@10 -- # set +x 00:07:32.412 05:03:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:32.412 05:03:09 -- common/autotest_common.sh@850 -- # return 0 00:07:32.412 05:03:09 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:32.412 05:03:09 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:32.412 05:03:09 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:32.412 05:03:09 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:32.412 00:07:32.412 real 0m1.974s 00:07:32.412 user 0m1.018s 00:07:32.412 sys 0m0.201s 00:07:32.412 05:03:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:32.412 05:03:09 -- common/autotest_common.sh@10 -- # set +x 00:07:32.412 
************************************ 00:07:32.412 END TEST locking_overlapped_coremask_via_rpc 00:07:32.412 ************************************ 00:07:32.412 05:03:09 -- event/cpu_locks.sh@174 -- # cleanup 00:07:32.412 05:03:09 -- event/cpu_locks.sh@15 -- # [[ -z 1772816 ]] 00:07:32.412 05:03:09 -- event/cpu_locks.sh@15 -- # killprocess 1772816 00:07:32.412 05:03:09 -- common/autotest_common.sh@936 -- # '[' -z 1772816 ']' 00:07:32.412 05:03:09 -- common/autotest_common.sh@940 -- # kill -0 1772816 00:07:32.412 05:03:09 -- common/autotest_common.sh@941 -- # uname 00:07:32.412 05:03:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:32.412 05:03:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1772816 00:07:32.412 05:03:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:32.412 05:03:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:32.412 05:03:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1772816' 00:07:32.412 killing process with pid 1772816 00:07:32.412 05:03:09 -- common/autotest_common.sh@955 -- # kill 1772816 00:07:32.412 05:03:09 -- common/autotest_common.sh@960 -- # wait 1772816 00:07:32.978 05:03:10 -- event/cpu_locks.sh@16 -- # [[ -z 1773051 ]] 00:07:32.978 05:03:10 -- event/cpu_locks.sh@16 -- # killprocess 1773051 00:07:32.978 05:03:10 -- common/autotest_common.sh@936 -- # '[' -z 1773051 ']' 00:07:32.978 05:03:10 -- common/autotest_common.sh@940 -- # kill -0 1773051 00:07:32.978 05:03:10 -- common/autotest_common.sh@941 -- # uname 00:07:32.978 05:03:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:32.978 05:03:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1773051 00:07:32.978 05:03:10 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:07:32.978 05:03:10 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:07:32.978 05:03:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 
1773051' 00:07:32.978 killing process with pid 1773051 00:07:32.978 05:03:10 -- common/autotest_common.sh@955 -- # kill 1773051 00:07:32.978 05:03:10 -- common/autotest_common.sh@960 -- # wait 1773051 00:07:33.544 05:03:10 -- event/cpu_locks.sh@18 -- # rm -f 00:07:33.544 05:03:10 -- event/cpu_locks.sh@1 -- # cleanup 00:07:33.544 05:03:10 -- event/cpu_locks.sh@15 -- # [[ -z 1772816 ]] 00:07:33.544 05:03:10 -- event/cpu_locks.sh@15 -- # killprocess 1772816 00:07:33.544 05:03:10 -- common/autotest_common.sh@936 -- # '[' -z 1772816 ']' 00:07:33.544 05:03:10 -- common/autotest_common.sh@940 -- # kill -0 1772816 00:07:33.544 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1772816) - No such process 00:07:33.544 05:03:10 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1772816 is not found' 00:07:33.544 Process with pid 1772816 is not found 00:07:33.544 05:03:10 -- event/cpu_locks.sh@16 -- # [[ -z 1773051 ]] 00:07:33.544 05:03:10 -- event/cpu_locks.sh@16 -- # killprocess 1773051 00:07:33.544 05:03:10 -- common/autotest_common.sh@936 -- # '[' -z 1773051 ']' 00:07:33.544 05:03:10 -- common/autotest_common.sh@940 -- # kill -0 1773051 00:07:33.544 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1773051) - No such process 00:07:33.544 05:03:10 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1773051 is not found' 00:07:33.545 Process with pid 1773051 is not found 00:07:33.545 05:03:10 -- event/cpu_locks.sh@18 -- # rm -f 00:07:33.545 00:07:33.545 real 0m16.064s 00:07:33.545 user 0m27.419s 00:07:33.545 sys 0m5.690s 00:07:33.545 05:03:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:33.545 05:03:10 -- common/autotest_common.sh@10 -- # set +x 00:07:33.545 ************************************ 00:07:33.545 END TEST cpu_locks 00:07:33.545 ************************************ 00:07:33.545 00:07:33.545 real 0m42.213s 00:07:33.545 user 1m18.628s 
00:07:33.545 sys 0m9.983s 00:07:33.545 05:03:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:33.545 05:03:10 -- common/autotest_common.sh@10 -- # set +x 00:07:33.545 ************************************ 00:07:33.545 END TEST event 00:07:33.545 ************************************ 00:07:33.545 05:03:10 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:33.545 05:03:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:33.545 05:03:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:33.545 05:03:10 -- common/autotest_common.sh@10 -- # set +x 00:07:33.545 ************************************ 00:07:33.545 START TEST thread 00:07:33.545 ************************************ 00:07:33.545 05:03:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:33.545 * Looking for test storage... 00:07:33.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:33.545 05:03:10 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:33.545 05:03:10 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:33.545 05:03:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:33.545 05:03:10 -- common/autotest_common.sh@10 -- # set +x 00:07:33.545 ************************************ 00:07:33.545 START TEST thread_poller_perf 00:07:33.545 ************************************ 00:07:33.545 05:03:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:33.803 [2024-04-24 05:03:10.819895] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 
00:07:33.803 [2024-04-24 05:03:10.819955] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1773587 ] 00:07:33.803 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.803 [2024-04-24 05:03:10.853048] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:33.803 [2024-04-24 05:03:10.879957] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.803 [2024-04-24 05:03:10.968685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.803 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:35.173 ====================================== 00:07:35.173 busy:2710364161 (cyc) 00:07:35.173 total_run_count: 299000 00:07:35.173 tsc_hz: 2700000000 (cyc) 00:07:35.173 ====================================== 00:07:35.173 poller_cost: 9064 (cyc), 3357 (nsec) 00:07:35.173 00:07:35.173 real 0m1.252s 00:07:35.173 user 0m1.165s 00:07:35.173 sys 0m0.081s 00:07:35.173 05:03:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:35.173 05:03:12 -- common/autotest_common.sh@10 -- # set +x 00:07:35.173 ************************************ 00:07:35.173 END TEST thread_poller_perf 00:07:35.173 ************************************ 00:07:35.173 05:03:12 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:35.173 05:03:12 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:35.173 05:03:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:35.173 05:03:12 -- common/autotest_common.sh@10 -- # set +x 00:07:35.173 ************************************ 00:07:35.173 START TEST thread_poller_perf 00:07:35.173 ************************************ 
00:07:35.173 05:03:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:35.173 [2024-04-24 05:03:12.197819] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:07:35.173 [2024-04-24 05:03:12.197882] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1773748 ] 00:07:35.173 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.173 [2024-04-24 05:03:12.229457] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:35.173 [2024-04-24 05:03:12.261122] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.173 [2024-04-24 05:03:12.350537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.173 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:07:36.543 ====================================== 00:07:36.543 busy:2702783658 (cyc) 00:07:36.543 total_run_count: 3846000 00:07:36.543 tsc_hz: 2700000000 (cyc) 00:07:36.543 ====================================== 00:07:36.543 poller_cost: 702 (cyc), 260 (nsec) 00:07:36.543 00:07:36.543 real 0m1.249s 00:07:36.543 user 0m1.156s 00:07:36.543 sys 0m0.087s 00:07:36.543 05:03:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:36.543 05:03:13 -- common/autotest_common.sh@10 -- # set +x 00:07:36.543 ************************************ 00:07:36.543 END TEST thread_poller_perf 00:07:36.543 ************************************ 00:07:36.543 05:03:13 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:36.543 00:07:36.543 real 0m2.800s 00:07:36.543 user 0m2.429s 00:07:36.543 sys 0m0.344s 00:07:36.543 05:03:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:36.543 05:03:13 -- common/autotest_common.sh@10 -- # set +x 00:07:36.543 ************************************ 00:07:36.543 END TEST thread 00:07:36.543 ************************************ 00:07:36.543 05:03:13 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:36.543 05:03:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:36.543 05:03:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:36.543 05:03:13 -- common/autotest_common.sh@10 -- # set +x 00:07:36.543 ************************************ 00:07:36.543 START TEST accel 00:07:36.543 ************************************ 00:07:36.543 05:03:13 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:36.543 * Looking for test storage... 
00:07:36.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:36.543 05:03:13 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:36.543 05:03:13 -- accel/accel.sh@82 -- # get_expected_opcs 00:07:36.543 05:03:13 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:36.543 05:03:13 -- accel/accel.sh@62 -- # spdk_tgt_pid=1774024 00:07:36.543 05:03:13 -- accel/accel.sh@63 -- # waitforlisten 1774024 00:07:36.543 05:03:13 -- common/autotest_common.sh@817 -- # '[' -z 1774024 ']' 00:07:36.543 05:03:13 -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:36.543 05:03:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.543 05:03:13 -- accel/accel.sh@61 -- # build_accel_config 00:07:36.543 05:03:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:36.543 05:03:13 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:36.543 05:03:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.543 05:03:13 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:36.543 05:03:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:36.543 05:03:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.543 05:03:13 -- common/autotest_common.sh@10 -- # set +x 00:07:36.543 05:03:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.543 05:03:13 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:36.543 05:03:13 -- accel/accel.sh@40 -- # local IFS=, 00:07:36.543 05:03:13 -- accel/accel.sh@41 -- # jq -r . 00:07:36.543 [2024-04-24 05:03:13.681872] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 
00:07:36.543 [2024-04-24 05:03:13.681993] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1774024 ] 00:07:36.543 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.543 [2024-04-24 05:03:13.713389] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:36.543 [2024-04-24 05:03:13.738974] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.801 [2024-04-24 05:03:13.822046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.801 05:03:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:36.801 05:03:14 -- common/autotest_common.sh@850 -- # return 0 00:07:36.801 05:03:14 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:36.801 05:03:14 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:36.801 05:03:14 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:36.801 05:03:14 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:36.801 05:03:14 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:36.801 05:03:14 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:36.801 05:03:14 -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:36.801 05:03:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:37.058 05:03:14 -- common/autotest_common.sh@10 -- # set +x 00:07:37.058 05:03:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:37.058 05:03:14 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:37.058 05:03:14 -- accel/accel.sh@72 -- # IFS== 00:07:37.058 05:03:14 -- accel/accel.sh@72 -- # read -r opc module 00:07:37.058 05:03:14 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:37.058 05:03:14 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:37.058 05:03:14 -- accel/accel.sh@72 -- # IFS== 00:07:37.058 05:03:14 -- accel/accel.sh@72 -- # read -r opc module 00:07:37.058 05:03:14 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:37.058 05:03:14 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:37.058 05:03:14 -- accel/accel.sh@72 -- # IFS== 00:07:37.058 05:03:14 -- accel/accel.sh@72 -- # read -r opc module 00:07:37.058 05:03:14 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:37.058 05:03:14 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:37.058 05:03:14 -- accel/accel.sh@72 -- # IFS== 00:07:37.058 05:03:14 -- accel/accel.sh@72 -- # read -r opc module 00:07:37.058 05:03:14 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:37.058 05:03:14 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:37.058 05:03:14 -- accel/accel.sh@72 -- # IFS== 00:07:37.058 05:03:14 -- accel/accel.sh@72 -- # read -r opc module 00:07:37.058 05:03:14 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:37.059 05:03:14 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:37.059 05:03:14 -- accel/accel.sh@72 -- # IFS== 00:07:37.059 05:03:14 -- accel/accel.sh@72 -- # read -r opc module 00:07:37.059 05:03:14 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:37.059 05:03:14 -- accel/accel.sh@71 -- # for 
opc_opt in "${exp_opcs[@]}" 00:07:37.059 05:03:14 -- accel/accel.sh@72 -- # IFS== 00:07:37.059 05:03:14 -- accel/accel.sh@72 -- # read -r opc module 00:07:37.059 05:03:14 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:37.059 05:03:14 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:37.059 05:03:14 -- accel/accel.sh@72 -- # IFS== 00:07:37.059 05:03:14 -- accel/accel.sh@72 -- # read -r opc module 00:07:37.059 05:03:14 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:37.059 05:03:14 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:37.059 05:03:14 -- accel/accel.sh@72 -- # IFS== 00:07:37.059 05:03:14 -- accel/accel.sh@72 -- # read -r opc module 00:07:37.059 05:03:14 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:37.059 05:03:14 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:37.059 05:03:14 -- accel/accel.sh@72 -- # IFS== 00:07:37.059 05:03:14 -- accel/accel.sh@72 -- # read -r opc module 00:07:37.059 05:03:14 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:37.059 05:03:14 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:37.059 05:03:14 -- accel/accel.sh@72 -- # IFS== 00:07:37.059 05:03:14 -- accel/accel.sh@72 -- # read -r opc module 00:07:37.059 05:03:14 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:37.059 05:03:14 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:37.059 05:03:14 -- accel/accel.sh@72 -- # IFS== 00:07:37.059 05:03:14 -- accel/accel.sh@72 -- # read -r opc module 00:07:37.059 05:03:14 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:37.059 05:03:14 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:37.059 05:03:14 -- accel/accel.sh@72 -- # IFS== 00:07:37.059 05:03:14 -- accel/accel.sh@72 -- # read -r opc module 00:07:37.059 05:03:14 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:37.059 05:03:14 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 
00:07:37.059 05:03:14 -- accel/accel.sh@72 -- # IFS== 00:07:37.059 05:03:14 -- accel/accel.sh@72 -- # read -r opc module 00:07:37.059 05:03:14 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:37.059 05:03:14 -- accel/accel.sh@75 -- # killprocess 1774024 00:07:37.059 05:03:14 -- common/autotest_common.sh@936 -- # '[' -z 1774024 ']' 00:07:37.059 05:03:14 -- common/autotest_common.sh@940 -- # kill -0 1774024 00:07:37.059 05:03:14 -- common/autotest_common.sh@941 -- # uname 00:07:37.059 05:03:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:37.059 05:03:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1774024 00:07:37.059 05:03:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:37.059 05:03:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:37.059 05:03:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1774024' 00:07:37.059 killing process with pid 1774024 00:07:37.059 05:03:14 -- common/autotest_common.sh@955 -- # kill 1774024 00:07:37.059 05:03:14 -- common/autotest_common.sh@960 -- # wait 1774024 00:07:37.317 05:03:14 -- accel/accel.sh@76 -- # trap - ERR 00:07:37.317 05:03:14 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:37.317 05:03:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:37.317 05:03:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:37.317 05:03:14 -- common/autotest_common.sh@10 -- # set +x 00:07:37.575 05:03:14 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:07:37.575 05:03:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:37.575 05:03:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:37.575 05:03:14 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:37.575 05:03:14 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:37.575 05:03:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.575 05:03:14 -- accel/accel.sh@34 
-- # [[ 0 -gt 0 ]] 00:07:37.575 05:03:14 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:37.575 05:03:14 -- accel/accel.sh@40 -- # local IFS=, 00:07:37.575 05:03:14 -- accel/accel.sh@41 -- # jq -r . 00:07:37.575 05:03:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:37.575 05:03:14 -- common/autotest_common.sh@10 -- # set +x 00:07:37.575 05:03:14 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:37.575 05:03:14 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:37.575 05:03:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:37.575 05:03:14 -- common/autotest_common.sh@10 -- # set +x 00:07:37.575 ************************************ 00:07:37.575 START TEST accel_missing_filename 00:07:37.575 ************************************ 00:07:37.575 05:03:14 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:07:37.575 05:03:14 -- common/autotest_common.sh@638 -- # local es=0 00:07:37.575 05:03:14 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:37.575 05:03:14 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:07:37.575 05:03:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:37.575 05:03:14 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:07:37.575 05:03:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:37.575 05:03:14 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:07:37.575 05:03:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:37.575 05:03:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:37.575 05:03:14 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:37.575 05:03:14 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:37.576 05:03:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.576 05:03:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.576 
05:03:14 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:37.576 05:03:14 -- accel/accel.sh@40 -- # local IFS=, 00:07:37.576 05:03:14 -- accel/accel.sh@41 -- # jq -r . 00:07:37.576 [2024-04-24 05:03:14.801761] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:07:37.576 [2024-04-24 05:03:14.801817] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1774245 ] 00:07:37.576 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.576 [2024-04-24 05:03:14.834652] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:37.834 [2024-04-24 05:03:14.866747] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.834 [2024-04-24 05:03:14.957829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.834 [2024-04-24 05:03:15.019367] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:38.092 [2024-04-24 05:03:15.107410] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:07:38.092 A filename is required. 
00:07:38.092 05:03:15 -- common/autotest_common.sh@641 -- # es=234 00:07:38.092 05:03:15 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:38.092 05:03:15 -- common/autotest_common.sh@650 -- # es=106 00:07:38.092 05:03:15 -- common/autotest_common.sh@651 -- # case "$es" in 00:07:38.092 05:03:15 -- common/autotest_common.sh@658 -- # es=1 00:07:38.092 05:03:15 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:38.092 00:07:38.092 real 0m0.399s 00:07:38.092 user 0m0.293s 00:07:38.092 sys 0m0.141s 00:07:38.092 05:03:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:38.092 05:03:15 -- common/autotest_common.sh@10 -- # set +x 00:07:38.092 ************************************ 00:07:38.092 END TEST accel_missing_filename 00:07:38.092 ************************************ 00:07:38.092 05:03:15 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:38.092 05:03:15 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:07:38.092 05:03:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:38.092 05:03:15 -- common/autotest_common.sh@10 -- # set +x 00:07:38.092 ************************************ 00:07:38.092 START TEST accel_compress_verify 00:07:38.092 ************************************ 00:07:38.092 05:03:15 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:38.092 05:03:15 -- common/autotest_common.sh@638 -- # local es=0 00:07:38.092 05:03:15 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:38.092 05:03:15 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:07:38.092 05:03:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:38.092 05:03:15 -- common/autotest_common.sh@630 -- # type -t 
accel_perf 00:07:38.092 05:03:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:38.092 05:03:15 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:38.092 05:03:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:38.092 05:03:15 -- accel/accel.sh@12 -- # build_accel_config 00:07:38.092 05:03:15 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:38.092 05:03:15 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:38.092 05:03:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.092 05:03:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.092 05:03:15 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:38.092 05:03:15 -- accel/accel.sh@40 -- # local IFS=, 00:07:38.092 05:03:15 -- accel/accel.sh@41 -- # jq -r . 00:07:38.092 [2024-04-24 05:03:15.318655] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:07:38.092 [2024-04-24 05:03:15.318731] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1774288 ] 00:07:38.092 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.092 [2024-04-24 05:03:15.350772] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:38.350 [2024-04-24 05:03:15.380691] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.350 [2024-04-24 05:03:15.471540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.350 [2024-04-24 05:03:15.532827] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:38.350 [2024-04-24 05:03:15.620568] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:07:38.608 00:07:38.608 Compression does not support the verify option, aborting. 00:07:38.608 05:03:15 -- common/autotest_common.sh@641 -- # es=161 00:07:38.608 05:03:15 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:38.608 05:03:15 -- common/autotest_common.sh@650 -- # es=33 00:07:38.608 05:03:15 -- common/autotest_common.sh@651 -- # case "$es" in 00:07:38.608 05:03:15 -- common/autotest_common.sh@658 -- # es=1 00:07:38.608 05:03:15 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:38.608 00:07:38.608 real 0m0.403s 00:07:38.608 user 0m0.294s 00:07:38.608 sys 0m0.142s 00:07:38.608 05:03:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:38.608 05:03:15 -- common/autotest_common.sh@10 -- # set +x 00:07:38.608 ************************************ 00:07:38.608 END TEST accel_compress_verify 00:07:38.608 ************************************ 00:07:38.608 05:03:15 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:38.608 05:03:15 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:38.608 05:03:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:38.608 05:03:15 -- common/autotest_common.sh@10 -- # set +x 00:07:38.608 ************************************ 00:07:38.608 START TEST accel_wrong_workload 00:07:38.608 ************************************ 00:07:38.608 05:03:15 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:07:38.608 05:03:15 -- common/autotest_common.sh@638 -- # local es=0 00:07:38.608 05:03:15 -- common/autotest_common.sh@640 -- # 
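Both NOT-wrapped failure tests above show the same exit-status folding in the trace: `es=234` becomes `es=106` then `es=1`, and `es=161` becomes `es=33` then `es=1`. A hypothetical reconstruction of that logic, consistent with these traces but not the actual `autotest_common.sh` source: statuses above 128 have 128 subtracted (the shell's 128+signal convention), and any remaining nonzero status collapses to 1 so the NOT helper can treat "failed as expected" uniformly.

```python
def fold_exit_status(es: int) -> int:
    # Statuses above 128 indicate termination by signal (128 + signo);
    # recover the underlying number first, as the traces above suggest.
    if es > 128:
        es -= 128
    # Collapse any remaining nonzero status to 1 for the NOT() check.
    return 1 if es != 0 else 0

print(fold_exit_status(234), fold_exit_status(161))
```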
valid_exec_arg accel_perf -t 1 -w foobar 00:07:38.608 05:03:15 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:07:38.608 05:03:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:38.608 05:03:15 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:07:38.608 05:03:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:38.608 05:03:15 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:07:38.608 05:03:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:38.608 05:03:15 -- accel/accel.sh@12 -- # build_accel_config 00:07:38.608 05:03:15 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:38.608 05:03:15 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:38.608 05:03:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.608 05:03:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.608 05:03:15 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:38.608 05:03:15 -- accel/accel.sh@40 -- # local IFS=, 00:07:38.608 05:03:15 -- accel/accel.sh@41 -- # jq -r . 00:07:38.608 Unsupported workload type: foobar 00:07:38.608 [2024-04-24 05:03:15.832239] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:38.608 accel_perf options: 00:07:38.608 [-h help message] 00:07:38.608 [-q queue depth per core] 00:07:38.608 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:38.608 [-T number of threads per core 00:07:38.608 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:07:38.608 [-t time in seconds] 00:07:38.608 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:38.608 [ dif_verify, , dif_generate, dif_generate_copy 00:07:38.608 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:38.608 [-l for compress/decompress workloads, name of uncompressed input file 00:07:38.608 [-S for crc32c workload, use this seed value (default 0) 00:07:38.608 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:38.608 [-f for fill workload, use this BYTE value (default 255) 00:07:38.608 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:38.608 [-y verify result if this switch is on] 00:07:38.608 [-a tasks to allocate per core (default: same value as -q)] 00:07:38.608 Can be used to spread operations across a wider range of memory. 00:07:38.608 05:03:15 -- common/autotest_common.sh@641 -- # es=1 00:07:38.608 05:03:15 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:38.608 05:03:15 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:38.608 05:03:15 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:38.608 00:07:38.608 real 0m0.021s 00:07:38.608 user 0m0.014s 00:07:38.608 sys 0m0.006s 00:07:38.608 05:03:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:38.608 05:03:15 -- common/autotest_common.sh@10 -- # set +x 00:07:38.608 ************************************ 00:07:38.608 END TEST accel_wrong_workload 00:07:38.608 ************************************ 00:07:38.608 Error: writing output failed: Broken pipe 00:07:38.608 05:03:15 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:38.608 05:03:15 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:07:38.608 05:03:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:07:38.608 05:03:15 -- common/autotest_common.sh@10 -- # set +x 00:07:38.867 ************************************ 00:07:38.867 START TEST accel_negative_buffers 00:07:38.867 ************************************ 00:07:38.867 05:03:15 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:38.867 05:03:15 -- common/autotest_common.sh@638 -- # local es=0 00:07:38.867 05:03:15 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:38.867 05:03:15 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:07:38.867 05:03:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:38.867 05:03:15 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:07:38.867 05:03:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:38.867 05:03:15 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:07:38.867 05:03:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:38.867 05:03:15 -- accel/accel.sh@12 -- # build_accel_config 00:07:38.867 05:03:15 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:38.867 05:03:15 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:38.867 05:03:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.867 05:03:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.867 05:03:15 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:38.867 05:03:15 -- accel/accel.sh@40 -- # local IFS=, 00:07:38.867 05:03:15 -- accel/accel.sh@41 -- # jq -r . 00:07:38.867 -x option must be non-negative. 
00:07:38.867 [2024-04-24 05:03:15.972846] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:38.867 accel_perf options: 00:07:38.867 [-h help message] 00:07:38.867 [-q queue depth per core] 00:07:38.867 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:38.867 [-T number of threads per core 00:07:38.867 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:38.867 [-t time in seconds] 00:07:38.867 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:38.867 [ dif_verify, , dif_generate, dif_generate_copy 00:07:38.867 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:38.867 [-l for compress/decompress workloads, name of uncompressed input file 00:07:38.867 [-S for crc32c workload, use this seed value (default 0) 00:07:38.867 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:38.867 [-f for fill workload, use this BYTE value (default 255) 00:07:38.867 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:38.867 [-y verify result if this switch is on] 00:07:38.867 [-a tasks to allocate per core (default: same value as -q)] 00:07:38.867 Can be used to spread operations across a wider range of memory. 
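The `-x` option rejected in the test above sets how many source buffers feed the xor workload, with a minimum of 2 per the help text. As a plain illustration of that operation's semantics (a generic sketch, not SPDK's accel implementation):

```python
from functools import reduce

def xor_buffers(*srcs: bytes) -> bytes:
    # XOR an arbitrary number of equal-length source buffers together,
    # mirroring the xor workload's minimum of two sources.
    if len(srcs) < 2:
        raise ValueError("xor workload needs at least 2 source buffers")
    if len({len(s) for s in srcs}) != 1:
        raise ValueError("source buffers must be the same length")
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*srcs))

print(xor_buffers(b"\x0f\xf0", b"\xff\xff").hex())
```

A negative `-x` is nonsensical under this model, which is why accel_perf rejects it up front.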
00:07:38.867 05:03:15 -- common/autotest_common.sh@641 -- # es=1 00:07:38.867 05:03:15 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:38.867 05:03:15 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:38.867 05:03:15 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:38.867 00:07:38.867 real 0m0.021s 00:07:38.867 user 0m0.015s 00:07:38.867 sys 0m0.006s 00:07:38.867 05:03:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:38.867 05:03:15 -- common/autotest_common.sh@10 -- # set +x 00:07:38.867 ************************************ 00:07:38.867 END TEST accel_negative_buffers 00:07:38.867 ************************************ 00:07:38.867 Error: writing output failed: Broken pipe 00:07:38.867 05:03:15 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:38.867 05:03:15 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:38.867 05:03:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:38.867 05:03:15 -- common/autotest_common.sh@10 -- # set +x 00:07:38.867 ************************************ 00:07:38.867 START TEST accel_crc32c 00:07:38.867 ************************************ 00:07:38.867 05:03:16 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:38.867 05:03:16 -- accel/accel.sh@16 -- # local accel_opc 00:07:38.867 05:03:16 -- accel/accel.sh@17 -- # local accel_module 00:07:38.867 05:03:16 -- accel/accel.sh@19 -- # IFS=: 00:07:38.867 05:03:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:38.867 05:03:16 -- accel/accel.sh@19 -- # read -r var val 00:07:38.867 05:03:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:38.867 05:03:16 -- accel/accel.sh@12 -- # build_accel_config 00:07:38.867 05:03:16 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:38.867 05:03:16 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:38.867 05:03:16 -- 
accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.867 05:03:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.867 05:03:16 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:38.867 05:03:16 -- accel/accel.sh@40 -- # local IFS=, 00:07:38.867 05:03:16 -- accel/accel.sh@41 -- # jq -r . 00:07:38.867 [2024-04-24 05:03:16.110856] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:07:38.867 [2024-04-24 05:03:16.110918] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1774487 ] 00:07:39.126 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.126 [2024-04-24 05:03:16.143927] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:39.126 [2024-04-24 05:03:16.173998] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.126 [2024-04-24 05:03:16.265003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.126 05:03:16 -- accel/accel.sh@20 -- # val= 00:07:39.126 05:03:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.126 05:03:16 -- accel/accel.sh@19 -- # IFS=: 00:07:39.126 05:03:16 -- accel/accel.sh@19 -- # read -r var val 00:07:39.126 05:03:16 -- accel/accel.sh@20 -- # val= 00:07:39.126 05:03:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.126 05:03:16 -- accel/accel.sh@19 -- # IFS=: 00:07:39.126 05:03:16 -- accel/accel.sh@19 -- # read -r var val 00:07:39.126 05:03:16 -- accel/accel.sh@20 -- # val=0x1 00:07:39.126 05:03:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.126 05:03:16 -- accel/accel.sh@19 -- # IFS=: 00:07:39.126 05:03:16 -- accel/accel.sh@19 -- # read -r var val 00:07:39.126 05:03:16 -- accel/accel.sh@20 -- # val= 00:07:39.126 05:03:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.126 05:03:16 -- 
accel/accel.sh@19 -- # IFS=: 00:07:39.126 05:03:16 -- accel/accel.sh@19 -- # read -r var val 00:07:39.126 05:03:16 -- accel/accel.sh@20 -- # val= 00:07:39.126 05:03:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.126 05:03:16 -- accel/accel.sh@19 -- # IFS=: 00:07:39.126 05:03:16 -- accel/accel.sh@19 -- # read -r var val 00:07:39.126 05:03:16 -- accel/accel.sh@20 -- # val=crc32c 00:07:39.126 05:03:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.126 05:03:16 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:39.126 05:03:16 -- accel/accel.sh@19 -- # IFS=: 00:07:39.126 05:03:16 -- accel/accel.sh@19 -- # read -r var val 00:07:39.126 05:03:16 -- accel/accel.sh@20 -- # val=32 00:07:39.126 05:03:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.126 05:03:16 -- accel/accel.sh@19 -- # IFS=: 00:07:39.126 05:03:16 -- accel/accel.sh@19 -- # read -r var val 00:07:39.126 05:03:16 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:39.126 05:03:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.126 05:03:16 -- accel/accel.sh@19 -- # IFS=: 00:07:39.126 05:03:16 -- accel/accel.sh@19 -- # read -r var val 00:07:39.126 05:03:16 -- accel/accel.sh@20 -- # val= 00:07:39.126 05:03:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.126 05:03:16 -- accel/accel.sh@19 -- # IFS=: 00:07:39.126 05:03:16 -- accel/accel.sh@19 -- # read -r var val 00:07:39.126 05:03:16 -- accel/accel.sh@20 -- # val=software 00:07:39.126 05:03:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.126 05:03:16 -- accel/accel.sh@22 -- # accel_module=software 00:07:39.126 05:03:16 -- accel/accel.sh@19 -- # IFS=: 00:07:39.126 05:03:16 -- accel/accel.sh@19 -- # read -r var val 00:07:39.126 05:03:16 -- accel/accel.sh@20 -- # val=32 00:07:39.126 05:03:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.126 05:03:16 -- accel/accel.sh@19 -- # IFS=: 00:07:39.126 05:03:16 -- accel/accel.sh@19 -- # read -r var val 00:07:39.126 05:03:16 -- accel/accel.sh@20 -- # val=32 00:07:39.126 05:03:16 -- accel/accel.sh@21 -- # 
case "$var" in 00:07:39.126 05:03:16 -- accel/accel.sh@19 -- # IFS=: 00:07:39.126 05:03:16 -- accel/accel.sh@19 -- # read -r var val 00:07:39.126 05:03:16 -- accel/accel.sh@20 -- # val=1 00:07:39.126 05:03:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.126 05:03:16 -- accel/accel.sh@19 -- # IFS=: 00:07:39.126 05:03:16 -- accel/accel.sh@19 -- # read -r var val 00:07:39.126 05:03:16 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:39.126 05:03:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.126 05:03:16 -- accel/accel.sh@19 -- # IFS=: 00:07:39.126 05:03:16 -- accel/accel.sh@19 -- # read -r var val 00:07:39.126 05:03:16 -- accel/accel.sh@20 -- # val=Yes 00:07:39.126 05:03:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.126 05:03:16 -- accel/accel.sh@19 -- # IFS=: 00:07:39.126 05:03:16 -- accel/accel.sh@19 -- # read -r var val 00:07:39.126 05:03:16 -- accel/accel.sh@20 -- # val= 00:07:39.126 05:03:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.126 05:03:16 -- accel/accel.sh@19 -- # IFS=: 00:07:39.126 05:03:16 -- accel/accel.sh@19 -- # read -r var val 00:07:39.126 05:03:16 -- accel/accel.sh@20 -- # val= 00:07:39.126 05:03:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.126 05:03:16 -- accel/accel.sh@19 -- # IFS=: 00:07:39.126 05:03:16 -- accel/accel.sh@19 -- # read -r var val 00:07:40.500 05:03:17 -- accel/accel.sh@20 -- # val= 00:07:40.500 05:03:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.500 05:03:17 -- accel/accel.sh@19 -- # IFS=: 00:07:40.500 05:03:17 -- accel/accel.sh@19 -- # read -r var val 00:07:40.500 05:03:17 -- accel/accel.sh@20 -- # val= 00:07:40.500 05:03:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.500 05:03:17 -- accel/accel.sh@19 -- # IFS=: 00:07:40.500 05:03:17 -- accel/accel.sh@19 -- # read -r var val 00:07:40.500 05:03:17 -- accel/accel.sh@20 -- # val= 00:07:40.500 05:03:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.500 05:03:17 -- accel/accel.sh@19 -- # IFS=: 00:07:40.500 05:03:17 -- accel/accel.sh@19 
-- # read -r var val 00:07:40.500 05:03:17 -- accel/accel.sh@20 -- # val= 00:07:40.500 05:03:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.500 05:03:17 -- accel/accel.sh@19 -- # IFS=: 00:07:40.500 05:03:17 -- accel/accel.sh@19 -- # read -r var val 00:07:40.500 05:03:17 -- accel/accel.sh@20 -- # val= 00:07:40.500 05:03:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.500 05:03:17 -- accel/accel.sh@19 -- # IFS=: 00:07:40.500 05:03:17 -- accel/accel.sh@19 -- # read -r var val 00:07:40.500 05:03:17 -- accel/accel.sh@20 -- # val= 00:07:40.500 05:03:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.500 05:03:17 -- accel/accel.sh@19 -- # IFS=: 00:07:40.500 05:03:17 -- accel/accel.sh@19 -- # read -r var val 00:07:40.500 05:03:17 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:40.500 05:03:17 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:40.500 05:03:17 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:40.500 00:07:40.500 real 0m1.398s 00:07:40.500 user 0m1.263s 00:07:40.500 sys 0m0.137s 00:07:40.500 05:03:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:40.500 05:03:17 -- common/autotest_common.sh@10 -- # set +x 00:07:40.500 ************************************ 00:07:40.500 END TEST accel_crc32c 00:07:40.500 ************************************ 00:07:40.500 05:03:17 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:40.500 05:03:17 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:40.500 05:03:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:40.500 05:03:17 -- common/autotest_common.sh@10 -- # set +x 00:07:40.500 ************************************ 00:07:40.500 START TEST accel_crc32c_C2 00:07:40.500 ************************************ 00:07:40.500 05:03:17 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:40.500 05:03:17 -- accel/accel.sh@16 -- # local accel_opc 00:07:40.500 05:03:17 -- accel/accel.sh@17 -- # local accel_module 
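The crc32c test above ran with `-S 32` (seed) on 4096-byte buffers through the software module. For reference, a minimal bitwise CRC-32C (Castagnoli, reflected polynomial 0x82F63B78) in Python; this is a generic textbook implementation for illustration, not SPDK's accel code, and it does not attempt to reproduce accel_perf's seed handling:

```python
def crc32c(data: bytes, crc: int = 0) -> int:
    # Bit-at-a-time CRC-32C (Castagnoli), reflected polynomial 0x82F63B78,
    # with the conventional initial/final inversion.
    crc ^= 0xFFFFFFFF
    for b in data:
        crc ^= b
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 & -(crc & 1))
    return crc ^ 0xFFFFFFFF

print(hex(crc32c(b"123456789")))
```

The standard check value for CRC-32C over the ASCII string "123456789" is 0xE3069283, which this sketch reproduces.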
00:07:40.500 05:03:17 -- accel/accel.sh@19 -- # IFS=: 00:07:40.500 05:03:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:40.500 05:03:17 -- accel/accel.sh@19 -- # read -r var val 00:07:40.500 05:03:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:40.500 05:03:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:40.500 05:03:17 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:40.500 05:03:17 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:40.500 05:03:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.500 05:03:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.500 05:03:17 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:40.500 05:03:17 -- accel/accel.sh@40 -- # local IFS=, 00:07:40.500 05:03:17 -- accel/accel.sh@41 -- # jq -r . 00:07:40.500 [2024-04-24 05:03:17.627883] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:07:40.500 [2024-04-24 05:03:17.627966] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1774658 ] 00:07:40.500 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.500 [2024-04-24 05:03:17.660338] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:40.500 [2024-04-24 05:03:17.691034] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.759 [2024-04-24 05:03:17.783176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.759 05:03:17 -- accel/accel.sh@20 -- # val= 00:07:40.759 05:03:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.759 05:03:17 -- accel/accel.sh@19 -- # IFS=: 00:07:40.759 05:03:17 -- accel/accel.sh@19 -- # read -r var val 00:07:40.759 05:03:17 -- accel/accel.sh@20 -- # val= 00:07:40.759 05:03:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.759 05:03:17 -- accel/accel.sh@19 -- # IFS=: 00:07:40.759 05:03:17 -- accel/accel.sh@19 -- # read -r var val 00:07:40.759 05:03:17 -- accel/accel.sh@20 -- # val=0x1 00:07:40.759 05:03:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.759 05:03:17 -- accel/accel.sh@19 -- # IFS=: 00:07:40.759 05:03:17 -- accel/accel.sh@19 -- # read -r var val 00:07:40.759 05:03:17 -- accel/accel.sh@20 -- # val= 00:07:40.759 05:03:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.759 05:03:17 -- accel/accel.sh@19 -- # IFS=: 00:07:40.759 05:03:17 -- accel/accel.sh@19 -- # read -r var val 00:07:40.759 05:03:17 -- accel/accel.sh@20 -- # val= 00:07:40.759 05:03:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.759 05:03:17 -- accel/accel.sh@19 -- # IFS=: 00:07:40.759 05:03:17 -- accel/accel.sh@19 -- # read -r var val 00:07:40.759 05:03:17 -- accel/accel.sh@20 -- # val=crc32c 00:07:40.759 05:03:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.759 05:03:17 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:40.759 05:03:17 -- accel/accel.sh@19 -- # IFS=: 00:07:40.759 05:03:17 -- accel/accel.sh@19 -- # read -r var val 00:07:40.759 05:03:17 -- accel/accel.sh@20 -- # val=0 00:07:40.759 05:03:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.759 05:03:17 -- accel/accel.sh@19 -- # IFS=: 00:07:40.759 05:03:17 -- accel/accel.sh@19 -- # read -r var val 00:07:40.759 05:03:17 -- accel/accel.sh@20 -- # val='4096 bytes' 
00:07:40.759 05:03:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.759 05:03:17 -- accel/accel.sh@19 -- # IFS=: 00:07:40.759 05:03:17 -- accel/accel.sh@19 -- # read -r var val 00:07:40.759 05:03:17 -- accel/accel.sh@20 -- # val= 00:07:40.759 05:03:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.759 05:03:17 -- accel/accel.sh@19 -- # IFS=: 00:07:40.759 05:03:17 -- accel/accel.sh@19 -- # read -r var val 00:07:40.759 05:03:17 -- accel/accel.sh@20 -- # val=software 00:07:40.759 05:03:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.759 05:03:17 -- accel/accel.sh@22 -- # accel_module=software 00:07:40.759 05:03:17 -- accel/accel.sh@19 -- # IFS=: 00:07:40.759 05:03:17 -- accel/accel.sh@19 -- # read -r var val 00:07:40.759 05:03:17 -- accel/accel.sh@20 -- # val=32 00:07:40.759 05:03:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.759 05:03:17 -- accel/accel.sh@19 -- # IFS=: 00:07:40.759 05:03:17 -- accel/accel.sh@19 -- # read -r var val 00:07:40.759 05:03:17 -- accel/accel.sh@20 -- # val=32 00:07:40.759 05:03:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.759 05:03:17 -- accel/accel.sh@19 -- # IFS=: 00:07:40.759 05:03:17 -- accel/accel.sh@19 -- # read -r var val 00:07:40.759 05:03:17 -- accel/accel.sh@20 -- # val=1 00:07:40.759 05:03:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.759 05:03:17 -- accel/accel.sh@19 -- # IFS=: 00:07:40.759 05:03:17 -- accel/accel.sh@19 -- # read -r var val 00:07:40.759 05:03:17 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:40.759 05:03:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.759 05:03:17 -- accel/accel.sh@19 -- # IFS=: 00:07:40.759 05:03:17 -- accel/accel.sh@19 -- # read -r var val 00:07:40.759 05:03:17 -- accel/accel.sh@20 -- # val=Yes 00:07:40.759 05:03:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.759 05:03:17 -- accel/accel.sh@19 -- # IFS=: 00:07:40.759 05:03:17 -- accel/accel.sh@19 -- # read -r var val 00:07:40.759 05:03:17 -- accel/accel.sh@20 -- # val= 00:07:40.759 05:03:17 -- 
accel/accel.sh@21 -- # case "$var" in 00:07:40.759 05:03:17 -- accel/accel.sh@19 -- # IFS=: 00:07:40.759 05:03:17 -- accel/accel.sh@19 -- # read -r var val 00:07:40.759 05:03:17 -- accel/accel.sh@20 -- # val= 00:07:40.759 05:03:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.759 05:03:17 -- accel/accel.sh@19 -- # IFS=: 00:07:40.759 05:03:17 -- accel/accel.sh@19 -- # read -r var val 00:07:42.132 05:03:19 -- accel/accel.sh@20 -- # val= 00:07:42.132 05:03:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.132 05:03:19 -- accel/accel.sh@19 -- # IFS=: 00:07:42.132 05:03:19 -- accel/accel.sh@19 -- # read -r var val 00:07:42.132 05:03:19 -- accel/accel.sh@20 -- # val= 00:07:42.132 05:03:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.132 05:03:19 -- accel/accel.sh@19 -- # IFS=: 00:07:42.132 05:03:19 -- accel/accel.sh@19 -- # read -r var val 00:07:42.132 05:03:19 -- accel/accel.sh@20 -- # val= 00:07:42.132 05:03:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.132 05:03:19 -- accel/accel.sh@19 -- # IFS=: 00:07:42.132 05:03:19 -- accel/accel.sh@19 -- # read -r var val 00:07:42.132 05:03:19 -- accel/accel.sh@20 -- # val= 00:07:42.132 05:03:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.132 05:03:19 -- accel/accel.sh@19 -- # IFS=: 00:07:42.132 05:03:19 -- accel/accel.sh@19 -- # read -r var val 00:07:42.132 05:03:19 -- accel/accel.sh@20 -- # val= 00:07:42.132 05:03:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.132 05:03:19 -- accel/accel.sh@19 -- # IFS=: 00:07:42.132 05:03:19 -- accel/accel.sh@19 -- # read -r var val 00:07:42.132 05:03:19 -- accel/accel.sh@20 -- # val= 00:07:42.132 05:03:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.132 05:03:19 -- accel/accel.sh@19 -- # IFS=: 00:07:42.132 05:03:19 -- accel/accel.sh@19 -- # read -r var val 00:07:42.132 05:03:19 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:42.132 05:03:19 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:42.132 05:03:19 -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:07:42.132 00:07:42.132 real 0m1.411s 00:07:42.132 user 0m1.271s 00:07:42.132 sys 0m0.143s 00:07:42.132 05:03:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:42.132 05:03:19 -- common/autotest_common.sh@10 -- # set +x 00:07:42.132 ************************************ 00:07:42.132 END TEST accel_crc32c_C2 00:07:42.133 ************************************ 00:07:42.133 05:03:19 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:42.133 05:03:19 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:42.133 05:03:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:42.133 05:03:19 -- common/autotest_common.sh@10 -- # set +x 00:07:42.133 ************************************ 00:07:42.133 START TEST accel_copy 00:07:42.133 ************************************ 00:07:42.133 05:03:19 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:07:42.133 05:03:19 -- accel/accel.sh@16 -- # local accel_opc 00:07:42.133 05:03:19 -- accel/accel.sh@17 -- # local accel_module 00:07:42.133 05:03:19 -- accel/accel.sh@19 -- # IFS=: 00:07:42.133 05:03:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:42.133 05:03:19 -- accel/accel.sh@19 -- # read -r var val 00:07:42.133 05:03:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:42.133 05:03:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:42.133 05:03:19 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:42.133 05:03:19 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:42.133 05:03:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.133 05:03:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.133 05:03:19 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:42.133 05:03:19 -- accel/accel.sh@40 -- # local IFS=, 00:07:42.133 05:03:19 -- accel/accel.sh@41 -- # jq -r . 
00:07:42.133 [2024-04-24 05:03:19.155424] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:07:42.133 [2024-04-24 05:03:19.155492] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1774821 ] 00:07:42.133 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.133 [2024-04-24 05:03:19.187596] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:42.133 [2024-04-24 05:03:19.217743] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.133 [2024-04-24 05:03:19.310062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.133 05:03:19 -- accel/accel.sh@20 -- # val= 00:07:42.133 05:03:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.133 05:03:19 -- accel/accel.sh@19 -- # IFS=: 00:07:42.133 05:03:19 -- accel/accel.sh@19 -- # read -r var val 00:07:42.133 05:03:19 -- accel/accel.sh@20 -- # val= 00:07:42.133 05:03:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.133 05:03:19 -- accel/accel.sh@19 -- # IFS=: 00:07:42.133 05:03:19 -- accel/accel.sh@19 -- # read -r var val 00:07:42.133 05:03:19 -- accel/accel.sh@20 -- # val=0x1 00:07:42.133 05:03:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.133 05:03:19 -- accel/accel.sh@19 -- # IFS=: 00:07:42.133 05:03:19 -- accel/accel.sh@19 -- # read -r var val 00:07:42.133 05:03:19 -- accel/accel.sh@20 -- # val= 00:07:42.133 05:03:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.133 05:03:19 -- accel/accel.sh@19 -- # IFS=: 00:07:42.133 05:03:19 -- accel/accel.sh@19 -- # read -r var val 00:07:42.133 05:03:19 -- accel/accel.sh@20 -- # val= 00:07:42.133 05:03:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.133 05:03:19 -- accel/accel.sh@19 -- # IFS=: 00:07:42.133 05:03:19 -- 
accel/accel.sh@19 -- # read -r var val 00:07:42.133 05:03:19 -- accel/accel.sh@20 -- # val=copy 00:07:42.133 05:03:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.133 05:03:19 -- accel/accel.sh@23 -- # accel_opc=copy 00:07:42.133 05:03:19 -- accel/accel.sh@19 -- # IFS=: 00:07:42.133 05:03:19 -- accel/accel.sh@19 -- # read -r var val 00:07:42.133 05:03:19 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:42.133 05:03:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.133 05:03:19 -- accel/accel.sh@19 -- # IFS=: 00:07:42.133 05:03:19 -- accel/accel.sh@19 -- # read -r var val 00:07:42.133 05:03:19 -- accel/accel.sh@20 -- # val= 00:07:42.133 05:03:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.133 05:03:19 -- accel/accel.sh@19 -- # IFS=: 00:07:42.133 05:03:19 -- accel/accel.sh@19 -- # read -r var val 00:07:42.133 05:03:19 -- accel/accel.sh@20 -- # val=software 00:07:42.133 05:03:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.133 05:03:19 -- accel/accel.sh@22 -- # accel_module=software 00:07:42.133 05:03:19 -- accel/accel.sh@19 -- # IFS=: 00:07:42.133 05:03:19 -- accel/accel.sh@19 -- # read -r var val 00:07:42.133 05:03:19 -- accel/accel.sh@20 -- # val=32 00:07:42.133 05:03:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.133 05:03:19 -- accel/accel.sh@19 -- # IFS=: 00:07:42.133 05:03:19 -- accel/accel.sh@19 -- # read -r var val 00:07:42.133 05:03:19 -- accel/accel.sh@20 -- # val=32 00:07:42.133 05:03:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.133 05:03:19 -- accel/accel.sh@19 -- # IFS=: 00:07:42.133 05:03:19 -- accel/accel.sh@19 -- # read -r var val 00:07:42.133 05:03:19 -- accel/accel.sh@20 -- # val=1 00:07:42.133 05:03:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.133 05:03:19 -- accel/accel.sh@19 -- # IFS=: 00:07:42.133 05:03:19 -- accel/accel.sh@19 -- # read -r var val 00:07:42.133 05:03:19 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:42.133 05:03:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.133 05:03:19 -- 
accel/accel.sh@19 -- # IFS=: 00:07:42.133 05:03:19 -- accel/accel.sh@19 -- # read -r var val 00:07:42.133 05:03:19 -- accel/accel.sh@20 -- # val=Yes 00:07:42.133 05:03:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.133 05:03:19 -- accel/accel.sh@19 -- # IFS=: 00:07:42.133 05:03:19 -- accel/accel.sh@19 -- # read -r var val 00:07:42.133 05:03:19 -- accel/accel.sh@20 -- # val= 00:07:42.133 05:03:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.133 05:03:19 -- accel/accel.sh@19 -- # IFS=: 00:07:42.133 05:03:19 -- accel/accel.sh@19 -- # read -r var val 00:07:42.133 05:03:19 -- accel/accel.sh@20 -- # val= 00:07:42.133 05:03:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.133 05:03:19 -- accel/accel.sh@19 -- # IFS=: 00:07:42.133 05:03:19 -- accel/accel.sh@19 -- # read -r var val 00:07:43.517 05:03:20 -- accel/accel.sh@20 -- # val= 00:07:43.517 05:03:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.517 05:03:20 -- accel/accel.sh@19 -- # IFS=: 00:07:43.517 05:03:20 -- accel/accel.sh@19 -- # read -r var val 00:07:43.517 05:03:20 -- accel/accel.sh@20 -- # val= 00:07:43.517 05:03:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.517 05:03:20 -- accel/accel.sh@19 -- # IFS=: 00:07:43.517 05:03:20 -- accel/accel.sh@19 -- # read -r var val 00:07:43.517 05:03:20 -- accel/accel.sh@20 -- # val= 00:07:43.517 05:03:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.517 05:03:20 -- accel/accel.sh@19 -- # IFS=: 00:07:43.517 05:03:20 -- accel/accel.sh@19 -- # read -r var val 00:07:43.517 05:03:20 -- accel/accel.sh@20 -- # val= 00:07:43.517 05:03:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.517 05:03:20 -- accel/accel.sh@19 -- # IFS=: 00:07:43.517 05:03:20 -- accel/accel.sh@19 -- # read -r var val 00:07:43.517 05:03:20 -- accel/accel.sh@20 -- # val= 00:07:43.517 05:03:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.517 05:03:20 -- accel/accel.sh@19 -- # IFS=: 00:07:43.517 05:03:20 -- accel/accel.sh@19 -- # read -r var val 00:07:43.517 05:03:20 -- 
accel/accel.sh@20 -- # val= 00:07:43.517 05:03:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.517 05:03:20 -- accel/accel.sh@19 -- # IFS=: 00:07:43.517 05:03:20 -- accel/accel.sh@19 -- # read -r var val 00:07:43.517 05:03:20 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:43.517 05:03:20 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:43.517 05:03:20 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:43.517 00:07:43.517 real 0m1.405s 00:07:43.517 user 0m1.260s 00:07:43.517 sys 0m0.146s 00:07:43.517 05:03:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:43.517 05:03:20 -- common/autotest_common.sh@10 -- # set +x 00:07:43.517 ************************************ 00:07:43.517 END TEST accel_copy 00:07:43.517 ************************************ 00:07:43.517 05:03:20 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:43.517 05:03:20 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:43.517 05:03:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:43.517 05:03:20 -- common/autotest_common.sh@10 -- # set +x 00:07:43.517 ************************************ 00:07:43.517 START TEST accel_fill 00:07:43.517 ************************************ 00:07:43.517 05:03:20 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:43.517 05:03:20 -- accel/accel.sh@16 -- # local accel_opc 00:07:43.517 05:03:20 -- accel/accel.sh@17 -- # local accel_module 00:07:43.517 05:03:20 -- accel/accel.sh@19 -- # IFS=: 00:07:43.517 05:03:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:43.517 05:03:20 -- accel/accel.sh@19 -- # read -r var val 00:07:43.517 05:03:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:43.517 05:03:20 -- accel/accel.sh@12 -- # build_accel_config 00:07:43.517 05:03:20 -- accel/accel.sh@31 -- # 
accel_json_cfg=() 00:07:43.518 05:03:20 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:43.518 05:03:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.518 05:03:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.518 05:03:20 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:43.518 05:03:20 -- accel/accel.sh@40 -- # local IFS=, 00:07:43.518 05:03:20 -- accel/accel.sh@41 -- # jq -r . 00:07:43.518 [2024-04-24 05:03:20.680075] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:07:43.518 [2024-04-24 05:03:20.680139] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1775096 ] 00:07:43.518 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.518 [2024-04-24 05:03:20.712249] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:43.518 [2024-04-24 05:03:20.744033] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.776 [2024-04-24 05:03:20.834607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.776 05:03:20 -- accel/accel.sh@20 -- # val= 00:07:43.776 05:03:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.776 05:03:20 -- accel/accel.sh@19 -- # IFS=: 00:07:43.776 05:03:20 -- accel/accel.sh@19 -- # read -r var val 00:07:43.776 05:03:20 -- accel/accel.sh@20 -- # val= 00:07:43.776 05:03:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.777 05:03:20 -- accel/accel.sh@19 -- # IFS=: 00:07:43.777 05:03:20 -- accel/accel.sh@19 -- # read -r var val 00:07:43.777 05:03:20 -- accel/accel.sh@20 -- # val=0x1 00:07:43.777 05:03:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.777 05:03:20 -- accel/accel.sh@19 -- # IFS=: 00:07:43.777 05:03:20 -- accel/accel.sh@19 -- # read -r var val 00:07:43.777 05:03:20 -- accel/accel.sh@20 -- # val= 00:07:43.777 05:03:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.777 05:03:20 -- accel/accel.sh@19 -- # IFS=: 00:07:43.777 05:03:20 -- accel/accel.sh@19 -- # read -r var val 00:07:43.777 05:03:20 -- accel/accel.sh@20 -- # val= 00:07:43.777 05:03:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.777 05:03:20 -- accel/accel.sh@19 -- # IFS=: 00:07:43.777 05:03:20 -- accel/accel.sh@19 -- # read -r var val 00:07:43.777 05:03:20 -- accel/accel.sh@20 -- # val=fill 00:07:43.777 05:03:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.777 05:03:20 -- accel/accel.sh@23 -- # accel_opc=fill 00:07:43.777 05:03:20 -- accel/accel.sh@19 -- # IFS=: 00:07:43.777 05:03:20 -- accel/accel.sh@19 -- # read -r var val 00:07:43.777 05:03:20 -- accel/accel.sh@20 -- # val=0x80 00:07:43.777 05:03:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.777 05:03:20 -- accel/accel.sh@19 -- # IFS=: 00:07:43.777 05:03:20 -- accel/accel.sh@19 -- # read -r var val 00:07:43.777 05:03:20 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:43.777 
05:03:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.777 05:03:20 -- accel/accel.sh@19 -- # IFS=: 00:07:43.777 05:03:20 -- accel/accel.sh@19 -- # read -r var val 00:07:43.777 05:03:20 -- accel/accel.sh@20 -- # val= 00:07:43.777 05:03:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.777 05:03:20 -- accel/accel.sh@19 -- # IFS=: 00:07:43.777 05:03:20 -- accel/accel.sh@19 -- # read -r var val 00:07:43.777 05:03:20 -- accel/accel.sh@20 -- # val=software 00:07:43.777 05:03:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.777 05:03:20 -- accel/accel.sh@22 -- # accel_module=software 00:07:43.777 05:03:20 -- accel/accel.sh@19 -- # IFS=: 00:07:43.777 05:03:20 -- accel/accel.sh@19 -- # read -r var val 00:07:43.777 05:03:20 -- accel/accel.sh@20 -- # val=64 00:07:43.777 05:03:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.777 05:03:20 -- accel/accel.sh@19 -- # IFS=: 00:07:43.777 05:03:20 -- accel/accel.sh@19 -- # read -r var val 00:07:43.777 05:03:20 -- accel/accel.sh@20 -- # val=64 00:07:43.777 05:03:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.777 05:03:20 -- accel/accel.sh@19 -- # IFS=: 00:07:43.777 05:03:20 -- accel/accel.sh@19 -- # read -r var val 00:07:43.777 05:03:20 -- accel/accel.sh@20 -- # val=1 00:07:43.777 05:03:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.777 05:03:20 -- accel/accel.sh@19 -- # IFS=: 00:07:43.777 05:03:20 -- accel/accel.sh@19 -- # read -r var val 00:07:43.777 05:03:20 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:43.777 05:03:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.777 05:03:20 -- accel/accel.sh@19 -- # IFS=: 00:07:43.777 05:03:20 -- accel/accel.sh@19 -- # read -r var val 00:07:43.777 05:03:20 -- accel/accel.sh@20 -- # val=Yes 00:07:43.777 05:03:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.777 05:03:20 -- accel/accel.sh@19 -- # IFS=: 00:07:43.777 05:03:20 -- accel/accel.sh@19 -- # read -r var val 00:07:43.777 05:03:20 -- accel/accel.sh@20 -- # val= 00:07:43.777 05:03:20 -- accel/accel.sh@21 
-- # case "$var" in 00:07:43.777 05:03:20 -- accel/accel.sh@19 -- # IFS=: 00:07:43.777 05:03:20 -- accel/accel.sh@19 -- # read -r var val 00:07:43.777 05:03:20 -- accel/accel.sh@20 -- # val= 00:07:43.777 05:03:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.777 05:03:20 -- accel/accel.sh@19 -- # IFS=: 00:07:43.777 05:03:20 -- accel/accel.sh@19 -- # read -r var val 00:07:45.152 05:03:22 -- accel/accel.sh@20 -- # val= 00:07:45.152 05:03:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.152 05:03:22 -- accel/accel.sh@19 -- # IFS=: 00:07:45.152 05:03:22 -- accel/accel.sh@19 -- # read -r var val 00:07:45.152 05:03:22 -- accel/accel.sh@20 -- # val= 00:07:45.152 05:03:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.152 05:03:22 -- accel/accel.sh@19 -- # IFS=: 00:07:45.152 05:03:22 -- accel/accel.sh@19 -- # read -r var val 00:07:45.152 05:03:22 -- accel/accel.sh@20 -- # val= 00:07:45.152 05:03:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.152 05:03:22 -- accel/accel.sh@19 -- # IFS=: 00:07:45.152 05:03:22 -- accel/accel.sh@19 -- # read -r var val 00:07:45.152 05:03:22 -- accel/accel.sh@20 -- # val= 00:07:45.152 05:03:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.152 05:03:22 -- accel/accel.sh@19 -- # IFS=: 00:07:45.152 05:03:22 -- accel/accel.sh@19 -- # read -r var val 00:07:45.152 05:03:22 -- accel/accel.sh@20 -- # val= 00:07:45.152 05:03:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.152 05:03:22 -- accel/accel.sh@19 -- # IFS=: 00:07:45.152 05:03:22 -- accel/accel.sh@19 -- # read -r var val 00:07:45.152 05:03:22 -- accel/accel.sh@20 -- # val= 00:07:45.152 05:03:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.152 05:03:22 -- accel/accel.sh@19 -- # IFS=: 00:07:45.152 05:03:22 -- accel/accel.sh@19 -- # read -r var val 00:07:45.152 05:03:22 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:45.152 05:03:22 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:45.152 05:03:22 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:45.152 
00:07:45.152 real 0m1.404s 00:07:45.152 user 0m1.262s 00:07:45.152 sys 0m0.144s 00:07:45.152 05:03:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:45.152 05:03:22 -- common/autotest_common.sh@10 -- # set +x 00:07:45.152 ************************************ 00:07:45.152 END TEST accel_fill 00:07:45.152 ************************************ 00:07:45.152 05:03:22 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:45.152 05:03:22 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:45.152 05:03:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:45.152 05:03:22 -- common/autotest_common.sh@10 -- # set +x 00:07:45.152 ************************************ 00:07:45.152 START TEST accel_copy_crc32c 00:07:45.152 ************************************ 00:07:45.152 05:03:22 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:07:45.152 05:03:22 -- accel/accel.sh@16 -- # local accel_opc 00:07:45.152 05:03:22 -- accel/accel.sh@17 -- # local accel_module 00:07:45.152 05:03:22 -- accel/accel.sh@19 -- # IFS=: 00:07:45.152 05:03:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:45.152 05:03:22 -- accel/accel.sh@19 -- # read -r var val 00:07:45.152 05:03:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:45.152 05:03:22 -- accel/accel.sh@12 -- # build_accel_config 00:07:45.152 05:03:22 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:45.152 05:03:22 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:45.152 05:03:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.152 05:03:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.152 05:03:22 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:45.152 05:03:22 -- accel/accel.sh@40 -- # local IFS=, 00:07:45.152 05:03:22 -- accel/accel.sh@41 -- # jq -r . 
00:07:45.152 [2024-04-24 05:03:22.197258] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:07:45.152 [2024-04-24 05:03:22.197321] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1775270 ] 00:07:45.152 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.152 [2024-04-24 05:03:22.229263] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:45.152 [2024-04-24 05:03:22.259216] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.152 [2024-04-24 05:03:22.350146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.152 05:03:22 -- accel/accel.sh@20 -- # val= 00:07:45.152 05:03:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.152 05:03:22 -- accel/accel.sh@19 -- # IFS=: 00:07:45.152 05:03:22 -- accel/accel.sh@19 -- # read -r var val 00:07:45.152 05:03:22 -- accel/accel.sh@20 -- # val= 00:07:45.152 05:03:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.152 05:03:22 -- accel/accel.sh@19 -- # IFS=: 00:07:45.152 05:03:22 -- accel/accel.sh@19 -- # read -r var val 00:07:45.152 05:03:22 -- accel/accel.sh@20 -- # val=0x1 00:07:45.152 05:03:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.152 05:03:22 -- accel/accel.sh@19 -- # IFS=: 00:07:45.152 05:03:22 -- accel/accel.sh@19 -- # read -r var val 00:07:45.152 05:03:22 -- accel/accel.sh@20 -- # val= 00:07:45.152 05:03:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.152 05:03:22 -- accel/accel.sh@19 -- # IFS=: 00:07:45.152 05:03:22 -- accel/accel.sh@19 -- # read -r var val 00:07:45.152 05:03:22 -- accel/accel.sh@20 -- # val= 00:07:45.153 05:03:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.153 05:03:22 -- accel/accel.sh@19 -- # IFS=: 00:07:45.153 05:03:22 -- 
accel/accel.sh@19 -- # read -r var val 00:07:45.153 05:03:22 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:45.153 05:03:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.153 05:03:22 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:45.153 05:03:22 -- accel/accel.sh@19 -- # IFS=: 00:07:45.153 05:03:22 -- accel/accel.sh@19 -- # read -r var val 00:07:45.153 05:03:22 -- accel/accel.sh@20 -- # val=0 00:07:45.153 05:03:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.153 05:03:22 -- accel/accel.sh@19 -- # IFS=: 00:07:45.153 05:03:22 -- accel/accel.sh@19 -- # read -r var val 00:07:45.153 05:03:22 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:45.153 05:03:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.153 05:03:22 -- accel/accel.sh@19 -- # IFS=: 00:07:45.153 05:03:22 -- accel/accel.sh@19 -- # read -r var val 00:07:45.153 05:03:22 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:45.153 05:03:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.153 05:03:22 -- accel/accel.sh@19 -- # IFS=: 00:07:45.153 05:03:22 -- accel/accel.sh@19 -- # read -r var val 00:07:45.153 05:03:22 -- accel/accel.sh@20 -- # val= 00:07:45.153 05:03:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.153 05:03:22 -- accel/accel.sh@19 -- # IFS=: 00:07:45.153 05:03:22 -- accel/accel.sh@19 -- # read -r var val 00:07:45.153 05:03:22 -- accel/accel.sh@20 -- # val=software 00:07:45.153 05:03:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.153 05:03:22 -- accel/accel.sh@22 -- # accel_module=software 00:07:45.153 05:03:22 -- accel/accel.sh@19 -- # IFS=: 00:07:45.153 05:03:22 -- accel/accel.sh@19 -- # read -r var val 00:07:45.153 05:03:22 -- accel/accel.sh@20 -- # val=32 00:07:45.153 05:03:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.153 05:03:22 -- accel/accel.sh@19 -- # IFS=: 00:07:45.153 05:03:22 -- accel/accel.sh@19 -- # read -r var val 00:07:45.153 05:03:22 -- accel/accel.sh@20 -- # val=32 00:07:45.153 05:03:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.153 
05:03:22 -- accel/accel.sh@19 -- # IFS=: 00:07:45.153 05:03:22 -- accel/accel.sh@19 -- # read -r var val 00:07:45.153 05:03:22 -- accel/accel.sh@20 -- # val=1 00:07:45.153 05:03:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.153 05:03:22 -- accel/accel.sh@19 -- # IFS=: 00:07:45.153 05:03:22 -- accel/accel.sh@19 -- # read -r var val 00:07:45.153 05:03:22 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:45.153 05:03:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.153 05:03:22 -- accel/accel.sh@19 -- # IFS=: 00:07:45.153 05:03:22 -- accel/accel.sh@19 -- # read -r var val 00:07:45.153 05:03:22 -- accel/accel.sh@20 -- # val=Yes 00:07:45.153 05:03:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.153 05:03:22 -- accel/accel.sh@19 -- # IFS=: 00:07:45.153 05:03:22 -- accel/accel.sh@19 -- # read -r var val 00:07:45.153 05:03:22 -- accel/accel.sh@20 -- # val= 00:07:45.153 05:03:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.153 05:03:22 -- accel/accel.sh@19 -- # IFS=: 00:07:45.153 05:03:22 -- accel/accel.sh@19 -- # read -r var val 00:07:45.153 05:03:22 -- accel/accel.sh@20 -- # val= 00:07:45.153 05:03:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.153 05:03:22 -- accel/accel.sh@19 -- # IFS=: 00:07:45.153 05:03:22 -- accel/accel.sh@19 -- # read -r var val 00:07:46.527 05:03:23 -- accel/accel.sh@20 -- # val= 00:07:46.527 05:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.527 05:03:23 -- accel/accel.sh@19 -- # IFS=: 00:07:46.527 05:03:23 -- accel/accel.sh@19 -- # read -r var val 00:07:46.527 05:03:23 -- accel/accel.sh@20 -- # val= 00:07:46.527 05:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.527 05:03:23 -- accel/accel.sh@19 -- # IFS=: 00:07:46.527 05:03:23 -- accel/accel.sh@19 -- # read -r var val 00:07:46.527 05:03:23 -- accel/accel.sh@20 -- # val= 00:07:46.527 05:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.527 05:03:23 -- accel/accel.sh@19 -- # IFS=: 00:07:46.527 05:03:23 -- accel/accel.sh@19 -- # read -r var val 
00:07:46.527 05:03:23 -- accel/accel.sh@20 -- # val=
00:07:46.527 05:03:23 -- accel/accel.sh@21 -- # case "$var" in
00:07:46.527 05:03:23 -- accel/accel.sh@19 -- # IFS=:
00:07:46.527 05:03:23 -- accel/accel.sh@19 -- # read -r var val
00:07:46.527 05:03:23 -- accel/accel.sh@20 -- # val=
00:07:46.527 05:03:23 -- accel/accel.sh@21 -- # case "$var" in
00:07:46.527 05:03:23 -- accel/accel.sh@19 -- # IFS=:
00:07:46.527 05:03:23 -- accel/accel.sh@19 -- # read -r var val
00:07:46.527 05:03:23 -- accel/accel.sh@20 -- # val=
00:07:46.527 05:03:23 -- accel/accel.sh@21 -- # case "$var" in
00:07:46.527 05:03:23 -- accel/accel.sh@19 -- # IFS=:
00:07:46.527 05:03:23 -- accel/accel.sh@19 -- # read -r var val
00:07:46.527 05:03:23 -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:46.527 05:03:23 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:07:46.527 05:03:23 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:46.527
00:07:46.527 real 0m1.406s
00:07:46.527 user 0m1.262s
00:07:46.527 sys 0m0.146s
00:07:46.527 05:03:23 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:07:46.527 05:03:23 -- common/autotest_common.sh@10 -- # set +x
00:07:46.527 ************************************
00:07:46.527 END TEST accel_copy_crc32c
00:07:46.527 ************************************
00:07:46.527 05:03:23 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
00:07:46.527 05:03:23 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:07:46.527 05:03:23 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:46.527 05:03:23 -- common/autotest_common.sh@10 -- # set +x
00:07:46.527 ************************************
00:07:46.527 START TEST accel_copy_crc32c_C2
00:07:46.527 ************************************
00:07:46.527 05:03:23 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2
00:07:46.527 05:03:23 -- accel/accel.sh@16 -- # local accel_opc
00:07:46.527 05:03:23 -- accel/accel.sh@17 -- # local accel_module
00:07:46.527 05:03:23 -- accel/accel.sh@19 -- # IFS=:
00:07:46.527 05:03:23 -- accel/accel.sh@19 -- # read -r var val
00:07:46.527 05:03:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:07:46.527 05:03:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:07:46.527 05:03:23 -- accel/accel.sh@12 -- # build_accel_config
00:07:46.527 05:03:23 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:46.527 05:03:23 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:46.527 05:03:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:46.527 05:03:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:46.527 05:03:23 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:46.527 05:03:23 -- accel/accel.sh@40 -- # local IFS=,
00:07:46.527 05:03:23 -- accel/accel.sh@41 -- # jq -r .
00:07:46.527 [2024-04-24 05:03:23.724946] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization...
00:07:46.527 [2024-04-24 05:03:23.725039] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1775430 ]
00:07:46.527 EAL: No free 2048 kB hugepages reported on node 1
00:07:46.527 [2024-04-24 05:03:23.758733] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
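The repeated `IFS=:` / `read -r var val` / `case "$var"` lines in this trace come from accel.sh parsing the config dump that accel_perf writes to the descriptor passed with `-c /dev/fd/62`. A minimal, self-contained sketch of that parsing pattern follows; the function name and the sample input are hypothetical illustrations, not taken from this run:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the "IFS=: read -r var val" loop traced above:
# split each "key: value" line on ':' and latch the keys of interest.
parse_config() {
  local accel_opc="" accel_module=""
  while IFS=: read -r var val; do
    case "$var" in
      *opcode*) accel_opc=$(echo $val) ;;    # unquoted echo trims whitespace
      *module*) accel_module=$(echo $val) ;;
    esac
  done
  echo "$accel_opc $accel_module"
}

# Hypothetical input; the real dump comes from accel_perf via /dev/fd/62.
printf 'opcode: copy_crc32c\nmodule: software\n' | parse_config
# prints: copy_crc32c software
```

Setting `IFS=:` only for the `read` keeps the split local to that one command, which is why the trace shows `IFS=:` re-emitted before every `read -r var val`.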
00:07:46.527 [2024-04-24 05:03:23.788957] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.785 [2024-04-24 05:03:23.882457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.785 05:03:23 -- accel/accel.sh@20 -- # val= 00:07:46.785 05:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.785 05:03:23 -- accel/accel.sh@19 -- # IFS=: 00:07:46.785 05:03:23 -- accel/accel.sh@19 -- # read -r var val 00:07:46.785 05:03:23 -- accel/accel.sh@20 -- # val= 00:07:46.785 05:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.785 05:03:23 -- accel/accel.sh@19 -- # IFS=: 00:07:46.785 05:03:23 -- accel/accel.sh@19 -- # read -r var val 00:07:46.785 05:03:23 -- accel/accel.sh@20 -- # val=0x1 00:07:46.785 05:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.785 05:03:23 -- accel/accel.sh@19 -- # IFS=: 00:07:46.785 05:03:23 -- accel/accel.sh@19 -- # read -r var val 00:07:46.785 05:03:23 -- accel/accel.sh@20 -- # val= 00:07:46.785 05:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.785 05:03:23 -- accel/accel.sh@19 -- # IFS=: 00:07:46.785 05:03:23 -- accel/accel.sh@19 -- # read -r var val 00:07:46.785 05:03:23 -- accel/accel.sh@20 -- # val= 00:07:46.785 05:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.785 05:03:23 -- accel/accel.sh@19 -- # IFS=: 00:07:46.785 05:03:23 -- accel/accel.sh@19 -- # read -r var val 00:07:46.785 05:03:23 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:46.785 05:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.785 05:03:23 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:46.786 05:03:23 -- accel/accel.sh@19 -- # IFS=: 00:07:46.786 05:03:23 -- accel/accel.sh@19 -- # read -r var val 00:07:46.786 05:03:23 -- accel/accel.sh@20 -- # val=0 00:07:46.786 05:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.786 05:03:23 -- accel/accel.sh@19 -- # IFS=: 00:07:46.786 05:03:23 -- accel/accel.sh@19 -- # read -r var val 00:07:46.786 05:03:23 -- accel/accel.sh@20 -- # val='4096 bytes' 
00:07:46.786 05:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.786 05:03:23 -- accel/accel.sh@19 -- # IFS=: 00:07:46.786 05:03:23 -- accel/accel.sh@19 -- # read -r var val 00:07:46.786 05:03:23 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:46.786 05:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.786 05:03:23 -- accel/accel.sh@19 -- # IFS=: 00:07:46.786 05:03:23 -- accel/accel.sh@19 -- # read -r var val 00:07:46.786 05:03:23 -- accel/accel.sh@20 -- # val= 00:07:46.786 05:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.786 05:03:23 -- accel/accel.sh@19 -- # IFS=: 00:07:46.786 05:03:23 -- accel/accel.sh@19 -- # read -r var val 00:07:46.786 05:03:23 -- accel/accel.sh@20 -- # val=software 00:07:46.786 05:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.786 05:03:23 -- accel/accel.sh@22 -- # accel_module=software 00:07:46.786 05:03:23 -- accel/accel.sh@19 -- # IFS=: 00:07:46.786 05:03:23 -- accel/accel.sh@19 -- # read -r var val 00:07:46.786 05:03:23 -- accel/accel.sh@20 -- # val=32 00:07:46.786 05:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.786 05:03:23 -- accel/accel.sh@19 -- # IFS=: 00:07:46.786 05:03:23 -- accel/accel.sh@19 -- # read -r var val 00:07:46.786 05:03:23 -- accel/accel.sh@20 -- # val=32 00:07:46.786 05:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.786 05:03:23 -- accel/accel.sh@19 -- # IFS=: 00:07:46.786 05:03:23 -- accel/accel.sh@19 -- # read -r var val 00:07:46.786 05:03:23 -- accel/accel.sh@20 -- # val=1 00:07:46.786 05:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.786 05:03:23 -- accel/accel.sh@19 -- # IFS=: 00:07:46.786 05:03:23 -- accel/accel.sh@19 -- # read -r var val 00:07:46.786 05:03:23 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:46.786 05:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.786 05:03:23 -- accel/accel.sh@19 -- # IFS=: 00:07:46.786 05:03:23 -- accel/accel.sh@19 -- # read -r var val 00:07:46.786 05:03:23 -- accel/accel.sh@20 -- # val=Yes 00:07:46.786 
05:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.786 05:03:23 -- accel/accel.sh@19 -- # IFS=: 00:07:46.786 05:03:23 -- accel/accel.sh@19 -- # read -r var val 00:07:46.786 05:03:23 -- accel/accel.sh@20 -- # val= 00:07:46.786 05:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.786 05:03:23 -- accel/accel.sh@19 -- # IFS=: 00:07:46.786 05:03:23 -- accel/accel.sh@19 -- # read -r var val 00:07:46.786 05:03:23 -- accel/accel.sh@20 -- # val= 00:07:46.786 05:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.786 05:03:23 -- accel/accel.sh@19 -- # IFS=: 00:07:46.786 05:03:23 -- accel/accel.sh@19 -- # read -r var val 00:07:48.160 05:03:25 -- accel/accel.sh@20 -- # val= 00:07:48.160 05:03:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.160 05:03:25 -- accel/accel.sh@19 -- # IFS=: 00:07:48.160 05:03:25 -- accel/accel.sh@19 -- # read -r var val 00:07:48.160 05:03:25 -- accel/accel.sh@20 -- # val= 00:07:48.160 05:03:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.160 05:03:25 -- accel/accel.sh@19 -- # IFS=: 00:07:48.160 05:03:25 -- accel/accel.sh@19 -- # read -r var val 00:07:48.160 05:03:25 -- accel/accel.sh@20 -- # val= 00:07:48.160 05:03:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.160 05:03:25 -- accel/accel.sh@19 -- # IFS=: 00:07:48.160 05:03:25 -- accel/accel.sh@19 -- # read -r var val 00:07:48.160 05:03:25 -- accel/accel.sh@20 -- # val= 00:07:48.160 05:03:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.160 05:03:25 -- accel/accel.sh@19 -- # IFS=: 00:07:48.160 05:03:25 -- accel/accel.sh@19 -- # read -r var val 00:07:48.160 05:03:25 -- accel/accel.sh@20 -- # val= 00:07:48.160 05:03:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.160 05:03:25 -- accel/accel.sh@19 -- # IFS=: 00:07:48.160 05:03:25 -- accel/accel.sh@19 -- # read -r var val 00:07:48.160 05:03:25 -- accel/accel.sh@20 -- # val= 00:07:48.160 05:03:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.160 05:03:25 -- accel/accel.sh@19 -- # IFS=: 00:07:48.160 05:03:25 
-- accel/accel.sh@19 -- # read -r var val
00:07:48.160 05:03:25 -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:48.160 05:03:25 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:07:48.160 05:03:25 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:48.160
00:07:48.160 real 0m1.407s
00:07:48.160 user 0m1.265s
00:07:48.160 sys 0m0.144s
00:07:48.160 05:03:25 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:07:48.160 05:03:25 -- common/autotest_common.sh@10 -- # set +x
00:07:48.160 ************************************
00:07:48.160 END TEST accel_copy_crc32c_C2
00:07:48.160 ************************************
00:07:48.160 05:03:25 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:07:48.160 05:03:25 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:07:48.160 05:03:25 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:48.160 05:03:25 -- common/autotest_common.sh@10 -- # set +x
00:07:48.160 ************************************
00:07:48.160 START TEST accel_dualcast
00:07:48.160 ************************************
00:07:48.160 05:03:25 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y
00:07:48.160 05:03:25 -- accel/accel.sh@16 -- # local accel_opc
00:07:48.160 05:03:25 -- accel/accel.sh@17 -- # local accel_module
00:07:48.160 05:03:25 -- accel/accel.sh@19 -- # IFS=:
00:07:48.160 05:03:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
00:07:48.160 05:03:25 -- accel/accel.sh@19 -- # read -r var val
00:07:48.160 05:03:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:07:48.160 05:03:25 -- accel/accel.sh@12 -- # build_accel_config
00:07:48.160 05:03:25 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:48.160 05:03:25 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:48.160 05:03:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:48.160 05:03:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:48.160 05:03:25 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:48.160 05:03:25 -- accel/accel.sh@40 -- # local IFS=, 00:07:48.160 05:03:25 -- accel/accel.sh@41 -- # jq -r . 00:07:48.160 [2024-04-24 05:03:25.249807] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:07:48.160 [2024-04-24 05:03:25.249866] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1775707 ] 00:07:48.160 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.160 [2024-04-24 05:03:25.283449] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:48.160 [2024-04-24 05:03:25.313603] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.160 [2024-04-24 05:03:25.404379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.419 05:03:25 -- accel/accel.sh@20 -- # val= 00:07:48.419 05:03:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.419 05:03:25 -- accel/accel.sh@19 -- # IFS=: 00:07:48.419 05:03:25 -- accel/accel.sh@19 -- # read -r var val 00:07:48.419 05:03:25 -- accel/accel.sh@20 -- # val= 00:07:48.419 05:03:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.419 05:03:25 -- accel/accel.sh@19 -- # IFS=: 00:07:48.419 05:03:25 -- accel/accel.sh@19 -- # read -r var val 00:07:48.419 05:03:25 -- accel/accel.sh@20 -- # val=0x1 00:07:48.419 05:03:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.419 05:03:25 -- accel/accel.sh@19 -- # IFS=: 00:07:48.419 05:03:25 -- accel/accel.sh@19 -- # read -r var val 00:07:48.419 05:03:25 -- accel/accel.sh@20 -- # val= 00:07:48.419 05:03:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.419 05:03:25 -- accel/accel.sh@19 -- # IFS=: 00:07:48.419 05:03:25 -- accel/accel.sh@19 -- # read -r var val 00:07:48.419 05:03:25 
-- accel/accel.sh@20 -- # val= 00:07:48.419 05:03:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.419 05:03:25 -- accel/accel.sh@19 -- # IFS=: 00:07:48.419 05:03:25 -- accel/accel.sh@19 -- # read -r var val 00:07:48.419 05:03:25 -- accel/accel.sh@20 -- # val=dualcast 00:07:48.419 05:03:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.419 05:03:25 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:48.419 05:03:25 -- accel/accel.sh@19 -- # IFS=: 00:07:48.419 05:03:25 -- accel/accel.sh@19 -- # read -r var val 00:07:48.419 05:03:25 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:48.419 05:03:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.419 05:03:25 -- accel/accel.sh@19 -- # IFS=: 00:07:48.419 05:03:25 -- accel/accel.sh@19 -- # read -r var val 00:07:48.419 05:03:25 -- accel/accel.sh@20 -- # val= 00:07:48.419 05:03:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.419 05:03:25 -- accel/accel.sh@19 -- # IFS=: 00:07:48.419 05:03:25 -- accel/accel.sh@19 -- # read -r var val 00:07:48.419 05:03:25 -- accel/accel.sh@20 -- # val=software 00:07:48.419 05:03:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.419 05:03:25 -- accel/accel.sh@22 -- # accel_module=software 00:07:48.419 05:03:25 -- accel/accel.sh@19 -- # IFS=: 00:07:48.419 05:03:25 -- accel/accel.sh@19 -- # read -r var val 00:07:48.419 05:03:25 -- accel/accel.sh@20 -- # val=32 00:07:48.419 05:03:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.419 05:03:25 -- accel/accel.sh@19 -- # IFS=: 00:07:48.419 05:03:25 -- accel/accel.sh@19 -- # read -r var val 00:07:48.419 05:03:25 -- accel/accel.sh@20 -- # val=32 00:07:48.419 05:03:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.419 05:03:25 -- accel/accel.sh@19 -- # IFS=: 00:07:48.419 05:03:25 -- accel/accel.sh@19 -- # read -r var val 00:07:48.419 05:03:25 -- accel/accel.sh@20 -- # val=1 00:07:48.419 05:03:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.419 05:03:25 -- accel/accel.sh@19 -- # IFS=: 00:07:48.419 05:03:25 -- accel/accel.sh@19 
-- # read -r var val 00:07:48.419 05:03:25 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:48.419 05:03:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.419 05:03:25 -- accel/accel.sh@19 -- # IFS=: 00:07:48.419 05:03:25 -- accel/accel.sh@19 -- # read -r var val 00:07:48.419 05:03:25 -- accel/accel.sh@20 -- # val=Yes 00:07:48.419 05:03:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.419 05:03:25 -- accel/accel.sh@19 -- # IFS=: 00:07:48.419 05:03:25 -- accel/accel.sh@19 -- # read -r var val 00:07:48.419 05:03:25 -- accel/accel.sh@20 -- # val= 00:07:48.419 05:03:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.419 05:03:25 -- accel/accel.sh@19 -- # IFS=: 00:07:48.419 05:03:25 -- accel/accel.sh@19 -- # read -r var val 00:07:48.419 05:03:25 -- accel/accel.sh@20 -- # val= 00:07:48.419 05:03:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.419 05:03:25 -- accel/accel.sh@19 -- # IFS=: 00:07:48.420 05:03:25 -- accel/accel.sh@19 -- # read -r var val 00:07:49.793 05:03:26 -- accel/accel.sh@20 -- # val= 00:07:49.793 05:03:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.793 05:03:26 -- accel/accel.sh@19 -- # IFS=: 00:07:49.793 05:03:26 -- accel/accel.sh@19 -- # read -r var val 00:07:49.793 05:03:26 -- accel/accel.sh@20 -- # val= 00:07:49.793 05:03:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.793 05:03:26 -- accel/accel.sh@19 -- # IFS=: 00:07:49.793 05:03:26 -- accel/accel.sh@19 -- # read -r var val 00:07:49.793 05:03:26 -- accel/accel.sh@20 -- # val= 00:07:49.793 05:03:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.793 05:03:26 -- accel/accel.sh@19 -- # IFS=: 00:07:49.793 05:03:26 -- accel/accel.sh@19 -- # read -r var val 00:07:49.793 05:03:26 -- accel/accel.sh@20 -- # val= 00:07:49.793 05:03:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.793 05:03:26 -- accel/accel.sh@19 -- # IFS=: 00:07:49.793 05:03:26 -- accel/accel.sh@19 -- # read -r var val 00:07:49.793 05:03:26 -- accel/accel.sh@20 -- # val= 00:07:49.793 05:03:26 -- 
accel/accel.sh@21 -- # case "$var" in
00:07:49.793 05:03:26 -- accel/accel.sh@19 -- # IFS=:
00:07:49.793 05:03:26 -- accel/accel.sh@19 -- # read -r var val
00:07:49.793 05:03:26 -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:49.793 05:03:26 -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:07:49.793 05:03:26 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:49.793
00:07:49.793 real 0m1.409s
00:07:49.793 user 0m1.266s
00:07:49.793 sys 0m0.145s
00:07:49.793 05:03:26 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:07:49.793 05:03:26 -- common/autotest_common.sh@10 -- # set +x
00:07:49.793 ************************************
00:07:49.793 END TEST accel_dualcast
00:07:49.793 ************************************
00:07:49.793 05:03:26 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:07:49.793 05:03:26 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:07:49.793 05:03:26 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:49.793 05:03:26 -- common/autotest_common.sh@10 -- # set +x
00:07:49.793 ************************************
00:07:49.793 START TEST accel_compare
00:07:49.793 ************************************
00:07:49.793 05:03:26 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y
00:07:49.793 05:03:26 -- accel/accel.sh@16 -- # local accel_opc
00:07:49.793 05:03:26 -- accel/accel.sh@17 -- # local accel_module
00:07:49.793 05:03:26 -- accel/accel.sh@19 -- # IFS=:
00:07:49.793 05:03:26 -- accel/accel.sh@19 -- # read -r var val
00:07:49.793 05:03:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:07:49.793 05:03:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w
compare -y 00:07:49.793 05:03:26 -- accel/accel.sh@12 -- # build_accel_config 00:07:49.793 05:03:26 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:49.793 05:03:26 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:49.793 05:03:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.793 05:03:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.793 05:03:26 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:49.793 05:03:26 -- accel/accel.sh@40 -- # local IFS=, 00:07:49.793 05:03:26 -- accel/accel.sh@41 -- # jq -r . 00:07:49.793 [2024-04-24 05:03:26.775699] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:07:49.793 [2024-04-24 05:03:26.775758] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1775883 ] 00:07:49.793 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.793 [2024-04-24 05:03:26.808025] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:49.793 [2024-04-24 05:03:26.838105] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.793 [2024-04-24 05:03:26.929052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.793 05:03:26 -- accel/accel.sh@20 -- # val= 00:07:49.793 05:03:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.793 05:03:26 -- accel/accel.sh@19 -- # IFS=: 00:07:49.793 05:03:26 -- accel/accel.sh@19 -- # read -r var val 00:07:49.793 05:03:26 -- accel/accel.sh@20 -- # val= 00:07:49.793 05:03:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.793 05:03:26 -- accel/accel.sh@19 -- # IFS=: 00:07:49.793 05:03:26 -- accel/accel.sh@19 -- # read -r var val 00:07:49.793 05:03:26 -- accel/accel.sh@20 -- # val=0x1 00:07:49.793 05:03:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.793 05:03:26 -- accel/accel.sh@19 -- # IFS=: 00:07:49.793 05:03:26 -- accel/accel.sh@19 -- # read -r var val 00:07:49.793 05:03:26 -- accel/accel.sh@20 -- # val= 00:07:49.794 05:03:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.794 05:03:26 -- accel/accel.sh@19 -- # IFS=: 00:07:49.794 05:03:26 -- accel/accel.sh@19 -- # read -r var val 00:07:49.794 05:03:26 -- accel/accel.sh@20 -- # val= 00:07:49.794 05:03:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.794 05:03:26 -- accel/accel.sh@19 -- # IFS=: 00:07:49.794 05:03:26 -- accel/accel.sh@19 -- # read -r var val 00:07:49.794 05:03:26 -- accel/accel.sh@20 -- # val=compare 00:07:49.794 05:03:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.794 05:03:26 -- accel/accel.sh@23 -- # accel_opc=compare 00:07:49.794 05:03:26 -- accel/accel.sh@19 -- # IFS=: 00:07:49.794 05:03:26 -- accel/accel.sh@19 -- # read -r var val 00:07:49.794 05:03:26 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:49.794 05:03:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.794 05:03:26 -- accel/accel.sh@19 -- # IFS=: 00:07:49.794 05:03:26 -- accel/accel.sh@19 -- # read -r var val 00:07:49.794 05:03:26 -- accel/accel.sh@20 -- # val= 
00:07:49.794 05:03:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.794 05:03:26 -- accel/accel.sh@19 -- # IFS=: 00:07:49.794 05:03:26 -- accel/accel.sh@19 -- # read -r var val 00:07:49.794 05:03:26 -- accel/accel.sh@20 -- # val=software 00:07:49.794 05:03:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.794 05:03:26 -- accel/accel.sh@22 -- # accel_module=software 00:07:49.794 05:03:26 -- accel/accel.sh@19 -- # IFS=: 00:07:49.794 05:03:26 -- accel/accel.sh@19 -- # read -r var val 00:07:49.794 05:03:26 -- accel/accel.sh@20 -- # val=32 00:07:49.794 05:03:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.794 05:03:26 -- accel/accel.sh@19 -- # IFS=: 00:07:49.794 05:03:26 -- accel/accel.sh@19 -- # read -r var val 00:07:49.794 05:03:26 -- accel/accel.sh@20 -- # val=32 00:07:49.794 05:03:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.794 05:03:26 -- accel/accel.sh@19 -- # IFS=: 00:07:49.794 05:03:26 -- accel/accel.sh@19 -- # read -r var val 00:07:49.794 05:03:26 -- accel/accel.sh@20 -- # val=1 00:07:49.794 05:03:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.794 05:03:26 -- accel/accel.sh@19 -- # IFS=: 00:07:49.794 05:03:26 -- accel/accel.sh@19 -- # read -r var val 00:07:49.794 05:03:26 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:49.794 05:03:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.794 05:03:26 -- accel/accel.sh@19 -- # IFS=: 00:07:49.794 05:03:26 -- accel/accel.sh@19 -- # read -r var val 00:07:49.794 05:03:26 -- accel/accel.sh@20 -- # val=Yes 00:07:49.794 05:03:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.794 05:03:26 -- accel/accel.sh@19 -- # IFS=: 00:07:49.794 05:03:26 -- accel/accel.sh@19 -- # read -r var val 00:07:49.794 05:03:26 -- accel/accel.sh@20 -- # val= 00:07:49.794 05:03:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.794 05:03:26 -- accel/accel.sh@19 -- # IFS=: 00:07:49.794 05:03:26 -- accel/accel.sh@19 -- # read -r var val 00:07:49.794 05:03:26 -- accel/accel.sh@20 -- # val= 00:07:49.794 05:03:26 -- 
accel/accel.sh@21 -- # case "$var" in 00:07:49.794 05:03:26 -- accel/accel.sh@19 -- # IFS=: 00:07:49.794 05:03:26 -- accel/accel.sh@19 -- # read -r var val 00:07:51.167 05:03:28 -- accel/accel.sh@20 -- # val= 00:07:51.167 05:03:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.167 05:03:28 -- accel/accel.sh@19 -- # IFS=: 00:07:51.167 05:03:28 -- accel/accel.sh@19 -- # read -r var val 00:07:51.167 05:03:28 -- accel/accel.sh@20 -- # val= 00:07:51.167 05:03:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.167 05:03:28 -- accel/accel.sh@19 -- # IFS=: 00:07:51.167 05:03:28 -- accel/accel.sh@19 -- # read -r var val 00:07:51.167 05:03:28 -- accel/accel.sh@20 -- # val= 00:07:51.167 05:03:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.167 05:03:28 -- accel/accel.sh@19 -- # IFS=: 00:07:51.167 05:03:28 -- accel/accel.sh@19 -- # read -r var val 00:07:51.167 05:03:28 -- accel/accel.sh@20 -- # val= 00:07:51.167 05:03:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.167 05:03:28 -- accel/accel.sh@19 -- # IFS=: 00:07:51.167 05:03:28 -- accel/accel.sh@19 -- # read -r var val 00:07:51.167 05:03:28 -- accel/accel.sh@20 -- # val= 00:07:51.167 05:03:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.167 05:03:28 -- accel/accel.sh@19 -- # IFS=: 00:07:51.167 05:03:28 -- accel/accel.sh@19 -- # read -r var val 00:07:51.167 05:03:28 -- accel/accel.sh@20 -- # val= 00:07:51.167 05:03:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.167 05:03:28 -- accel/accel.sh@19 -- # IFS=: 00:07:51.167 05:03:28 -- accel/accel.sh@19 -- # read -r var val 00:07:51.167 05:03:28 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:51.167 05:03:28 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:51.167 05:03:28 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:51.167 00:07:51.167 real 0m1.406s 00:07:51.167 user 0m1.256s 00:07:51.167 sys 0m0.151s 00:07:51.167 05:03:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:51.167 05:03:28 -- common/autotest_common.sh@10 -- 
# set +x
00:07:51.167 ************************************
00:07:51.167 END TEST accel_compare
00:07:51.167 ************************************
00:07:51.167 05:03:28 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:07:51.167 05:03:28 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:07:51.167 05:03:28 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:51.167 05:03:28 -- common/autotest_common.sh@10 -- # set +x
00:07:51.168 ************************************
00:07:51.168 START TEST accel_xor
00:07:51.168 ************************************
00:07:51.168 05:03:28 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y
00:07:51.168 05:03:28 -- accel/accel.sh@16 -- # local accel_opc
00:07:51.168 05:03:28 -- accel/accel.sh@17 -- # local accel_module
00:07:51.168 05:03:28 -- accel/accel.sh@19 -- # IFS=:
00:07:51.168 05:03:28 -- accel/accel.sh@19 -- # read -r var val
00:07:51.168 05:03:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y
00:07:51.168 05:03:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:07:51.168 05:03:28 -- accel/accel.sh@12 -- # build_accel_config
00:07:51.168 05:03:28 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:51.168 05:03:28 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:51.168 05:03:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:51.168 05:03:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:51.168 05:03:28 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:51.168 05:03:28 -- accel/accel.sh@40 -- # local IFS=,
00:07:51.168 05:03:28 -- accel/accel.sh@41 -- # jq -r .
00:07:51.168 [2024-04-24 05:03:28.309958] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization...
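Each `START TEST` / `END TEST` banner pair with its `real`/`user`/`sys` timing in this log is printed by `run_test` from common/autotest_common.sh. The real helper also manages xtrace state and exit-code plumbing; the sketch below, with a hypothetical helper name, only illustrates the banner-and-timing shape seen here:

```shell
#!/usr/bin/env bash
# Illustrative sketch only -- not SPDK's actual run_test implementation.
run_test_sketch() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  local t0=$SECONDS
  "$@"                        # run the test command itself, e.g. accel_test
  local rc=$?
  echo "real $((SECONDS - t0))s"
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return $rc
}

# Usage: wrap any command; 'true' stands in for a real test here.
run_test_sketch accel_dummy true
```

Propagating the wrapped command's return code lets a caller chain `run_test` invocations and still fail the build when one test fails.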
00:07:51.168 [2024-04-24 05:03:28.310017] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1776051 ] 00:07:51.168 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.168 [2024-04-24 05:03:28.342358] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:51.168 [2024-04-24 05:03:28.374301] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.426 [2024-04-24 05:03:28.468125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.426 05:03:28 -- accel/accel.sh@20 -- # val= 00:07:51.426 05:03:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.426 05:03:28 -- accel/accel.sh@19 -- # IFS=: 00:07:51.426 05:03:28 -- accel/accel.sh@19 -- # read -r var val 00:07:51.426 05:03:28 -- accel/accel.sh@20 -- # val= 00:07:51.426 05:03:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.426 05:03:28 -- accel/accel.sh@19 -- # IFS=: 00:07:51.426 05:03:28 -- accel/accel.sh@19 -- # read -r var val 00:07:51.426 05:03:28 -- accel/accel.sh@20 -- # val=0x1 00:07:51.426 05:03:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.426 05:03:28 -- accel/accel.sh@19 -- # IFS=: 00:07:51.426 05:03:28 -- accel/accel.sh@19 -- # read -r var val 00:07:51.426 05:03:28 -- accel/accel.sh@20 -- # val= 00:07:51.426 05:03:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.426 05:03:28 -- accel/accel.sh@19 -- # IFS=: 00:07:51.426 05:03:28 -- accel/accel.sh@19 -- # read -r var val 00:07:51.426 05:03:28 -- accel/accel.sh@20 -- # val= 00:07:51.427 05:03:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.427 05:03:28 -- accel/accel.sh@19 -- # IFS=: 00:07:51.427 05:03:28 -- accel/accel.sh@19 -- # read -r var val 00:07:51.427 05:03:28 -- accel/accel.sh@20 -- # val=xor 00:07:51.427 05:03:28 -- 
accel/accel.sh@21 -- # case "$var" in 00:07:51.427 05:03:28 -- accel/accel.sh@23 -- # accel_opc=xor 00:07:51.427 05:03:28 -- accel/accel.sh@19 -- # IFS=: 00:07:51.427 05:03:28 -- accel/accel.sh@19 -- # read -r var val 00:07:51.427 05:03:28 -- accel/accel.sh@20 -- # val=2 00:07:51.427 05:03:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.427 05:03:28 -- accel/accel.sh@19 -- # IFS=: 00:07:51.427 05:03:28 -- accel/accel.sh@19 -- # read -r var val 00:07:51.427 05:03:28 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:51.427 05:03:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.427 05:03:28 -- accel/accel.sh@19 -- # IFS=: 00:07:51.427 05:03:28 -- accel/accel.sh@19 -- # read -r var val 00:07:51.427 05:03:28 -- accel/accel.sh@20 -- # val= 00:07:51.427 05:03:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.427 05:03:28 -- accel/accel.sh@19 -- # IFS=: 00:07:51.427 05:03:28 -- accel/accel.sh@19 -- # read -r var val 00:07:51.427 05:03:28 -- accel/accel.sh@20 -- # val=software 00:07:51.427 05:03:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.427 05:03:28 -- accel/accel.sh@22 -- # accel_module=software 00:07:51.427 05:03:28 -- accel/accel.sh@19 -- # IFS=: 00:07:51.427 05:03:28 -- accel/accel.sh@19 -- # read -r var val 00:07:51.427 05:03:28 -- accel/accel.sh@20 -- # val=32 00:07:51.427 05:03:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.427 05:03:28 -- accel/accel.sh@19 -- # IFS=: 00:07:51.427 05:03:28 -- accel/accel.sh@19 -- # read -r var val 00:07:51.427 05:03:28 -- accel/accel.sh@20 -- # val=32 00:07:51.427 05:03:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.427 05:03:28 -- accel/accel.sh@19 -- # IFS=: 00:07:51.427 05:03:28 -- accel/accel.sh@19 -- # read -r var val 00:07:51.427 05:03:28 -- accel/accel.sh@20 -- # val=1 00:07:51.427 05:03:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.427 05:03:28 -- accel/accel.sh@19 -- # IFS=: 00:07:51.427 05:03:28 -- accel/accel.sh@19 -- # read -r var val 00:07:51.427 05:03:28 -- accel/accel.sh@20 -- # 
val='1 seconds' 00:07:51.427 05:03:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.427 05:03:28 -- accel/accel.sh@19 -- # IFS=: 00:07:51.427 05:03:28 -- accel/accel.sh@19 -- # read -r var val 00:07:51.427 05:03:28 -- accel/accel.sh@20 -- # val=Yes 00:07:51.427 05:03:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.427 05:03:28 -- accel/accel.sh@19 -- # IFS=: 00:07:51.427 05:03:28 -- accel/accel.sh@19 -- # read -r var val 00:07:51.427 05:03:28 -- accel/accel.sh@20 -- # val= 00:07:51.427 05:03:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.427 05:03:28 -- accel/accel.sh@19 -- # IFS=: 00:07:51.427 05:03:28 -- accel/accel.sh@19 -- # read -r var val 00:07:51.427 05:03:28 -- accel/accel.sh@20 -- # val= 00:07:51.427 05:03:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.427 05:03:28 -- accel/accel.sh@19 -- # IFS=: 00:07:51.427 05:03:28 -- accel/accel.sh@19 -- # read -r var val 00:07:52.803 05:03:29 -- accel/accel.sh@20 -- # val= 00:07:52.803 05:03:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.803 05:03:29 -- accel/accel.sh@19 -- # IFS=: 00:07:52.803 05:03:29 -- accel/accel.sh@19 -- # read -r var val 00:07:52.803 05:03:29 -- accel/accel.sh@20 -- # val= 00:07:52.803 05:03:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.803 05:03:29 -- accel/accel.sh@19 -- # IFS=: 00:07:52.803 05:03:29 -- accel/accel.sh@19 -- # read -r var val 00:07:52.803 05:03:29 -- accel/accel.sh@20 -- # val= 00:07:52.803 05:03:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.803 05:03:29 -- accel/accel.sh@19 -- # IFS=: 00:07:52.803 05:03:29 -- accel/accel.sh@19 -- # read -r var val 00:07:52.803 05:03:29 -- accel/accel.sh@20 -- # val= 00:07:52.803 05:03:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.803 05:03:29 -- accel/accel.sh@19 -- # IFS=: 00:07:52.803 05:03:29 -- accel/accel.sh@19 -- # read -r var val 00:07:52.803 05:03:29 -- accel/accel.sh@20 -- # val= 00:07:52.803 05:03:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.803 05:03:29 -- accel/accel.sh@19 -- 
# IFS=: 00:07:52.803 05:03:29 -- accel/accel.sh@19 -- # read -r var val 00:07:52.803 05:03:29 -- accel/accel.sh@20 -- # val= 00:07:52.803 05:03:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.803 05:03:29 -- accel/accel.sh@19 -- # IFS=: 00:07:52.803 05:03:29 -- accel/accel.sh@19 -- # read -r var val 00:07:52.803 05:03:29 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:52.803 05:03:29 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:52.803 05:03:29 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:52.803 00:07:52.803 real 0m1.412s 00:07:52.803 user 0m1.262s 00:07:52.803 sys 0m0.152s 00:07:52.803 05:03:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:52.803 05:03:29 -- common/autotest_common.sh@10 -- # set +x 00:07:52.803 ************************************ 00:07:52.803 END TEST accel_xor 00:07:52.803 ************************************ 00:07:52.803 05:03:29 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:52.803 05:03:29 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:52.803 05:03:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:52.803 05:03:29 -- common/autotest_common.sh@10 -- # set +x 00:07:52.803 ************************************ 00:07:52.803 START TEST accel_xor 00:07:52.803 ************************************ 00:07:52.803 05:03:29 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:07:52.803 05:03:29 -- accel/accel.sh@16 -- # local accel_opc 00:07:52.803 05:03:29 -- accel/accel.sh@17 -- # local accel_module 00:07:52.803 05:03:29 -- accel/accel.sh@19 -- # IFS=: 00:07:52.803 05:03:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:52.803 05:03:29 -- accel/accel.sh@19 -- # read -r var val 00:07:52.803 05:03:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:52.803 05:03:29 -- accel/accel.sh@12 -- # build_accel_config 00:07:52.803 
05:03:29 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:52.803 05:03:29 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:52.803 05:03:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.803 05:03:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.803 05:03:29 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:52.803 05:03:29 -- accel/accel.sh@40 -- # local IFS=, 00:07:52.803 05:03:29 -- accel/accel.sh@41 -- # jq -r . 00:07:52.803 [2024-04-24 05:03:29.838924] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:07:52.803 [2024-04-24 05:03:29.839000] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1776322 ] 00:07:52.803 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.803 [2024-04-24 05:03:29.872412] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:52.803 [2024-04-24 05:03:29.904611] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.803 [2024-04-24 05:03:29.995584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.803 05:03:30 -- accel/accel.sh@20 -- # val= 00:07:52.803 05:03:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.803 05:03:30 -- accel/accel.sh@19 -- # IFS=: 00:07:52.803 05:03:30 -- accel/accel.sh@19 -- # read -r var val 00:07:52.803 05:03:30 -- accel/accel.sh@20 -- # val= 00:07:52.803 05:03:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.803 05:03:30 -- accel/accel.sh@19 -- # IFS=: 00:07:52.803 05:03:30 -- accel/accel.sh@19 -- # read -r var val 00:07:52.803 05:03:30 -- accel/accel.sh@20 -- # val=0x1 00:07:52.803 05:03:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.803 05:03:30 -- accel/accel.sh@19 -- # IFS=: 00:07:52.803 05:03:30 -- accel/accel.sh@19 -- # read -r var val 00:07:52.803 05:03:30 -- accel/accel.sh@20 -- # val= 00:07:52.803 05:03:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.803 05:03:30 -- accel/accel.sh@19 -- # IFS=: 00:07:52.803 05:03:30 -- accel/accel.sh@19 -- # read -r var val 00:07:52.803 05:03:30 -- accel/accel.sh@20 -- # val= 00:07:52.803 05:03:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.803 05:03:30 -- accel/accel.sh@19 -- # IFS=: 00:07:52.803 05:03:30 -- accel/accel.sh@19 -- # read -r var val 00:07:52.803 05:03:30 -- accel/accel.sh@20 -- # val=xor 00:07:52.803 05:03:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.803 05:03:30 -- accel/accel.sh@23 -- # accel_opc=xor 00:07:52.803 05:03:30 -- accel/accel.sh@19 -- # IFS=: 00:07:52.803 05:03:30 -- accel/accel.sh@19 -- # read -r var val 00:07:52.803 05:03:30 -- accel/accel.sh@20 -- # val=3 00:07:52.803 05:03:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.803 05:03:30 -- accel/accel.sh@19 -- # IFS=: 00:07:52.803 05:03:30 -- accel/accel.sh@19 -- # read -r var val 00:07:52.803 05:03:30 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:52.803 
05:03:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.804 05:03:30 -- accel/accel.sh@19 -- # IFS=: 00:07:52.804 05:03:30 -- accel/accel.sh@19 -- # read -r var val 00:07:52.804 05:03:30 -- accel/accel.sh@20 -- # val= 00:07:52.804 05:03:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.804 05:03:30 -- accel/accel.sh@19 -- # IFS=: 00:07:52.804 05:03:30 -- accel/accel.sh@19 -- # read -r var val 00:07:52.804 05:03:30 -- accel/accel.sh@20 -- # val=software 00:07:52.804 05:03:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.804 05:03:30 -- accel/accel.sh@22 -- # accel_module=software 00:07:52.804 05:03:30 -- accel/accel.sh@19 -- # IFS=: 00:07:52.804 05:03:30 -- accel/accel.sh@19 -- # read -r var val 00:07:52.804 05:03:30 -- accel/accel.sh@20 -- # val=32 00:07:52.804 05:03:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.804 05:03:30 -- accel/accel.sh@19 -- # IFS=: 00:07:52.804 05:03:30 -- accel/accel.sh@19 -- # read -r var val 00:07:52.804 05:03:30 -- accel/accel.sh@20 -- # val=32 00:07:52.804 05:03:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.804 05:03:30 -- accel/accel.sh@19 -- # IFS=: 00:07:52.804 05:03:30 -- accel/accel.sh@19 -- # read -r var val 00:07:52.804 05:03:30 -- accel/accel.sh@20 -- # val=1 00:07:52.804 05:03:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.804 05:03:30 -- accel/accel.sh@19 -- # IFS=: 00:07:52.804 05:03:30 -- accel/accel.sh@19 -- # read -r var val 00:07:52.804 05:03:30 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:52.804 05:03:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.804 05:03:30 -- accel/accel.sh@19 -- # IFS=: 00:07:52.804 05:03:30 -- accel/accel.sh@19 -- # read -r var val 00:07:52.804 05:03:30 -- accel/accel.sh@20 -- # val=Yes 00:07:52.804 05:03:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.804 05:03:30 -- accel/accel.sh@19 -- # IFS=: 00:07:52.804 05:03:30 -- accel/accel.sh@19 -- # read -r var val 00:07:52.804 05:03:30 -- accel/accel.sh@20 -- # val= 00:07:52.804 05:03:30 -- accel/accel.sh@21 
-- # case "$var" in 00:07:52.804 05:03:30 -- accel/accel.sh@19 -- # IFS=: 00:07:52.804 05:03:30 -- accel/accel.sh@19 -- # read -r var val 00:07:52.804 05:03:30 -- accel/accel.sh@20 -- # val= 00:07:52.804 05:03:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.804 05:03:30 -- accel/accel.sh@19 -- # IFS=: 00:07:52.804 05:03:30 -- accel/accel.sh@19 -- # read -r var val 00:07:54.175 05:03:31 -- accel/accel.sh@20 -- # val= 00:07:54.175 05:03:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.175 05:03:31 -- accel/accel.sh@19 -- # IFS=: 00:07:54.175 05:03:31 -- accel/accel.sh@19 -- # read -r var val 00:07:54.175 05:03:31 -- accel/accel.sh@20 -- # val= 00:07:54.175 05:03:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.175 05:03:31 -- accel/accel.sh@19 -- # IFS=: 00:07:54.175 05:03:31 -- accel/accel.sh@19 -- # read -r var val 00:07:54.175 05:03:31 -- accel/accel.sh@20 -- # val= 00:07:54.175 05:03:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.175 05:03:31 -- accel/accel.sh@19 -- # IFS=: 00:07:54.175 05:03:31 -- accel/accel.sh@19 -- # read -r var val 00:07:54.175 05:03:31 -- accel/accel.sh@20 -- # val= 00:07:54.175 05:03:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.175 05:03:31 -- accel/accel.sh@19 -- # IFS=: 00:07:54.175 05:03:31 -- accel/accel.sh@19 -- # read -r var val 00:07:54.175 05:03:31 -- accel/accel.sh@20 -- # val= 00:07:54.175 05:03:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.175 05:03:31 -- accel/accel.sh@19 -- # IFS=: 00:07:54.175 05:03:31 -- accel/accel.sh@19 -- # read -r var val 00:07:54.175 05:03:31 -- accel/accel.sh@20 -- # val= 00:07:54.175 05:03:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.175 05:03:31 -- accel/accel.sh@19 -- # IFS=: 00:07:54.175 05:03:31 -- accel/accel.sh@19 -- # read -r var val 00:07:54.175 05:03:31 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:54.175 05:03:31 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:54.175 05:03:31 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:54.175 
00:07:54.175 real 0m1.410s 00:07:54.175 user 0m1.269s 00:07:54.175 sys 0m0.147s 00:07:54.175 05:03:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:54.176 05:03:31 -- common/autotest_common.sh@10 -- # set +x 00:07:54.176 ************************************ 00:07:54.176 END TEST accel_xor 00:07:54.176 ************************************ 00:07:54.176 05:03:31 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:54.176 05:03:31 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:54.176 05:03:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:54.176 05:03:31 -- common/autotest_common.sh@10 -- # set +x 00:07:54.176 ************************************ 00:07:54.176 START TEST accel_dif_verify 00:07:54.176 ************************************ 00:07:54.176 05:03:31 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:07:54.176 05:03:31 -- accel/accel.sh@16 -- # local accel_opc 00:07:54.176 05:03:31 -- accel/accel.sh@17 -- # local accel_module 00:07:54.176 05:03:31 -- accel/accel.sh@19 -- # IFS=: 00:07:54.176 05:03:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:54.176 05:03:31 -- accel/accel.sh@19 -- # read -r var val 00:07:54.176 05:03:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:54.176 05:03:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:54.176 05:03:31 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:54.176 05:03:31 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:54.176 05:03:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:54.176 05:03:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:54.176 05:03:31 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:54.176 05:03:31 -- accel/accel.sh@40 -- # local IFS=, 00:07:54.176 05:03:31 -- accel/accel.sh@41 -- # jq -r . 
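The two accel_xor runs traced above (`-w xor` with `-x 2` and `-x 3` source buffers, 4096-byte transfers, queue depth 32, software module) exercise the buffer-XOR workload. As a rough sketch of what that operation computes — illustrative only, not accel_perf's or SPDK's actual code, with names invented here:

```python
import os

def xor_buffers(*srcs: bytes) -> bytes:
    """XOR corresponding bytes of several equal-length source buffers,
    as the software xor workload does (hypothetical helper, not SPDK API)."""
    n = len(srcs[0])
    assert all(len(s) == n for s in srcs), "all sources must be equal length"
    out = bytearray(n)
    for s in srcs:
        for i, b in enumerate(s):
            out[i] ^= b
    return bytes(out)

# Three 4096-byte sources, matching the '-x 3' / '4096 bytes' values in the
# trace (the data itself is random sample input, not SPDK test data).
srcs = [os.urandom(4096) for _ in range(3)]
dst = xor_buffers(*srcs)

# XOR is self-inverse: folding the destination back over any two sources
# recovers the third, which is why the workload is also usable for parity.
assert xor_buffers(dst, srcs[0], srcs[1]) == srcs[2]
```

The self-inverse property checked at the end is the reason the same primitive serves both checksum-style validation and RAID-like parity reconstruction.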
00:07:54.176 [2024-04-24 05:03:31.360023] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:07:54.176 [2024-04-24 05:03:31.360086] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1776488 ] 00:07:54.176 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.176 [2024-04-24 05:03:31.391861] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:54.176 [2024-04-24 05:03:31.422054] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.433 [2024-04-24 05:03:31.512667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.433 05:03:31 -- accel/accel.sh@20 -- # val= 00:07:54.433 05:03:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.433 05:03:31 -- accel/accel.sh@19 -- # IFS=: 00:07:54.433 05:03:31 -- accel/accel.sh@19 -- # read -r var val 00:07:54.433 05:03:31 -- accel/accel.sh@20 -- # val= 00:07:54.433 05:03:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.434 05:03:31 -- accel/accel.sh@19 -- # IFS=: 00:07:54.434 05:03:31 -- accel/accel.sh@19 -- # read -r var val 00:07:54.434 05:03:31 -- accel/accel.sh@20 -- # val=0x1 00:07:54.434 05:03:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.434 05:03:31 -- accel/accel.sh@19 -- # IFS=: 00:07:54.434 05:03:31 -- accel/accel.sh@19 -- # read -r var val 00:07:54.434 05:03:31 -- accel/accel.sh@20 -- # val= 00:07:54.434 05:03:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.434 05:03:31 -- accel/accel.sh@19 -- # IFS=: 00:07:54.434 05:03:31 -- accel/accel.sh@19 -- # read -r var val 00:07:54.434 05:03:31 -- accel/accel.sh@20 -- # val= 00:07:54.434 05:03:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.434 05:03:31 -- accel/accel.sh@19 -- # IFS=: 00:07:54.434 05:03:31 -- 
accel/accel.sh@19 -- # read -r var val 00:07:54.434 05:03:31 -- accel/accel.sh@20 -- # val=dif_verify 00:07:54.434 05:03:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.434 05:03:31 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:54.434 05:03:31 -- accel/accel.sh@19 -- # IFS=: 00:07:54.434 05:03:31 -- accel/accel.sh@19 -- # read -r var val 00:07:54.434 05:03:31 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:54.434 05:03:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.434 05:03:31 -- accel/accel.sh@19 -- # IFS=: 00:07:54.434 05:03:31 -- accel/accel.sh@19 -- # read -r var val 00:07:54.434 05:03:31 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:54.434 05:03:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.434 05:03:31 -- accel/accel.sh@19 -- # IFS=: 00:07:54.434 05:03:31 -- accel/accel.sh@19 -- # read -r var val 00:07:54.434 05:03:31 -- accel/accel.sh@20 -- # val='512 bytes' 00:07:54.434 05:03:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.434 05:03:31 -- accel/accel.sh@19 -- # IFS=: 00:07:54.434 05:03:31 -- accel/accel.sh@19 -- # read -r var val 00:07:54.434 05:03:31 -- accel/accel.sh@20 -- # val='8 bytes' 00:07:54.434 05:03:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.434 05:03:31 -- accel/accel.sh@19 -- # IFS=: 00:07:54.434 05:03:31 -- accel/accel.sh@19 -- # read -r var val 00:07:54.434 05:03:31 -- accel/accel.sh@20 -- # val= 00:07:54.434 05:03:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.434 05:03:31 -- accel/accel.sh@19 -- # IFS=: 00:07:54.434 05:03:31 -- accel/accel.sh@19 -- # read -r var val 00:07:54.434 05:03:31 -- accel/accel.sh@20 -- # val=software 00:07:54.434 05:03:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.434 05:03:31 -- accel/accel.sh@22 -- # accel_module=software 00:07:54.434 05:03:31 -- accel/accel.sh@19 -- # IFS=: 00:07:54.434 05:03:31 -- accel/accel.sh@19 -- # read -r var val 00:07:54.434 05:03:31 -- accel/accel.sh@20 -- # val=32 00:07:54.434 05:03:31 -- accel/accel.sh@21 -- # case "$var" in 
00:07:54.434 05:03:31 -- accel/accel.sh@19 -- # IFS=: 00:07:54.434 05:03:31 -- accel/accel.sh@19 -- # read -r var val 00:07:54.434 05:03:31 -- accel/accel.sh@20 -- # val=32 00:07:54.434 05:03:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.434 05:03:31 -- accel/accel.sh@19 -- # IFS=: 00:07:54.434 05:03:31 -- accel/accel.sh@19 -- # read -r var val 00:07:54.434 05:03:31 -- accel/accel.sh@20 -- # val=1 00:07:54.434 05:03:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.434 05:03:31 -- accel/accel.sh@19 -- # IFS=: 00:07:54.434 05:03:31 -- accel/accel.sh@19 -- # read -r var val 00:07:54.434 05:03:31 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:54.434 05:03:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.434 05:03:31 -- accel/accel.sh@19 -- # IFS=: 00:07:54.434 05:03:31 -- accel/accel.sh@19 -- # read -r var val 00:07:54.434 05:03:31 -- accel/accel.sh@20 -- # val=No 00:07:54.434 05:03:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.434 05:03:31 -- accel/accel.sh@19 -- # IFS=: 00:07:54.434 05:03:31 -- accel/accel.sh@19 -- # read -r var val 00:07:54.434 05:03:31 -- accel/accel.sh@20 -- # val= 00:07:54.434 05:03:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.434 05:03:31 -- accel/accel.sh@19 -- # IFS=: 00:07:54.434 05:03:31 -- accel/accel.sh@19 -- # read -r var val 00:07:54.434 05:03:31 -- accel/accel.sh@20 -- # val= 00:07:54.434 05:03:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.434 05:03:31 -- accel/accel.sh@19 -- # IFS=: 00:07:54.434 05:03:31 -- accel/accel.sh@19 -- # read -r var val 00:07:55.805 05:03:32 -- accel/accel.sh@20 -- # val= 00:07:55.805 05:03:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.805 05:03:32 -- accel/accel.sh@19 -- # IFS=: 00:07:55.805 05:03:32 -- accel/accel.sh@19 -- # read -r var val 00:07:55.805 05:03:32 -- accel/accel.sh@20 -- # val= 00:07:55.805 05:03:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.805 05:03:32 -- accel/accel.sh@19 -- # IFS=: 00:07:55.805 05:03:32 -- accel/accel.sh@19 -- # read -r 
var val 00:07:55.805 05:03:32 -- accel/accel.sh@20 -- # val= 00:07:55.805 05:03:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.805 05:03:32 -- accel/accel.sh@19 -- # IFS=: 00:07:55.805 05:03:32 -- accel/accel.sh@19 -- # read -r var val 00:07:55.805 05:03:32 -- accel/accel.sh@20 -- # val= 00:07:55.805 05:03:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.805 05:03:32 -- accel/accel.sh@19 -- # IFS=: 00:07:55.805 05:03:32 -- accel/accel.sh@19 -- # read -r var val 00:07:55.805 05:03:32 -- accel/accel.sh@20 -- # val= 00:07:55.805 05:03:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.805 05:03:32 -- accel/accel.sh@19 -- # IFS=: 00:07:55.805 05:03:32 -- accel/accel.sh@19 -- # read -r var val 00:07:55.805 05:03:32 -- accel/accel.sh@20 -- # val= 00:07:55.805 05:03:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.805 05:03:32 -- accel/accel.sh@19 -- # IFS=: 00:07:55.805 05:03:32 -- accel/accel.sh@19 -- # read -r var val 00:07:55.805 05:03:32 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:55.805 05:03:32 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:55.805 05:03:32 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:55.805 00:07:55.805 real 0m1.404s 00:07:55.805 user 0m1.261s 00:07:55.805 sys 0m0.146s 00:07:55.805 05:03:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:55.805 05:03:32 -- common/autotest_common.sh@10 -- # set +x 00:07:55.805 ************************************ 00:07:55.805 END TEST accel_dif_verify 00:07:55.805 ************************************ 00:07:55.805 05:03:32 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:55.805 05:03:32 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:55.805 05:03:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:55.805 05:03:32 -- common/autotest_common.sh@10 -- # set +x 00:07:55.805 ************************************ 00:07:55.805 START TEST accel_dif_generate 00:07:55.805 
************************************ 00:07:55.805 05:03:32 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:07:55.805 05:03:32 -- accel/accel.sh@16 -- # local accel_opc 00:07:55.805 05:03:32 -- accel/accel.sh@17 -- # local accel_module 00:07:55.805 05:03:32 -- accel/accel.sh@19 -- # IFS=: 00:07:55.805 05:03:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:55.805 05:03:32 -- accel/accel.sh@19 -- # read -r var val 00:07:55.805 05:03:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:55.805 05:03:32 -- accel/accel.sh@12 -- # build_accel_config 00:07:55.805 05:03:32 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:55.805 05:03:32 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:55.805 05:03:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:55.805 05:03:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:55.805 05:03:32 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:55.805 05:03:32 -- accel/accel.sh@40 -- # local IFS=, 00:07:55.805 05:03:32 -- accel/accel.sh@41 -- # jq -r . 00:07:55.805 [2024-04-24 05:03:32.879223] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:07:55.805 [2024-04-24 05:03:32.879278] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1776691 ] 00:07:55.805 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.805 [2024-04-24 05:03:32.911566] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
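The dif_verify and dif_generate workloads above run on 4096-byte transfers with 512-byte blocks and an 8-byte DIF per block, per the `'4096 bytes'`, `'512 bytes'`, and `'8 bytes'` values in the trace. A minimal sketch of the T10 DIF idea — a 2-byte CRC-16/T10-DIF guard, 2-byte application tag, and 4-byte reference tag per block — under the assumption of the standard tuple layout; this is illustrative and not SPDK's dif library code:

```python
import struct

T10DIF_POLY = 0x8BB7  # CRC-16/T10-DIF generator polynomial

def crc16_t10dif(data: bytes, crc: int = 0) -> int:
    """Bitwise CRC-16/T10-DIF: MSB-first, init 0, no reflection or xorout."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = (crc << 1) ^ T10DIF_POLY if crc & 0x8000 else crc << 1
            crc &= 0xFFFF
    return crc

def dif_generate(blocks, app_tag=0, ref_base=0):
    """One 8-byte DIF tuple per block: guard CRC, app tag, reference tag
    (hypothetical helper names, not the SPDK accel API)."""
    return [struct.pack(">HHI", crc16_t10dif(blk), app_tag, ref_base + i)
            for i, blk in enumerate(blocks)]

def dif_verify(blocks, difs, app_tag=0, ref_base=0):
    """Recompute the tuples and compare, as the verify workload does."""
    return difs == dif_generate(blocks, app_tag, ref_base)

# Eight 512-byte blocks make up one 4096-byte transfer, as in the trace.
data = bytes(range(256)) * 16                     # 4096 bytes of sample data
blocks = [data[i:i + 512] for i in range(0, 4096, 512)]
difs = dif_generate(blocks)
assert dif_verify(blocks, difs)

# Any single corrupted byte changes the guard CRC, so verification fails.
corrupted = [blocks[0][:-1] + b"\x00"] + blocks[1:]
assert not dif_verify(corrupted, difs)
```

dif_generate_copy, traced next, combines the same tuple generation with a data copy into the destination buffer in one operation.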
00:07:55.805 [2024-04-24 05:03:32.943732] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.805 [2024-04-24 05:03:33.035017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.072 05:03:33 -- accel/accel.sh@20 -- # val= 00:07:56.072 05:03:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.072 05:03:33 -- accel/accel.sh@19 -- # IFS=: 00:07:56.072 05:03:33 -- accel/accel.sh@19 -- # read -r var val 00:07:56.072 05:03:33 -- accel/accel.sh@20 -- # val= 00:07:56.072 05:03:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.072 05:03:33 -- accel/accel.sh@19 -- # IFS=: 00:07:56.072 05:03:33 -- accel/accel.sh@19 -- # read -r var val 00:07:56.072 05:03:33 -- accel/accel.sh@20 -- # val=0x1 00:07:56.072 05:03:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.072 05:03:33 -- accel/accel.sh@19 -- # IFS=: 00:07:56.072 05:03:33 -- accel/accel.sh@19 -- # read -r var val 00:07:56.072 05:03:33 -- accel/accel.sh@20 -- # val= 00:07:56.072 05:03:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.072 05:03:33 -- accel/accel.sh@19 -- # IFS=: 00:07:56.072 05:03:33 -- accel/accel.sh@19 -- # read -r var val 00:07:56.072 05:03:33 -- accel/accel.sh@20 -- # val= 00:07:56.072 05:03:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.072 05:03:33 -- accel/accel.sh@19 -- # IFS=: 00:07:56.072 05:03:33 -- accel/accel.sh@19 -- # read -r var val 00:07:56.072 05:03:33 -- accel/accel.sh@20 -- # val=dif_generate 00:07:56.072 05:03:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.072 05:03:33 -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:56.072 05:03:33 -- accel/accel.sh@19 -- # IFS=: 00:07:56.072 05:03:33 -- accel/accel.sh@19 -- # read -r var val 00:07:56.072 05:03:33 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:56.072 05:03:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.072 05:03:33 -- accel/accel.sh@19 -- # IFS=: 00:07:56.072 05:03:33 -- accel/accel.sh@19 -- # read -r var val 00:07:56.072 05:03:33 -- accel/accel.sh@20 -- # 
val='4096 bytes' 00:07:56.072 05:03:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.072 05:03:33 -- accel/accel.sh@19 -- # IFS=: 00:07:56.072 05:03:33 -- accel/accel.sh@19 -- # read -r var val 00:07:56.072 05:03:33 -- accel/accel.sh@20 -- # val='512 bytes' 00:07:56.072 05:03:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.072 05:03:33 -- accel/accel.sh@19 -- # IFS=: 00:07:56.072 05:03:33 -- accel/accel.sh@19 -- # read -r var val 00:07:56.072 05:03:33 -- accel/accel.sh@20 -- # val='8 bytes' 00:07:56.072 05:03:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.072 05:03:33 -- accel/accel.sh@19 -- # IFS=: 00:07:56.072 05:03:33 -- accel/accel.sh@19 -- # read -r var val 00:07:56.072 05:03:33 -- accel/accel.sh@20 -- # val= 00:07:56.072 05:03:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.072 05:03:33 -- accel/accel.sh@19 -- # IFS=: 00:07:56.072 05:03:33 -- accel/accel.sh@19 -- # read -r var val 00:07:56.072 05:03:33 -- accel/accel.sh@20 -- # val=software 00:07:56.072 05:03:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.072 05:03:33 -- accel/accel.sh@22 -- # accel_module=software 00:07:56.072 05:03:33 -- accel/accel.sh@19 -- # IFS=: 00:07:56.072 05:03:33 -- accel/accel.sh@19 -- # read -r var val 00:07:56.072 05:03:33 -- accel/accel.sh@20 -- # val=32 00:07:56.072 05:03:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.072 05:03:33 -- accel/accel.sh@19 -- # IFS=: 00:07:56.072 05:03:33 -- accel/accel.sh@19 -- # read -r var val 00:07:56.072 05:03:33 -- accel/accel.sh@20 -- # val=32 00:07:56.072 05:03:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.072 05:03:33 -- accel/accel.sh@19 -- # IFS=: 00:07:56.072 05:03:33 -- accel/accel.sh@19 -- # read -r var val 00:07:56.072 05:03:33 -- accel/accel.sh@20 -- # val=1 00:07:56.072 05:03:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.072 05:03:33 -- accel/accel.sh@19 -- # IFS=: 00:07:56.072 05:03:33 -- accel/accel.sh@19 -- # read -r var val 00:07:56.072 05:03:33 -- accel/accel.sh@20 -- # val='1 
seconds' 00:07:56.072 05:03:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.072 05:03:33 -- accel/accel.sh@19 -- # IFS=: 00:07:56.072 05:03:33 -- accel/accel.sh@19 -- # read -r var val 00:07:56.072 05:03:33 -- accel/accel.sh@20 -- # val=No 00:07:56.072 05:03:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.072 05:03:33 -- accel/accel.sh@19 -- # IFS=: 00:07:56.072 05:03:33 -- accel/accel.sh@19 -- # read -r var val 00:07:56.072 05:03:33 -- accel/accel.sh@20 -- # val= 00:07:56.072 05:03:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.072 05:03:33 -- accel/accel.sh@19 -- # IFS=: 00:07:56.072 05:03:33 -- accel/accel.sh@19 -- # read -r var val 00:07:56.072 05:03:33 -- accel/accel.sh@20 -- # val= 00:07:56.072 05:03:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.072 05:03:33 -- accel/accel.sh@19 -- # IFS=: 00:07:56.072 05:03:33 -- accel/accel.sh@19 -- # read -r var val 00:07:57.036 05:03:34 -- accel/accel.sh@20 -- # val= 00:07:57.036 05:03:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.036 05:03:34 -- accel/accel.sh@19 -- # IFS=: 00:07:57.036 05:03:34 -- accel/accel.sh@19 -- # read -r var val 00:07:57.036 05:03:34 -- accel/accel.sh@20 -- # val= 00:07:57.036 05:03:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.036 05:03:34 -- accel/accel.sh@19 -- # IFS=: 00:07:57.036 05:03:34 -- accel/accel.sh@19 -- # read -r var val 00:07:57.036 05:03:34 -- accel/accel.sh@20 -- # val= 00:07:57.036 05:03:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.036 05:03:34 -- accel/accel.sh@19 -- # IFS=: 00:07:57.036 05:03:34 -- accel/accel.sh@19 -- # read -r var val 00:07:57.036 05:03:34 -- accel/accel.sh@20 -- # val= 00:07:57.036 05:03:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.036 05:03:34 -- accel/accel.sh@19 -- # IFS=: 00:07:57.036 05:03:34 -- accel/accel.sh@19 -- # read -r var val 00:07:57.036 05:03:34 -- accel/accel.sh@20 -- # val= 00:07:57.036 05:03:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.036 05:03:34 -- accel/accel.sh@19 -- # IFS=: 
00:07:57.036 05:03:34 -- accel/accel.sh@19 -- # read -r var val 00:07:57.036 05:03:34 -- accel/accel.sh@20 -- # val= 00:07:57.036 05:03:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.036 05:03:34 -- accel/accel.sh@19 -- # IFS=: 00:07:57.036 05:03:34 -- accel/accel.sh@19 -- # read -r var val 00:07:57.036 05:03:34 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:57.036 05:03:34 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:57.036 05:03:34 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:57.036 00:07:57.036 real 0m1.411s 00:07:57.036 user 0m1.271s 00:07:57.036 sys 0m0.144s 00:07:57.036 05:03:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:57.036 05:03:34 -- common/autotest_common.sh@10 -- # set +x 00:07:57.036 ************************************ 00:07:57.036 END TEST accel_dif_generate 00:07:57.036 ************************************ 00:07:57.036 05:03:34 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:57.036 05:03:34 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:57.036 05:03:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:57.036 05:03:34 -- common/autotest_common.sh@10 -- # set +x 00:07:57.294 ************************************ 00:07:57.294 START TEST accel_dif_generate_copy 00:07:57.295 ************************************ 00:07:57.295 05:03:34 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:07:57.295 05:03:34 -- accel/accel.sh@16 -- # local accel_opc 00:07:57.295 05:03:34 -- accel/accel.sh@17 -- # local accel_module 00:07:57.295 05:03:34 -- accel/accel.sh@19 -- # IFS=: 00:07:57.295 05:03:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:57.295 05:03:34 -- accel/accel.sh@19 -- # read -r var val 00:07:57.295 05:03:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:57.295 05:03:34 -- 
accel/accel.sh@12 -- # build_accel_config 00:07:57.295 05:03:34 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:57.295 05:03:34 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:57.295 05:03:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:57.295 05:03:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:57.295 05:03:34 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:57.295 05:03:34 -- accel/accel.sh@40 -- # local IFS=, 00:07:57.295 05:03:34 -- accel/accel.sh@41 -- # jq -r . 00:07:57.295 [2024-04-24 05:03:34.411095] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:07:57.295 [2024-04-24 05:03:34.411160] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1776929 ] 00:07:57.295 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.295 [2024-04-24 05:03:34.443128] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:57.295 [2024-04-24 05:03:34.472908] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.295 [2024-04-24 05:03:34.563913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.553 05:03:34 -- accel/accel.sh@20 -- # val= 00:07:57.553 05:03:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.553 05:03:34 -- accel/accel.sh@19 -- # IFS=: 00:07:57.553 05:03:34 -- accel/accel.sh@19 -- # read -r var val 00:07:57.553 05:03:34 -- accel/accel.sh@20 -- # val= 00:07:57.553 05:03:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.553 05:03:34 -- accel/accel.sh@19 -- # IFS=: 00:07:57.553 05:03:34 -- accel/accel.sh@19 -- # read -r var val 00:07:57.553 05:03:34 -- accel/accel.sh@20 -- # val=0x1 00:07:57.553 05:03:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.553 05:03:34 -- accel/accel.sh@19 -- # IFS=: 00:07:57.553 05:03:34 -- accel/accel.sh@19 -- # read -r var val 00:07:57.553 05:03:34 -- accel/accel.sh@20 -- # val= 00:07:57.553 05:03:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.553 05:03:34 -- accel/accel.sh@19 -- # IFS=: 00:07:57.553 05:03:34 -- accel/accel.sh@19 -- # read -r var val 00:07:57.553 05:03:34 -- accel/accel.sh@20 -- # val= 00:07:57.553 05:03:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.553 05:03:34 -- accel/accel.sh@19 -- # IFS=: 00:07:57.553 05:03:34 -- accel/accel.sh@19 -- # read -r var val 00:07:57.553 05:03:34 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:57.553 05:03:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.553 05:03:34 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:57.553 05:03:34 -- accel/accel.sh@19 -- # IFS=: 00:07:57.553 05:03:34 -- accel/accel.sh@19 -- # read -r var val 00:07:57.553 05:03:34 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:57.553 05:03:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.553 05:03:34 -- accel/accel.sh@19 -- # IFS=: 00:07:57.553 05:03:34 -- accel/accel.sh@19 -- # read -r var val 00:07:57.553 05:03:34 -- accel/accel.sh@20 
-- # val='4096 bytes' 00:07:57.553 05:03:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.553 05:03:34 -- accel/accel.sh@19 -- # IFS=: 00:07:57.553 05:03:34 -- accel/accel.sh@19 -- # read -r var val 00:07:57.553 05:03:34 -- accel/accel.sh@20 -- # val= 00:07:57.553 05:03:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.553 05:03:34 -- accel/accel.sh@19 -- # IFS=: 00:07:57.553 05:03:34 -- accel/accel.sh@19 -- # read -r var val 00:07:57.553 05:03:34 -- accel/accel.sh@20 -- # val=software 00:07:57.553 05:03:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.553 05:03:34 -- accel/accel.sh@22 -- # accel_module=software 00:07:57.553 05:03:34 -- accel/accel.sh@19 -- # IFS=: 00:07:57.553 05:03:34 -- accel/accel.sh@19 -- # read -r var val 00:07:57.553 05:03:34 -- accel/accel.sh@20 -- # val=32 00:07:57.553 05:03:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.553 05:03:34 -- accel/accel.sh@19 -- # IFS=: 00:07:57.553 05:03:34 -- accel/accel.sh@19 -- # read -r var val 00:07:57.553 05:03:34 -- accel/accel.sh@20 -- # val=32 00:07:57.553 05:03:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.553 05:03:34 -- accel/accel.sh@19 -- # IFS=: 00:07:57.553 05:03:34 -- accel/accel.sh@19 -- # read -r var val 00:07:57.553 05:03:34 -- accel/accel.sh@20 -- # val=1 00:07:57.553 05:03:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.553 05:03:34 -- accel/accel.sh@19 -- # IFS=: 00:07:57.553 05:03:34 -- accel/accel.sh@19 -- # read -r var val 00:07:57.553 05:03:34 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:57.553 05:03:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.553 05:03:34 -- accel/accel.sh@19 -- # IFS=: 00:07:57.553 05:03:34 -- accel/accel.sh@19 -- # read -r var val 00:07:57.553 05:03:34 -- accel/accel.sh@20 -- # val=No 00:07:57.553 05:03:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.553 05:03:34 -- accel/accel.sh@19 -- # IFS=: 00:07:57.553 05:03:34 -- accel/accel.sh@19 -- # read -r var val 00:07:57.553 05:03:34 -- accel/accel.sh@20 -- # val= 
00:07:57.553 05:03:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.553 05:03:34 -- accel/accel.sh@19 -- # IFS=: 00:07:57.553 05:03:34 -- accel/accel.sh@19 -- # read -r var val 00:07:57.553 05:03:34 -- accel/accel.sh@20 -- # val= 00:07:57.553 05:03:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.553 05:03:34 -- accel/accel.sh@19 -- # IFS=: 00:07:57.553 05:03:34 -- accel/accel.sh@19 -- # read -r var val 00:07:58.930 05:03:35 -- accel/accel.sh@20 -- # val= 00:07:58.930 05:03:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.930 05:03:35 -- accel/accel.sh@19 -- # IFS=: 00:07:58.930 05:03:35 -- accel/accel.sh@19 -- # read -r var val 00:07:58.930 05:03:35 -- accel/accel.sh@20 -- # val= 00:07:58.930 05:03:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.930 05:03:35 -- accel/accel.sh@19 -- # IFS=: 00:07:58.930 05:03:35 -- accel/accel.sh@19 -- # read -r var val 00:07:58.930 05:03:35 -- accel/accel.sh@20 -- # val= 00:07:58.930 05:03:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.930 05:03:35 -- accel/accel.sh@19 -- # IFS=: 00:07:58.930 05:03:35 -- accel/accel.sh@19 -- # read -r var val 00:07:58.930 05:03:35 -- accel/accel.sh@20 -- # val= 00:07:58.930 05:03:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.930 05:03:35 -- accel/accel.sh@19 -- # IFS=: 00:07:58.930 05:03:35 -- accel/accel.sh@19 -- # read -r var val 00:07:58.930 05:03:35 -- accel/accel.sh@20 -- # val= 00:07:58.930 05:03:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.930 05:03:35 -- accel/accel.sh@19 -- # IFS=: 00:07:58.930 05:03:35 -- accel/accel.sh@19 -- # read -r var val 00:07:58.930 05:03:35 -- accel/accel.sh@20 -- # val= 00:07:58.930 05:03:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.930 05:03:35 -- accel/accel.sh@19 -- # IFS=: 00:07:58.930 05:03:35 -- accel/accel.sh@19 -- # read -r var val 00:07:58.930 05:03:35 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:58.930 05:03:35 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:58.930 05:03:35 -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:58.930 00:07:58.930 real 0m1.406s 00:07:58.930 user 0m1.264s 00:07:58.930 sys 0m0.144s 00:07:58.930 05:03:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:58.930 05:03:35 -- common/autotest_common.sh@10 -- # set +x 00:07:58.930 ************************************ 00:07:58.930 END TEST accel_dif_generate_copy 00:07:58.930 ************************************ 00:07:58.930 05:03:35 -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:58.930 05:03:35 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:58.930 05:03:35 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:58.930 05:03:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:58.930 05:03:35 -- common/autotest_common.sh@10 -- # set +x 00:07:58.930 ************************************ 00:07:58.930 START TEST accel_comp 00:07:58.930 ************************************ 00:07:58.930 05:03:35 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:58.930 05:03:35 -- accel/accel.sh@16 -- # local accel_opc 00:07:58.930 05:03:35 -- accel/accel.sh@17 -- # local accel_module 00:07:58.930 05:03:35 -- accel/accel.sh@19 -- # IFS=: 00:07:58.930 05:03:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:58.930 05:03:35 -- accel/accel.sh@19 -- # read -r var val 00:07:58.930 05:03:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:58.930 05:03:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:58.930 05:03:35 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:58.930 05:03:35 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 
00:07:58.930 05:03:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:58.930 05:03:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:58.930 05:03:35 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:58.930 05:03:35 -- accel/accel.sh@40 -- # local IFS=, 00:07:58.930 05:03:35 -- accel/accel.sh@41 -- # jq -r . 00:07:58.930 [2024-04-24 05:03:35.932449] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:07:58.930 [2024-04-24 05:03:35.932513] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1777102 ] 00:07:58.930 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.930 [2024-04-24 05:03:35.965217] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:58.930 [2024-04-24 05:03:35.991699] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.930 [2024-04-24 05:03:36.080192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.930 05:03:36 -- accel/accel.sh@20 -- # val= 00:07:58.930 05:03:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.930 05:03:36 -- accel/accel.sh@19 -- # IFS=: 00:07:58.930 05:03:36 -- accel/accel.sh@19 -- # read -r var val 00:07:58.930 05:03:36 -- accel/accel.sh@20 -- # val= 00:07:58.930 05:03:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.930 05:03:36 -- accel/accel.sh@19 -- # IFS=: 00:07:58.930 05:03:36 -- accel/accel.sh@19 -- # read -r var val 00:07:58.930 05:03:36 -- accel/accel.sh@20 -- # val= 00:07:58.930 05:03:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.930 05:03:36 -- accel/accel.sh@19 -- # IFS=: 00:07:58.930 05:03:36 -- accel/accel.sh@19 -- # read -r var val 00:07:58.930 05:03:36 -- accel/accel.sh@20 -- # val=0x1 00:07:58.930 05:03:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.930 
05:03:36 -- accel/accel.sh@19 -- # IFS=: 00:07:58.930 05:03:36 -- accel/accel.sh@19 -- # read -r var val 00:07:58.930 05:03:36 -- accel/accel.sh@20 -- # val= 00:07:58.931 05:03:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.931 05:03:36 -- accel/accel.sh@19 -- # IFS=: 00:07:58.931 05:03:36 -- accel/accel.sh@19 -- # read -r var val 00:07:58.931 05:03:36 -- accel/accel.sh@20 -- # val= 00:07:58.931 05:03:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.931 05:03:36 -- accel/accel.sh@19 -- # IFS=: 00:07:58.931 05:03:36 -- accel/accel.sh@19 -- # read -r var val 00:07:58.931 05:03:36 -- accel/accel.sh@20 -- # val=compress 00:07:58.931 05:03:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.931 05:03:36 -- accel/accel.sh@23 -- # accel_opc=compress 00:07:58.931 05:03:36 -- accel/accel.sh@19 -- # IFS=: 00:07:58.931 05:03:36 -- accel/accel.sh@19 -- # read -r var val 00:07:58.931 05:03:36 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:58.931 05:03:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.931 05:03:36 -- accel/accel.sh@19 -- # IFS=: 00:07:58.931 05:03:36 -- accel/accel.sh@19 -- # read -r var val 00:07:58.931 05:03:36 -- accel/accel.sh@20 -- # val= 00:07:58.931 05:03:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.931 05:03:36 -- accel/accel.sh@19 -- # IFS=: 00:07:58.931 05:03:36 -- accel/accel.sh@19 -- # read -r var val 00:07:58.931 05:03:36 -- accel/accel.sh@20 -- # val=software 00:07:58.931 05:03:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.931 05:03:36 -- accel/accel.sh@22 -- # accel_module=software 00:07:58.931 05:03:36 -- accel/accel.sh@19 -- # IFS=: 00:07:58.931 05:03:36 -- accel/accel.sh@19 -- # read -r var val 00:07:58.931 05:03:36 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:58.931 05:03:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.931 05:03:36 -- accel/accel.sh@19 -- # IFS=: 00:07:58.931 05:03:36 -- accel/accel.sh@19 -- # read -r var val 00:07:58.931 05:03:36 -- 
accel/accel.sh@20 -- # val=32 00:07:58.931 05:03:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.931 05:03:36 -- accel/accel.sh@19 -- # IFS=: 00:07:58.931 05:03:36 -- accel/accel.sh@19 -- # read -r var val 00:07:58.931 05:03:36 -- accel/accel.sh@20 -- # val=32 00:07:58.931 05:03:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.931 05:03:36 -- accel/accel.sh@19 -- # IFS=: 00:07:58.931 05:03:36 -- accel/accel.sh@19 -- # read -r var val 00:07:58.931 05:03:36 -- accel/accel.sh@20 -- # val=1 00:07:58.931 05:03:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.931 05:03:36 -- accel/accel.sh@19 -- # IFS=: 00:07:58.931 05:03:36 -- accel/accel.sh@19 -- # read -r var val 00:07:58.931 05:03:36 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:58.931 05:03:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.931 05:03:36 -- accel/accel.sh@19 -- # IFS=: 00:07:58.931 05:03:36 -- accel/accel.sh@19 -- # read -r var val 00:07:58.931 05:03:36 -- accel/accel.sh@20 -- # val=No 00:07:58.931 05:03:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.931 05:03:36 -- accel/accel.sh@19 -- # IFS=: 00:07:58.931 05:03:36 -- accel/accel.sh@19 -- # read -r var val 00:07:58.931 05:03:36 -- accel/accel.sh@20 -- # val= 00:07:58.931 05:03:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.931 05:03:36 -- accel/accel.sh@19 -- # IFS=: 00:07:58.931 05:03:36 -- accel/accel.sh@19 -- # read -r var val 00:07:58.931 05:03:36 -- accel/accel.sh@20 -- # val= 00:07:58.931 05:03:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.931 05:03:36 -- accel/accel.sh@19 -- # IFS=: 00:07:58.931 05:03:36 -- accel/accel.sh@19 -- # read -r var val 00:08:00.306 05:03:37 -- accel/accel.sh@20 -- # val= 00:08:00.306 05:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.306 05:03:37 -- accel/accel.sh@19 -- # IFS=: 00:08:00.306 05:03:37 -- accel/accel.sh@19 -- # read -r var val 00:08:00.306 05:03:37 -- accel/accel.sh@20 -- # val= 00:08:00.306 05:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.306 
05:03:37 -- accel/accel.sh@19 -- # IFS=: 00:08:00.306 05:03:37 -- accel/accel.sh@19 -- # read -r var val 00:08:00.306 05:03:37 -- accel/accel.sh@20 -- # val= 00:08:00.306 05:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.306 05:03:37 -- accel/accel.sh@19 -- # IFS=: 00:08:00.306 05:03:37 -- accel/accel.sh@19 -- # read -r var val 00:08:00.306 05:03:37 -- accel/accel.sh@20 -- # val= 00:08:00.306 05:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.306 05:03:37 -- accel/accel.sh@19 -- # IFS=: 00:08:00.306 05:03:37 -- accel/accel.sh@19 -- # read -r var val 00:08:00.306 05:03:37 -- accel/accel.sh@20 -- # val= 00:08:00.306 05:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.306 05:03:37 -- accel/accel.sh@19 -- # IFS=: 00:08:00.306 05:03:37 -- accel/accel.sh@19 -- # read -r var val 00:08:00.306 05:03:37 -- accel/accel.sh@20 -- # val= 00:08:00.306 05:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.306 05:03:37 -- accel/accel.sh@19 -- # IFS=: 00:08:00.306 05:03:37 -- accel/accel.sh@19 -- # read -r var val 00:08:00.306 05:03:37 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:00.306 05:03:37 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:00.306 05:03:37 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:00.306 00:08:00.306 real 0m1.402s 00:08:00.306 user 0m1.269s 00:08:00.306 sys 0m0.136s 00:08:00.306 05:03:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:00.306 05:03:37 -- common/autotest_common.sh@10 -- # set +x 00:08:00.306 ************************************ 00:08:00.306 END TEST accel_comp 00:08:00.306 ************************************ 00:08:00.306 05:03:37 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:00.306 05:03:37 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:08:00.306 05:03:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:00.306 05:03:37 -- 
common/autotest_common.sh@10 -- # set +x 00:08:00.306 ************************************ 00:08:00.306 START TEST accel_decomp 00:08:00.306 ************************************ 00:08:00.306 05:03:37 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:00.306 05:03:37 -- accel/accel.sh@16 -- # local accel_opc 00:08:00.306 05:03:37 -- accel/accel.sh@17 -- # local accel_module 00:08:00.306 05:03:37 -- accel/accel.sh@19 -- # IFS=: 00:08:00.306 05:03:37 -- accel/accel.sh@19 -- # read -r var val 00:08:00.306 05:03:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:00.306 05:03:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:00.306 05:03:37 -- accel/accel.sh@12 -- # build_accel_config 00:08:00.306 05:03:37 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:00.306 05:03:37 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:00.306 05:03:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:00.306 05:03:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:00.306 05:03:37 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:00.306 05:03:37 -- accel/accel.sh@40 -- # local IFS=, 00:08:00.306 05:03:37 -- accel/accel.sh@41 -- # jq -r . 00:08:00.306 [2024-04-24 05:03:37.450474] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 
00:08:00.306 [2024-04-24 05:03:37.450538] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1777310 ] 00:08:00.306 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.306 [2024-04-24 05:03:37.486490] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:00.306 [2024-04-24 05:03:37.516969] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.564 [2024-04-24 05:03:37.608523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.564 05:03:37 -- accel/accel.sh@20 -- # val= 00:08:00.564 05:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.564 05:03:37 -- accel/accel.sh@19 -- # IFS=: 00:08:00.564 05:03:37 -- accel/accel.sh@19 -- # read -r var val 00:08:00.564 05:03:37 -- accel/accel.sh@20 -- # val= 00:08:00.564 05:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.564 05:03:37 -- accel/accel.sh@19 -- # IFS=: 00:08:00.564 05:03:37 -- accel/accel.sh@19 -- # read -r var val 00:08:00.564 05:03:37 -- accel/accel.sh@20 -- # val= 00:08:00.564 05:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.564 05:03:37 -- accel/accel.sh@19 -- # IFS=: 00:08:00.564 05:03:37 -- accel/accel.sh@19 -- # read -r var val 00:08:00.564 05:03:37 -- accel/accel.sh@20 -- # val=0x1 00:08:00.564 05:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.564 05:03:37 -- accel/accel.sh@19 -- # IFS=: 00:08:00.564 05:03:37 -- accel/accel.sh@19 -- # read -r var val 00:08:00.564 05:03:37 -- accel/accel.sh@20 -- # val= 00:08:00.564 05:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.564 05:03:37 -- accel/accel.sh@19 -- # IFS=: 00:08:00.564 05:03:37 -- accel/accel.sh@19 -- # read -r var val 00:08:00.564 05:03:37 -- accel/accel.sh@20 -- # val= 00:08:00.564 05:03:37 -- 
accel/accel.sh@21 -- # case "$var" in 00:08:00.564 05:03:37 -- accel/accel.sh@19 -- # IFS=: 00:08:00.564 05:03:37 -- accel/accel.sh@19 -- # read -r var val 00:08:00.564 05:03:37 -- accel/accel.sh@20 -- # val=decompress 00:08:00.564 05:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.564 05:03:37 -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:00.564 05:03:37 -- accel/accel.sh@19 -- # IFS=: 00:08:00.564 05:03:37 -- accel/accel.sh@19 -- # read -r var val 00:08:00.564 05:03:37 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:00.564 05:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.564 05:03:37 -- accel/accel.sh@19 -- # IFS=: 00:08:00.564 05:03:37 -- accel/accel.sh@19 -- # read -r var val 00:08:00.564 05:03:37 -- accel/accel.sh@20 -- # val= 00:08:00.564 05:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.564 05:03:37 -- accel/accel.sh@19 -- # IFS=: 00:08:00.564 05:03:37 -- accel/accel.sh@19 -- # read -r var val 00:08:00.564 05:03:37 -- accel/accel.sh@20 -- # val=software 00:08:00.564 05:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.564 05:03:37 -- accel/accel.sh@22 -- # accel_module=software 00:08:00.565 05:03:37 -- accel/accel.sh@19 -- # IFS=: 00:08:00.565 05:03:37 -- accel/accel.sh@19 -- # read -r var val 00:08:00.565 05:03:37 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:00.565 05:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.565 05:03:37 -- accel/accel.sh@19 -- # IFS=: 00:08:00.565 05:03:37 -- accel/accel.sh@19 -- # read -r var val 00:08:00.565 05:03:37 -- accel/accel.sh@20 -- # val=32 00:08:00.565 05:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.565 05:03:37 -- accel/accel.sh@19 -- # IFS=: 00:08:00.565 05:03:37 -- accel/accel.sh@19 -- # read -r var val 00:08:00.565 05:03:37 -- accel/accel.sh@20 -- # val=32 00:08:00.565 05:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.565 05:03:37 -- accel/accel.sh@19 -- # IFS=: 00:08:00.565 05:03:37 -- 
accel/accel.sh@19 -- # read -r var val 00:08:00.565 05:03:37 -- accel/accel.sh@20 -- # val=1 00:08:00.565 05:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.565 05:03:37 -- accel/accel.sh@19 -- # IFS=: 00:08:00.565 05:03:37 -- accel/accel.sh@19 -- # read -r var val 00:08:00.565 05:03:37 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:00.565 05:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.565 05:03:37 -- accel/accel.sh@19 -- # IFS=: 00:08:00.565 05:03:37 -- accel/accel.sh@19 -- # read -r var val 00:08:00.565 05:03:37 -- accel/accel.sh@20 -- # val=Yes 00:08:00.565 05:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.565 05:03:37 -- accel/accel.sh@19 -- # IFS=: 00:08:00.565 05:03:37 -- accel/accel.sh@19 -- # read -r var val 00:08:00.565 05:03:37 -- accel/accel.sh@20 -- # val= 00:08:00.565 05:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.565 05:03:37 -- accel/accel.sh@19 -- # IFS=: 00:08:00.565 05:03:37 -- accel/accel.sh@19 -- # read -r var val 00:08:00.565 05:03:37 -- accel/accel.sh@20 -- # val= 00:08:00.565 05:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.565 05:03:37 -- accel/accel.sh@19 -- # IFS=: 00:08:00.565 05:03:37 -- accel/accel.sh@19 -- # read -r var val 00:08:01.936 05:03:38 -- accel/accel.sh@20 -- # val= 00:08:01.936 05:03:38 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.936 05:03:38 -- accel/accel.sh@19 -- # IFS=: 00:08:01.936 05:03:38 -- accel/accel.sh@19 -- # read -r var val 00:08:01.936 05:03:38 -- accel/accel.sh@20 -- # val= 00:08:01.936 05:03:38 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.936 05:03:38 -- accel/accel.sh@19 -- # IFS=: 00:08:01.936 05:03:38 -- accel/accel.sh@19 -- # read -r var val 00:08:01.936 05:03:38 -- accel/accel.sh@20 -- # val= 00:08:01.936 05:03:38 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.936 05:03:38 -- accel/accel.sh@19 -- # IFS=: 00:08:01.936 05:03:38 -- accel/accel.sh@19 -- # read -r var val 00:08:01.936 05:03:38 -- accel/accel.sh@20 -- # val= 00:08:01.936 
05:03:38 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.936 05:03:38 -- accel/accel.sh@19 -- # IFS=: 00:08:01.936 05:03:38 -- accel/accel.sh@19 -- # read -r var val 00:08:01.936 05:03:38 -- accel/accel.sh@20 -- # val= 00:08:01.936 05:03:38 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.936 05:03:38 -- accel/accel.sh@19 -- # IFS=: 00:08:01.936 05:03:38 -- accel/accel.sh@19 -- # read -r var val 00:08:01.936 05:03:38 -- accel/accel.sh@20 -- # val= 00:08:01.936 05:03:38 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.936 05:03:38 -- accel/accel.sh@19 -- # IFS=: 00:08:01.936 05:03:38 -- accel/accel.sh@19 -- # read -r var val 00:08:01.936 05:03:38 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:01.936 05:03:38 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:01.936 05:03:38 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:01.936 00:08:01.936 real 0m1.414s 00:08:01.936 user 0m1.268s 00:08:01.936 sys 0m0.149s 00:08:01.936 05:03:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:01.936 05:03:38 -- common/autotest_common.sh@10 -- # set +x 00:08:01.936 ************************************ 00:08:01.936 END TEST accel_decomp 00:08:01.936 ************************************ 00:08:01.936 05:03:38 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:01.936 05:03:38 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:08:01.936 05:03:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:01.936 05:03:38 -- common/autotest_common.sh@10 -- # set +x 00:08:01.936 ************************************ 00:08:01.936 START TEST accel_decmop_full 00:08:01.936 ************************************ 00:08:01.936 05:03:38 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:01.936 05:03:38 -- accel/accel.sh@16 -- # local accel_opc 
00:08:01.936 05:03:38 -- accel/accel.sh@17 -- # local accel_module 00:08:01.936 05:03:38 -- accel/accel.sh@19 -- # IFS=: 00:08:01.936 05:03:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:01.936 05:03:38 -- accel/accel.sh@19 -- # read -r var val 00:08:01.936 05:03:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:01.936 05:03:38 -- accel/accel.sh@12 -- # build_accel_config 00:08:01.936 05:03:38 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:01.936 05:03:38 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:01.936 05:03:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:01.936 05:03:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:01.936 05:03:38 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:01.936 05:03:38 -- accel/accel.sh@40 -- # local IFS=, 00:08:01.936 05:03:38 -- accel/accel.sh@41 -- # jq -r . 00:08:01.936 [2024-04-24 05:03:38.982779] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:08:01.936 [2024-04-24 05:03:38.982836] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1777542 ] 00:08:01.937 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.937 [2024-04-24 05:03:39.015665] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:01.937 [2024-04-24 05:03:39.045385] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.937 [2024-04-24 05:03:39.136342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.937 05:03:39 -- accel/accel.sh@20 -- # val= 00:08:01.937 05:03:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.937 05:03:39 -- accel/accel.sh@19 -- # IFS=: 00:08:01.937 05:03:39 -- accel/accel.sh@19 -- # read -r var val 00:08:01.937 05:03:39 -- accel/accel.sh@20 -- # val= 00:08:01.937 05:03:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.937 05:03:39 -- accel/accel.sh@19 -- # IFS=: 00:08:01.937 05:03:39 -- accel/accel.sh@19 -- # read -r var val 00:08:01.937 05:03:39 -- accel/accel.sh@20 -- # val= 00:08:01.937 05:03:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.937 05:03:39 -- accel/accel.sh@19 -- # IFS=: 00:08:01.937 05:03:39 -- accel/accel.sh@19 -- # read -r var val 00:08:01.937 05:03:39 -- accel/accel.sh@20 -- # val=0x1 00:08:01.937 05:03:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.937 05:03:39 -- accel/accel.sh@19 -- # IFS=: 00:08:01.937 05:03:39 -- accel/accel.sh@19 -- # read -r var val 00:08:01.937 05:03:39 -- accel/accel.sh@20 -- # val= 00:08:01.937 05:03:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.937 05:03:39 -- accel/accel.sh@19 -- # IFS=: 00:08:01.937 05:03:39 -- accel/accel.sh@19 -- # read -r var val 00:08:01.937 05:03:39 -- accel/accel.sh@20 -- # val= 00:08:01.937 05:03:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.937 05:03:39 -- accel/accel.sh@19 -- # IFS=: 00:08:01.937 05:03:39 -- accel/accel.sh@19 -- # read -r var val 00:08:01.937 05:03:39 -- accel/accel.sh@20 -- # val=decompress 00:08:01.937 05:03:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.937 05:03:39 -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:01.937 05:03:39 -- accel/accel.sh@19 -- # IFS=: 00:08:01.937 05:03:39 -- accel/accel.sh@19 -- # read -r var val 00:08:01.937 05:03:39 -- accel/accel.sh@20 -- # val='111250 bytes' 
00:08:01.937 05:03:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.937 05:03:39 -- accel/accel.sh@19 -- # IFS=: 00:08:01.937 05:03:39 -- accel/accel.sh@19 -- # read -r var val 00:08:01.937 05:03:39 -- accel/accel.sh@20 -- # val= 00:08:01.937 05:03:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.937 05:03:39 -- accel/accel.sh@19 -- # IFS=: 00:08:01.937 05:03:39 -- accel/accel.sh@19 -- # read -r var val 00:08:01.937 05:03:39 -- accel/accel.sh@20 -- # val=software 00:08:01.937 05:03:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.937 05:03:39 -- accel/accel.sh@22 -- # accel_module=software 00:08:01.937 05:03:39 -- accel/accel.sh@19 -- # IFS=: 00:08:01.937 05:03:39 -- accel/accel.sh@19 -- # read -r var val 00:08:01.937 05:03:39 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:01.937 05:03:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.937 05:03:39 -- accel/accel.sh@19 -- # IFS=: 00:08:01.937 05:03:39 -- accel/accel.sh@19 -- # read -r var val 00:08:01.937 05:03:39 -- accel/accel.sh@20 -- # val=32 00:08:01.937 05:03:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.937 05:03:39 -- accel/accel.sh@19 -- # IFS=: 00:08:01.937 05:03:39 -- accel/accel.sh@19 -- # read -r var val 00:08:01.937 05:03:39 -- accel/accel.sh@20 -- # val=32 00:08:01.937 05:03:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.937 05:03:39 -- accel/accel.sh@19 -- # IFS=: 00:08:01.937 05:03:39 -- accel/accel.sh@19 -- # read -r var val 00:08:01.937 05:03:39 -- accel/accel.sh@20 -- # val=1 00:08:02.194 05:03:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.194 05:03:39 -- accel/accel.sh@19 -- # IFS=: 00:08:02.194 05:03:39 -- accel/accel.sh@19 -- # read -r var val 00:08:02.194 05:03:39 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:02.194 05:03:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.194 05:03:39 -- accel/accel.sh@19 -- # IFS=: 00:08:02.194 05:03:39 -- accel/accel.sh@19 -- # read -r var val 00:08:02.194 05:03:39 
-- accel/accel.sh@20 -- # val=Yes 00:08:02.194 05:03:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.194 05:03:39 -- accel/accel.sh@19 -- # IFS=: 00:08:02.194 05:03:39 -- accel/accel.sh@19 -- # read -r var val 00:08:02.194 05:03:39 -- accel/accel.sh@20 -- # val= 00:08:02.194 05:03:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.194 05:03:39 -- accel/accel.sh@19 -- # IFS=: 00:08:02.194 05:03:39 -- accel/accel.sh@19 -- # read -r var val 00:08:02.194 05:03:39 -- accel/accel.sh@20 -- # val= 00:08:02.194 05:03:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.194 05:03:39 -- accel/accel.sh@19 -- # IFS=: 00:08:02.194 05:03:39 -- accel/accel.sh@19 -- # read -r var val 00:08:03.126 05:03:40 -- accel/accel.sh@20 -- # val= 00:08:03.126 05:03:40 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.126 05:03:40 -- accel/accel.sh@19 -- # IFS=: 00:08:03.126 05:03:40 -- accel/accel.sh@19 -- # read -r var val 00:08:03.126 05:03:40 -- accel/accel.sh@20 -- # val= 00:08:03.126 05:03:40 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.126 05:03:40 -- accel/accel.sh@19 -- # IFS=: 00:08:03.126 05:03:40 -- accel/accel.sh@19 -- # read -r var val 00:08:03.126 05:03:40 -- accel/accel.sh@20 -- # val= 00:08:03.126 05:03:40 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.126 05:03:40 -- accel/accel.sh@19 -- # IFS=: 00:08:03.126 05:03:40 -- accel/accel.sh@19 -- # read -r var val 00:08:03.126 05:03:40 -- accel/accel.sh@20 -- # val= 00:08:03.126 05:03:40 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.126 05:03:40 -- accel/accel.sh@19 -- # IFS=: 00:08:03.126 05:03:40 -- accel/accel.sh@19 -- # read -r var val 00:08:03.126 05:03:40 -- accel/accel.sh@20 -- # val= 00:08:03.126 05:03:40 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.126 05:03:40 -- accel/accel.sh@19 -- # IFS=: 00:08:03.126 05:03:40 -- accel/accel.sh@19 -- # read -r var val 00:08:03.126 05:03:40 -- accel/accel.sh@20 -- # val= 00:08:03.126 05:03:40 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.126 05:03:40 -- 
accel/accel.sh@19 -- # IFS=: 00:08:03.126 05:03:40 -- accel/accel.sh@19 -- # read -r var val 00:08:03.126 05:03:40 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:03.126 05:03:40 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:03.126 05:03:40 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:03.126 00:08:03.126 real 0m1.423s 00:08:03.126 user 0m1.278s 00:08:03.126 sys 0m0.147s 00:08:03.126 05:03:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:03.126 05:03:40 -- common/autotest_common.sh@10 -- # set +x 00:08:03.126 ************************************ 00:08:03.126 END TEST accel_decmop_full 00:08:03.126 ************************************ 00:08:03.385 05:03:40 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:03.385 05:03:40 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:08:03.385 05:03:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:03.385 05:03:40 -- common/autotest_common.sh@10 -- # set +x 00:08:03.385 ************************************ 00:08:03.385 START TEST accel_decomp_mcore 00:08:03.385 ************************************ 00:08:03.385 05:03:40 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:03.385 05:03:40 -- accel/accel.sh@16 -- # local accel_opc 00:08:03.385 05:03:40 -- accel/accel.sh@17 -- # local accel_module 00:08:03.385 05:03:40 -- accel/accel.sh@19 -- # IFS=: 00:08:03.385 05:03:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:03.385 05:03:40 -- accel/accel.sh@19 -- # read -r var val 00:08:03.385 05:03:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:03.385 05:03:40 -- accel/accel.sh@12 -- # build_accel_config 00:08:03.385 05:03:40 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:03.385 05:03:40 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:03.385 05:03:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:03.385 05:03:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:03.385 05:03:40 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:03.385 05:03:40 -- accel/accel.sh@40 -- # local IFS=, 00:08:03.385 05:03:40 -- accel/accel.sh@41 -- # jq -r . 00:08:03.385 [2024-04-24 05:03:40.524446] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:08:03.385 [2024-04-24 05:03:40.524511] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1777704 ] 00:08:03.385 EAL: No free 2048 kB hugepages reported on node 1 00:08:03.385 [2024-04-24 05:03:40.560899] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:03.385 [2024-04-24 05:03:40.592142] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:03.644 [2024-04-24 05:03:40.684172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.644 [2024-04-24 05:03:40.684230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:03.644 [2024-04-24 05:03:40.684295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:03.644 [2024-04-24 05:03:40.684298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.644 05:03:40 -- accel/accel.sh@20 -- # val= 00:08:03.644 05:03:40 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.644 05:03:40 -- accel/accel.sh@19 -- # IFS=: 00:08:03.644 05:03:40 -- accel/accel.sh@19 -- # read -r var val 00:08:03.644 05:03:40 -- accel/accel.sh@20 -- # val= 00:08:03.644 05:03:40 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.644 05:03:40 -- accel/accel.sh@19 -- # IFS=: 00:08:03.644 05:03:40 -- accel/accel.sh@19 -- # read -r var val 00:08:03.644 05:03:40 -- accel/accel.sh@20 -- # val= 00:08:03.644 05:03:40 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.644 05:03:40 -- accel/accel.sh@19 -- # IFS=: 00:08:03.644 05:03:40 -- accel/accel.sh@19 -- # read -r var val 00:08:03.644 05:03:40 -- accel/accel.sh@20 -- # val=0xf 00:08:03.644 05:03:40 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.644 05:03:40 -- accel/accel.sh@19 -- # IFS=: 00:08:03.644 05:03:40 -- accel/accel.sh@19 -- # read -r var val 00:08:03.644 05:03:40 -- accel/accel.sh@20 -- # val= 00:08:03.644 05:03:40 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.644 05:03:40 -- accel/accel.sh@19 -- # IFS=: 00:08:03.644 05:03:40 -- accel/accel.sh@19 -- # read -r var val 00:08:03.644 05:03:40 -- accel/accel.sh@20 -- # val= 00:08:03.644 05:03:40 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.644 05:03:40 -- accel/accel.sh@19 -- # IFS=: 00:08:03.644 05:03:40 -- accel/accel.sh@19 -- # read -r var val 00:08:03.644 05:03:40 -- accel/accel.sh@20 -- # val=decompress 
00:08:03.644 05:03:40 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.644 05:03:40 -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:03.644 05:03:40 -- accel/accel.sh@19 -- # IFS=: 00:08:03.644 05:03:40 -- accel/accel.sh@19 -- # read -r var val 00:08:03.644 05:03:40 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:03.644 05:03:40 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.644 05:03:40 -- accel/accel.sh@19 -- # IFS=: 00:08:03.644 05:03:40 -- accel/accel.sh@19 -- # read -r var val 00:08:03.644 05:03:40 -- accel/accel.sh@20 -- # val= 00:08:03.644 05:03:40 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.644 05:03:40 -- accel/accel.sh@19 -- # IFS=: 00:08:03.644 05:03:40 -- accel/accel.sh@19 -- # read -r var val 00:08:03.644 05:03:40 -- accel/accel.sh@20 -- # val=software 00:08:03.644 05:03:40 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.644 05:03:40 -- accel/accel.sh@22 -- # accel_module=software 00:08:03.644 05:03:40 -- accel/accel.sh@19 -- # IFS=: 00:08:03.644 05:03:40 -- accel/accel.sh@19 -- # read -r var val 00:08:03.644 05:03:40 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:03.644 05:03:40 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.644 05:03:40 -- accel/accel.sh@19 -- # IFS=: 00:08:03.644 05:03:40 -- accel/accel.sh@19 -- # read -r var val 00:08:03.644 05:03:40 -- accel/accel.sh@20 -- # val=32 00:08:03.644 05:03:40 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.644 05:03:40 -- accel/accel.sh@19 -- # IFS=: 00:08:03.644 05:03:40 -- accel/accel.sh@19 -- # read -r var val 00:08:03.644 05:03:40 -- accel/accel.sh@20 -- # val=32 00:08:03.644 05:03:40 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.644 05:03:40 -- accel/accel.sh@19 -- # IFS=: 00:08:03.644 05:03:40 -- accel/accel.sh@19 -- # read -r var val 00:08:03.644 05:03:40 -- accel/accel.sh@20 -- # val=1 00:08:03.644 05:03:40 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.644 05:03:40 -- accel/accel.sh@19 -- # IFS=: 00:08:03.644 
05:03:40 -- accel/accel.sh@19 -- # read -r var val 00:08:03.644 05:03:40 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:03.644 05:03:40 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.644 05:03:40 -- accel/accel.sh@19 -- # IFS=: 00:08:03.644 05:03:40 -- accel/accel.sh@19 -- # read -r var val 00:08:03.644 05:03:40 -- accel/accel.sh@20 -- # val=Yes 00:08:03.644 05:03:40 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.644 05:03:40 -- accel/accel.sh@19 -- # IFS=: 00:08:03.644 05:03:40 -- accel/accel.sh@19 -- # read -r var val 00:08:03.644 05:03:40 -- accel/accel.sh@20 -- # val= 00:08:03.644 05:03:40 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.644 05:03:40 -- accel/accel.sh@19 -- # IFS=: 00:08:03.644 05:03:40 -- accel/accel.sh@19 -- # read -r var val 00:08:03.644 05:03:40 -- accel/accel.sh@20 -- # val= 00:08:03.644 05:03:40 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.644 05:03:40 -- accel/accel.sh@19 -- # IFS=: 00:08:03.644 05:03:40 -- accel/accel.sh@19 -- # read -r var val 00:08:05.018 05:03:41 -- accel/accel.sh@20 -- # val= 00:08:05.018 05:03:41 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.018 05:03:41 -- accel/accel.sh@19 -- # IFS=: 00:08:05.018 05:03:41 -- accel/accel.sh@19 -- # read -r var val 00:08:05.018 05:03:41 -- accel/accel.sh@20 -- # val= 00:08:05.018 05:03:41 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.018 05:03:41 -- accel/accel.sh@19 -- # IFS=: 00:08:05.018 05:03:41 -- accel/accel.sh@19 -- # read -r var val 00:08:05.018 05:03:41 -- accel/accel.sh@20 -- # val= 00:08:05.018 05:03:41 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.018 05:03:41 -- accel/accel.sh@19 -- # IFS=: 00:08:05.018 05:03:41 -- accel/accel.sh@19 -- # read -r var val 00:08:05.018 05:03:41 -- accel/accel.sh@20 -- # val= 00:08:05.018 05:03:41 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.018 05:03:41 -- accel/accel.sh@19 -- # IFS=: 00:08:05.018 05:03:41 -- accel/accel.sh@19 -- # read -r var val 00:08:05.018 05:03:41 -- accel/accel.sh@20 -- # val= 
00:08:05.018 05:03:41 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.018 05:03:41 -- accel/accel.sh@19 -- # IFS=: 00:08:05.018 05:03:41 -- accel/accel.sh@19 -- # read -r var val 00:08:05.018 05:03:41 -- accel/accel.sh@20 -- # val= 00:08:05.018 05:03:41 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.018 05:03:41 -- accel/accel.sh@19 -- # IFS=: 00:08:05.018 05:03:41 -- accel/accel.sh@19 -- # read -r var val 00:08:05.018 05:03:41 -- accel/accel.sh@20 -- # val= 00:08:05.018 05:03:41 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.018 05:03:41 -- accel/accel.sh@19 -- # IFS=: 00:08:05.018 05:03:41 -- accel/accel.sh@19 -- # read -r var val 00:08:05.018 05:03:41 -- accel/accel.sh@20 -- # val= 00:08:05.018 05:03:41 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.018 05:03:41 -- accel/accel.sh@19 -- # IFS=: 00:08:05.018 05:03:41 -- accel/accel.sh@19 -- # read -r var val 00:08:05.018 05:03:41 -- accel/accel.sh@20 -- # val= 00:08:05.018 05:03:41 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.018 05:03:41 -- accel/accel.sh@19 -- # IFS=: 00:08:05.018 05:03:41 -- accel/accel.sh@19 -- # read -r var val 00:08:05.018 05:03:41 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:05.018 05:03:41 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:05.018 05:03:41 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:05.018 00:08:05.018 real 0m1.405s 00:08:05.018 user 0m4.675s 00:08:05.018 sys 0m0.149s 00:08:05.018 05:03:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:05.018 05:03:41 -- common/autotest_common.sh@10 -- # set +x 00:08:05.018 ************************************ 00:08:05.018 END TEST accel_decomp_mcore 00:08:05.018 ************************************ 00:08:05.018 05:03:41 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:05.018 05:03:41 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:05.018 
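The mcore run traced above passes `-m 0xf` to accel_perf, and the reactor log lines report cores 0, 1, 2 and 3 starting. As a hedged illustration of that relationship (this helper is hypothetical, not part of accel.sh): each set bit in the hex mask selects one core.

```shell
# cores_from_mask: illustrative helper, NOT from the SPDK scripts.
# Expands a hex core mask (as passed via -m) into the list of core
# indices it enables, matching the "Reactor started on core N" lines.
cores_from_mask() {
  local mask=$(( $1 )) bit cores=""
  for (( bit = 0; mask >> bit; bit++ )); do
    if (( (mask >> bit) & 1 )); then
      cores="${cores:+$cores }$bit"    # append this enabled core index
    fi
  done
  echo "$cores"
}

cores_from_mask 0xf   # prints: 0 1 2 3
```

With `-c 0x1` (as in the mthread tests later in this log), the same expansion yields only core 0, which is why those runs report a single reactor.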
05:03:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:05.018 05:03:41 -- common/autotest_common.sh@10 -- # set +x 00:08:05.018 ************************************ 00:08:05.018 START TEST accel_decomp_full_mcore 00:08:05.018 ************************************ 00:08:05.018 05:03:42 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:05.018 05:03:42 -- accel/accel.sh@16 -- # local accel_opc 00:08:05.018 05:03:42 -- accel/accel.sh@17 -- # local accel_module 00:08:05.018 05:03:42 -- accel/accel.sh@19 -- # IFS=: 00:08:05.018 05:03:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:05.018 05:03:42 -- accel/accel.sh@19 -- # read -r var val 00:08:05.018 05:03:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:05.018 05:03:42 -- accel/accel.sh@12 -- # build_accel_config 00:08:05.018 05:03:42 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:05.018 05:03:42 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:05.018 05:03:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:05.018 05:03:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:05.018 05:03:42 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:05.018 05:03:42 -- accel/accel.sh@40 -- # local IFS=, 00:08:05.018 05:03:42 -- accel/accel.sh@41 -- # jq -r . 00:08:05.018 [2024-04-24 05:03:42.049568] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 
00:08:05.018 [2024-04-24 05:03:42.049640] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1777978 ] 00:08:05.018 EAL: No free 2048 kB hugepages reported on node 1 00:08:05.018 [2024-04-24 05:03:42.085791] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:05.018 [2024-04-24 05:03:42.115915] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:05.018 [2024-04-24 05:03:42.209807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.018 [2024-04-24 05:03:42.209861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:05.018 [2024-04-24 05:03:42.209927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:05.018 [2024-04-24 05:03:42.209930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.018 05:03:42 -- accel/accel.sh@20 -- # val= 00:08:05.018 05:03:42 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.018 05:03:42 -- accel/accel.sh@19 -- # IFS=: 00:08:05.018 05:03:42 -- accel/accel.sh@19 -- # read -r var val 00:08:05.018 05:03:42 -- accel/accel.sh@20 -- # val= 00:08:05.018 05:03:42 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.018 05:03:42 -- accel/accel.sh@19 -- # IFS=: 00:08:05.018 05:03:42 -- accel/accel.sh@19 -- # read -r var val 00:08:05.018 05:03:42 -- accel/accel.sh@20 -- # val= 00:08:05.018 05:03:42 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.018 05:03:42 -- accel/accel.sh@19 -- # IFS=: 00:08:05.018 05:03:42 -- accel/accel.sh@19 -- # read -r var val 00:08:05.018 05:03:42 -- accel/accel.sh@20 -- # val=0xf 00:08:05.018 05:03:42 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.018 05:03:42 -- accel/accel.sh@19 -- # IFS=: 00:08:05.018 05:03:42 -- accel/accel.sh@19 -- # read -r var val 
00:08:05.018 05:03:42 -- accel/accel.sh@20 -- # val= 00:08:05.018 05:03:42 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.018 05:03:42 -- accel/accel.sh@19 -- # IFS=: 00:08:05.018 05:03:42 -- accel/accel.sh@19 -- # read -r var val 00:08:05.018 05:03:42 -- accel/accel.sh@20 -- # val= 00:08:05.018 05:03:42 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.018 05:03:42 -- accel/accel.sh@19 -- # IFS=: 00:08:05.018 05:03:42 -- accel/accel.sh@19 -- # read -r var val 00:08:05.018 05:03:42 -- accel/accel.sh@20 -- # val=decompress 00:08:05.018 05:03:42 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.018 05:03:42 -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:05.018 05:03:42 -- accel/accel.sh@19 -- # IFS=: 00:08:05.018 05:03:42 -- accel/accel.sh@19 -- # read -r var val 00:08:05.018 05:03:42 -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:05.018 05:03:42 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.018 05:03:42 -- accel/accel.sh@19 -- # IFS=: 00:08:05.018 05:03:42 -- accel/accel.sh@19 -- # read -r var val 00:08:05.018 05:03:42 -- accel/accel.sh@20 -- # val= 00:08:05.018 05:03:42 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.018 05:03:42 -- accel/accel.sh@19 -- # IFS=: 00:08:05.018 05:03:42 -- accel/accel.sh@19 -- # read -r var val 00:08:05.018 05:03:42 -- accel/accel.sh@20 -- # val=software 00:08:05.018 05:03:42 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.018 05:03:42 -- accel/accel.sh@22 -- # accel_module=software 00:08:05.018 05:03:42 -- accel/accel.sh@19 -- # IFS=: 00:08:05.018 05:03:42 -- accel/accel.sh@19 -- # read -r var val 00:08:05.018 05:03:42 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:05.018 05:03:42 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.018 05:03:42 -- accel/accel.sh@19 -- # IFS=: 00:08:05.018 05:03:42 -- accel/accel.sh@19 -- # read -r var val 00:08:05.018 05:03:42 -- accel/accel.sh@20 -- # val=32 00:08:05.018 05:03:42 -- accel/accel.sh@21 -- # case "$var" in 
00:08:05.018 05:03:42 -- accel/accel.sh@19 -- # IFS=: 00:08:05.018 05:03:42 -- accel/accel.sh@19 -- # read -r var val 00:08:05.018 05:03:42 -- accel/accel.sh@20 -- # val=32 00:08:05.018 05:03:42 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.018 05:03:42 -- accel/accel.sh@19 -- # IFS=: 00:08:05.018 05:03:42 -- accel/accel.sh@19 -- # read -r var val 00:08:05.018 05:03:42 -- accel/accel.sh@20 -- # val=1 00:08:05.018 05:03:42 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.018 05:03:42 -- accel/accel.sh@19 -- # IFS=: 00:08:05.018 05:03:42 -- accel/accel.sh@19 -- # read -r var val 00:08:05.018 05:03:42 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:05.018 05:03:42 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.018 05:03:42 -- accel/accel.sh@19 -- # IFS=: 00:08:05.018 05:03:42 -- accel/accel.sh@19 -- # read -r var val 00:08:05.018 05:03:42 -- accel/accel.sh@20 -- # val=Yes 00:08:05.019 05:03:42 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.019 05:03:42 -- accel/accel.sh@19 -- # IFS=: 00:08:05.019 05:03:42 -- accel/accel.sh@19 -- # read -r var val 00:08:05.019 05:03:42 -- accel/accel.sh@20 -- # val= 00:08:05.019 05:03:42 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.019 05:03:42 -- accel/accel.sh@19 -- # IFS=: 00:08:05.019 05:03:42 -- accel/accel.sh@19 -- # read -r var val 00:08:05.019 05:03:42 -- accel/accel.sh@20 -- # val= 00:08:05.019 05:03:42 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.019 05:03:42 -- accel/accel.sh@19 -- # IFS=: 00:08:05.019 05:03:42 -- accel/accel.sh@19 -- # read -r var val 00:08:06.397 05:03:43 -- accel/accel.sh@20 -- # val= 00:08:06.397 05:03:43 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.397 05:03:43 -- accel/accel.sh@19 -- # IFS=: 00:08:06.397 05:03:43 -- accel/accel.sh@19 -- # read -r var val 00:08:06.397 05:03:43 -- accel/accel.sh@20 -- # val= 00:08:06.397 05:03:43 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.397 05:03:43 -- accel/accel.sh@19 -- # IFS=: 00:08:06.397 05:03:43 -- accel/accel.sh@19 -- # read -r 
var val 00:08:06.397 05:03:43 -- accel/accel.sh@20 -- # val= 00:08:06.397 05:03:43 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.397 05:03:43 -- accel/accel.sh@19 -- # IFS=: 00:08:06.397 05:03:43 -- accel/accel.sh@19 -- # read -r var val 00:08:06.397 05:03:43 -- accel/accel.sh@20 -- # val= 00:08:06.397 05:03:43 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.397 05:03:43 -- accel/accel.sh@19 -- # IFS=: 00:08:06.397 05:03:43 -- accel/accel.sh@19 -- # read -r var val 00:08:06.397 05:03:43 -- accel/accel.sh@20 -- # val= 00:08:06.397 05:03:43 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.397 05:03:43 -- accel/accel.sh@19 -- # IFS=: 00:08:06.397 05:03:43 -- accel/accel.sh@19 -- # read -r var val 00:08:06.397 05:03:43 -- accel/accel.sh@20 -- # val= 00:08:06.397 05:03:43 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.397 05:03:43 -- accel/accel.sh@19 -- # IFS=: 00:08:06.397 05:03:43 -- accel/accel.sh@19 -- # read -r var val 00:08:06.397 05:03:43 -- accel/accel.sh@20 -- # val= 00:08:06.397 05:03:43 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.397 05:03:43 -- accel/accel.sh@19 -- # IFS=: 00:08:06.397 05:03:43 -- accel/accel.sh@19 -- # read -r var val 00:08:06.397 05:03:43 -- accel/accel.sh@20 -- # val= 00:08:06.397 05:03:43 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.397 05:03:43 -- accel/accel.sh@19 -- # IFS=: 00:08:06.397 05:03:43 -- accel/accel.sh@19 -- # read -r var val 00:08:06.397 05:03:43 -- accel/accel.sh@20 -- # val= 00:08:06.397 05:03:43 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.397 05:03:43 -- accel/accel.sh@19 -- # IFS=: 00:08:06.397 05:03:43 -- accel/accel.sh@19 -- # read -r var val 00:08:06.397 05:03:43 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:06.397 05:03:43 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:06.397 05:03:43 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:06.397 00:08:06.397 real 0m1.434s 00:08:06.397 user 0m4.774s 00:08:06.397 sys 0m0.159s 00:08:06.397 05:03:43 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:08:06.397 05:03:43 -- common/autotest_common.sh@10 -- # set +x 00:08:06.397 ************************************ 00:08:06.397 END TEST accel_decomp_full_mcore 00:08:06.397 ************************************ 00:08:06.397 05:03:43 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:06.397 05:03:43 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:08:06.397 05:03:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:06.397 05:03:43 -- common/autotest_common.sh@10 -- # set +x 00:08:06.397 ************************************ 00:08:06.397 START TEST accel_decomp_mthread 00:08:06.397 ************************************ 00:08:06.397 05:03:43 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:06.397 05:03:43 -- accel/accel.sh@16 -- # local accel_opc 00:08:06.397 05:03:43 -- accel/accel.sh@17 -- # local accel_module 00:08:06.397 05:03:43 -- accel/accel.sh@19 -- # IFS=: 00:08:06.397 05:03:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:06.397 05:03:43 -- accel/accel.sh@19 -- # read -r var val 00:08:06.397 05:03:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:06.397 05:03:43 -- accel/accel.sh@12 -- # build_accel_config 00:08:06.397 05:03:43 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:06.397 05:03:43 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:06.397 05:03:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:06.397 05:03:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:06.397 05:03:43 -- accel/accel.sh@36 -- # [[ -n '' ]] 
00:08:06.397 05:03:43 -- accel/accel.sh@40 -- # local IFS=, 00:08:06.397 05:03:43 -- accel/accel.sh@41 -- # jq -r . 00:08:06.397 [2024-04-24 05:03:43.605747] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:08:06.397 [2024-04-24 05:03:43.605805] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1778154 ] 00:08:06.397 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.397 [2024-04-24 05:03:43.638201] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:06.656 [2024-04-24 05:03:43.668530] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.656 [2024-04-24 05:03:43.760263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.656 05:03:43 -- accel/accel.sh@20 -- # val= 00:08:06.656 05:03:43 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.656 05:03:43 -- accel/accel.sh@19 -- # IFS=: 00:08:06.656 05:03:43 -- accel/accel.sh@19 -- # read -r var val 00:08:06.656 05:03:43 -- accel/accel.sh@20 -- # val= 00:08:06.656 05:03:43 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.656 05:03:43 -- accel/accel.sh@19 -- # IFS=: 00:08:06.656 05:03:43 -- accel/accel.sh@19 -- # read -r var val 00:08:06.656 05:03:43 -- accel/accel.sh@20 -- # val= 00:08:06.656 05:03:43 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.656 05:03:43 -- accel/accel.sh@19 -- # IFS=: 00:08:06.656 05:03:43 -- accel/accel.sh@19 -- # read -r var val 00:08:06.656 05:03:43 -- accel/accel.sh@20 -- # val=0x1 00:08:06.656 05:03:43 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.656 05:03:43 -- accel/accel.sh@19 -- # IFS=: 00:08:06.656 05:03:43 -- accel/accel.sh@19 -- # read -r var val 00:08:06.656 05:03:43 -- accel/accel.sh@20 -- # val= 00:08:06.656 05:03:43 -- 
accel/accel.sh@21 -- # case "$var" in 00:08:06.656 05:03:43 -- accel/accel.sh@19 -- # IFS=: 00:08:06.656 05:03:43 -- accel/accel.sh@19 -- # read -r var val 00:08:06.656 05:03:43 -- accel/accel.sh@20 -- # val= 00:08:06.656 05:03:43 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.656 05:03:43 -- accel/accel.sh@19 -- # IFS=: 00:08:06.656 05:03:43 -- accel/accel.sh@19 -- # read -r var val 00:08:06.656 05:03:43 -- accel/accel.sh@20 -- # val=decompress 00:08:06.656 05:03:43 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.656 05:03:43 -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:06.656 05:03:43 -- accel/accel.sh@19 -- # IFS=: 00:08:06.656 05:03:43 -- accel/accel.sh@19 -- # read -r var val 00:08:06.656 05:03:43 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:06.656 05:03:43 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.656 05:03:43 -- accel/accel.sh@19 -- # IFS=: 00:08:06.656 05:03:43 -- accel/accel.sh@19 -- # read -r var val 00:08:06.656 05:03:43 -- accel/accel.sh@20 -- # val= 00:08:06.656 05:03:43 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.656 05:03:43 -- accel/accel.sh@19 -- # IFS=: 00:08:06.656 05:03:43 -- accel/accel.sh@19 -- # read -r var val 00:08:06.656 05:03:43 -- accel/accel.sh@20 -- # val=software 00:08:06.656 05:03:43 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.656 05:03:43 -- accel/accel.sh@22 -- # accel_module=software 00:08:06.656 05:03:43 -- accel/accel.sh@19 -- # IFS=: 00:08:06.656 05:03:43 -- accel/accel.sh@19 -- # read -r var val 00:08:06.656 05:03:43 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:06.656 05:03:43 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.656 05:03:43 -- accel/accel.sh@19 -- # IFS=: 00:08:06.656 05:03:43 -- accel/accel.sh@19 -- # read -r var val 00:08:06.656 05:03:43 -- accel/accel.sh@20 -- # val=32 00:08:06.656 05:03:43 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.656 05:03:43 -- accel/accel.sh@19 -- # IFS=: 00:08:06.656 05:03:43 -- 
accel/accel.sh@19 -- # read -r var val 00:08:06.656 05:03:43 -- accel/accel.sh@20 -- # val=32 00:08:06.656 05:03:43 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.656 05:03:43 -- accel/accel.sh@19 -- # IFS=: 00:08:06.656 05:03:43 -- accel/accel.sh@19 -- # read -r var val 00:08:06.656 05:03:43 -- accel/accel.sh@20 -- # val=2 00:08:06.656 05:03:43 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.656 05:03:43 -- accel/accel.sh@19 -- # IFS=: 00:08:06.656 05:03:43 -- accel/accel.sh@19 -- # read -r var val 00:08:06.656 05:03:43 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:06.656 05:03:43 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.656 05:03:43 -- accel/accel.sh@19 -- # IFS=: 00:08:06.656 05:03:43 -- accel/accel.sh@19 -- # read -r var val 00:08:06.656 05:03:43 -- accel/accel.sh@20 -- # val=Yes 00:08:06.656 05:03:43 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.656 05:03:43 -- accel/accel.sh@19 -- # IFS=: 00:08:06.656 05:03:43 -- accel/accel.sh@19 -- # read -r var val 00:08:06.656 05:03:43 -- accel/accel.sh@20 -- # val= 00:08:06.656 05:03:43 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.656 05:03:43 -- accel/accel.sh@19 -- # IFS=: 00:08:06.656 05:03:43 -- accel/accel.sh@19 -- # read -r var val 00:08:06.656 05:03:43 -- accel/accel.sh@20 -- # val= 00:08:06.656 05:03:43 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.656 05:03:43 -- accel/accel.sh@19 -- # IFS=: 00:08:06.656 05:03:43 -- accel/accel.sh@19 -- # read -r var val 00:08:08.030 05:03:44 -- accel/accel.sh@20 -- # val= 00:08:08.030 05:03:44 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.030 05:03:44 -- accel/accel.sh@19 -- # IFS=: 00:08:08.030 05:03:44 -- accel/accel.sh@19 -- # read -r var val 00:08:08.030 05:03:44 -- accel/accel.sh@20 -- # val= 00:08:08.030 05:03:44 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.030 05:03:44 -- accel/accel.sh@19 -- # IFS=: 00:08:08.030 05:03:44 -- accel/accel.sh@19 -- # read -r var val 00:08:08.030 05:03:44 -- accel/accel.sh@20 -- # val= 00:08:08.030 
05:03:44 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.030 05:03:44 -- accel/accel.sh@19 -- # IFS=: 00:08:08.030 05:03:44 -- accel/accel.sh@19 -- # read -r var val 00:08:08.030 05:03:44 -- accel/accel.sh@20 -- # val= 00:08:08.030 05:03:44 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.030 05:03:44 -- accel/accel.sh@19 -- # IFS=: 00:08:08.030 05:03:44 -- accel/accel.sh@19 -- # read -r var val 00:08:08.030 05:03:44 -- accel/accel.sh@20 -- # val= 00:08:08.030 05:03:44 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.030 05:03:44 -- accel/accel.sh@19 -- # IFS=: 00:08:08.030 05:03:44 -- accel/accel.sh@19 -- # read -r var val 00:08:08.030 05:03:44 -- accel/accel.sh@20 -- # val= 00:08:08.030 05:03:44 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.030 05:03:44 -- accel/accel.sh@19 -- # IFS=: 00:08:08.030 05:03:44 -- accel/accel.sh@19 -- # read -r var val 00:08:08.030 05:03:44 -- accel/accel.sh@20 -- # val= 00:08:08.030 05:03:44 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.030 05:03:44 -- accel/accel.sh@19 -- # IFS=: 00:08:08.030 05:03:44 -- accel/accel.sh@19 -- # read -r var val 00:08:08.030 05:03:44 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:08.030 05:03:44 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:08.030 05:03:44 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:08.030 00:08:08.030 real 0m1.410s 00:08:08.030 user 0m1.269s 00:08:08.030 sys 0m0.144s 00:08:08.030 05:03:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:08.030 05:03:44 -- common/autotest_common.sh@10 -- # set +x 00:08:08.030 ************************************ 00:08:08.030 END TEST accel_decomp_mthread 00:08:08.030 ************************************ 00:08:08.030 05:03:45 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:08.030 05:03:45 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:08.030 05:03:45 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:08:08.030 05:03:45 -- common/autotest_common.sh@10 -- # set +x 00:08:08.031 ************************************ 00:08:08.031 START TEST accel_deomp_full_mthread 00:08:08.031 ************************************ 00:08:08.031 05:03:45 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:08.031 05:03:45 -- accel/accel.sh@16 -- # local accel_opc 00:08:08.031 05:03:45 -- accel/accel.sh@17 -- # local accel_module 00:08:08.031 05:03:45 -- accel/accel.sh@19 -- # IFS=: 00:08:08.031 05:03:45 -- accel/accel.sh@19 -- # read -r var val 00:08:08.031 05:03:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:08.031 05:03:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:08.031 05:03:45 -- accel/accel.sh@12 -- # build_accel_config 00:08:08.031 05:03:45 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:08.031 05:03:45 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:08.031 05:03:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:08.031 05:03:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:08.031 05:03:45 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:08.031 05:03:45 -- accel/accel.sh@40 -- # local IFS=, 00:08:08.031 05:03:45 -- accel/accel.sh@41 -- # jq -r . 00:08:08.031 [2024-04-24 05:03:45.131454] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 
00:08:08.031 [2024-04-24 05:03:45.131522] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1778324 ] 00:08:08.031 EAL: No free 2048 kB hugepages reported on node 1 00:08:08.031 [2024-04-24 05:03:45.165647] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:08.031 [2024-04-24 05:03:45.196680] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.031 [2024-04-24 05:03:45.285112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.290 05:03:45 -- accel/accel.sh@20 -- # val= 00:08:08.290 05:03:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.290 05:03:45 -- accel/accel.sh@19 -- # IFS=: 00:08:08.290 05:03:45 -- accel/accel.sh@19 -- # read -r var val 00:08:08.290 05:03:45 -- accel/accel.sh@20 -- # val= 00:08:08.290 05:03:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.290 05:03:45 -- accel/accel.sh@19 -- # IFS=: 00:08:08.290 05:03:45 -- accel/accel.sh@19 -- # read -r var val 00:08:08.290 05:03:45 -- accel/accel.sh@20 -- # val= 00:08:08.290 05:03:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.290 05:03:45 -- accel/accel.sh@19 -- # IFS=: 00:08:08.290 05:03:45 -- accel/accel.sh@19 -- # read -r var val 00:08:08.290 05:03:45 -- accel/accel.sh@20 -- # val=0x1 00:08:08.290 05:03:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.290 05:03:45 -- accel/accel.sh@19 -- # IFS=: 00:08:08.290 05:03:45 -- accel/accel.sh@19 -- # read -r var val 00:08:08.290 05:03:45 -- accel/accel.sh@20 -- # val= 00:08:08.290 05:03:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.290 05:03:45 -- accel/accel.sh@19 -- # IFS=: 00:08:08.290 05:03:45 -- accel/accel.sh@19 -- # read -r var val 00:08:08.290 05:03:45 -- accel/accel.sh@20 -- # val= 00:08:08.290 05:03:45 -- 
accel/accel.sh@21 -- # case "$var" in 00:08:08.290 05:03:45 -- accel/accel.sh@19 -- # IFS=: 00:08:08.290 05:03:45 -- accel/accel.sh@19 -- # read -r var val 00:08:08.290 05:03:45 -- accel/accel.sh@20 -- # val=decompress 00:08:08.290 05:03:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.290 05:03:45 -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:08.290 05:03:45 -- accel/accel.sh@19 -- # IFS=: 00:08:08.290 05:03:45 -- accel/accel.sh@19 -- # read -r var val 00:08:08.290 05:03:45 -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:08.290 05:03:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.290 05:03:45 -- accel/accel.sh@19 -- # IFS=: 00:08:08.290 05:03:45 -- accel/accel.sh@19 -- # read -r var val 00:08:08.290 05:03:45 -- accel/accel.sh@20 -- # val= 00:08:08.290 05:03:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.290 05:03:45 -- accel/accel.sh@19 -- # IFS=: 00:08:08.290 05:03:45 -- accel/accel.sh@19 -- # read -r var val 00:08:08.290 05:03:45 -- accel/accel.sh@20 -- # val=software 00:08:08.290 05:03:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.290 05:03:45 -- accel/accel.sh@22 -- # accel_module=software 00:08:08.290 05:03:45 -- accel/accel.sh@19 -- # IFS=: 00:08:08.290 05:03:45 -- accel/accel.sh@19 -- # read -r var val 00:08:08.290 05:03:45 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:08.290 05:03:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.290 05:03:45 -- accel/accel.sh@19 -- # IFS=: 00:08:08.290 05:03:45 -- accel/accel.sh@19 -- # read -r var val 00:08:08.290 05:03:45 -- accel/accel.sh@20 -- # val=32 00:08:08.290 05:03:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.290 05:03:45 -- accel/accel.sh@19 -- # IFS=: 00:08:08.290 05:03:45 -- accel/accel.sh@19 -- # read -r var val 00:08:08.290 05:03:45 -- accel/accel.sh@20 -- # val=32 00:08:08.290 05:03:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.290 05:03:45 -- accel/accel.sh@19 -- # IFS=: 00:08:08.290 05:03:45 -- 
accel/accel.sh@19 -- # read -r var val 00:08:08.290 05:03:45 -- accel/accel.sh@20 -- # val=2 00:08:08.290 05:03:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.290 05:03:45 -- accel/accel.sh@19 -- # IFS=: 00:08:08.290 05:03:45 -- accel/accel.sh@19 -- # read -r var val 00:08:08.290 05:03:45 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:08.290 05:03:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.290 05:03:45 -- accel/accel.sh@19 -- # IFS=: 00:08:08.290 05:03:45 -- accel/accel.sh@19 -- # read -r var val 00:08:08.290 05:03:45 -- accel/accel.sh@20 -- # val=Yes 00:08:08.290 05:03:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.290 05:03:45 -- accel/accel.sh@19 -- # IFS=: 00:08:08.290 05:03:45 -- accel/accel.sh@19 -- # read -r var val 00:08:08.290 05:03:45 -- accel/accel.sh@20 -- # val= 00:08:08.290 05:03:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.290 05:03:45 -- accel/accel.sh@19 -- # IFS=: 00:08:08.290 05:03:45 -- accel/accel.sh@19 -- # read -r var val 00:08:08.290 05:03:45 -- accel/accel.sh@20 -- # val= 00:08:08.290 05:03:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.290 05:03:45 -- accel/accel.sh@19 -- # IFS=: 00:08:08.290 05:03:45 -- accel/accel.sh@19 -- # read -r var val 00:08:09.699 05:03:46 -- accel/accel.sh@20 -- # val= 00:08:09.699 05:03:46 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.699 05:03:46 -- accel/accel.sh@19 -- # IFS=: 00:08:09.699 05:03:46 -- accel/accel.sh@19 -- # read -r var val 00:08:09.699 05:03:46 -- accel/accel.sh@20 -- # val= 00:08:09.699 05:03:46 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.699 05:03:46 -- accel/accel.sh@19 -- # IFS=: 00:08:09.699 05:03:46 -- accel/accel.sh@19 -- # read -r var val 00:08:09.699 05:03:46 -- accel/accel.sh@20 -- # val= 00:08:09.699 05:03:46 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.699 05:03:46 -- accel/accel.sh@19 -- # IFS=: 00:08:09.699 05:03:46 -- accel/accel.sh@19 -- # read -r var val 00:08:09.699 05:03:46 -- accel/accel.sh@20 -- # val= 00:08:09.699 
05:03:46 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.699 05:03:46 -- accel/accel.sh@19 -- # IFS=: 00:08:09.699 05:03:46 -- accel/accel.sh@19 -- # read -r var val 00:08:09.699 05:03:46 -- accel/accel.sh@20 -- # val= 00:08:09.699 05:03:46 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.699 05:03:46 -- accel/accel.sh@19 -- # IFS=: 00:08:09.699 05:03:46 -- accel/accel.sh@19 -- # read -r var val 00:08:09.699 05:03:46 -- accel/accel.sh@20 -- # val= 00:08:09.699 05:03:46 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.699 05:03:46 -- accel/accel.sh@19 -- # IFS=: 00:08:09.699 05:03:46 -- accel/accel.sh@19 -- # read -r var val 00:08:09.699 05:03:46 -- accel/accel.sh@20 -- # val= 00:08:09.699 05:03:46 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.699 05:03:46 -- accel/accel.sh@19 -- # IFS=: 00:08:09.699 05:03:46 -- accel/accel.sh@19 -- # read -r var val 00:08:09.699 05:03:46 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:09.700 05:03:46 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:09.700 05:03:46 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:09.700 00:08:09.700 real 0m1.431s 00:08:09.700 user 0m1.287s 00:08:09.700 sys 0m0.146s 00:08:09.700 05:03:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:09.700 05:03:46 -- common/autotest_common.sh@10 -- # set +x 00:08:09.700 ************************************ 00:08:09.700 END TEST accel_deomp_full_mthread 00:08:09.700 ************************************ 00:08:09.700 05:03:46 -- accel/accel.sh@124 -- # [[ n == y ]] 00:08:09.700 05:03:46 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:09.700 05:03:46 -- accel/accel.sh@137 -- # build_accel_config 00:08:09.700 05:03:46 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:09.700 05:03:46 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:09.700 05:03:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:09.700 
05:03:46 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:09.700 05:03:46 -- common/autotest_common.sh@10 -- # set +x 00:08:09.700 05:03:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:09.700 05:03:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:09.700 05:03:46 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:09.700 05:03:46 -- accel/accel.sh@40 -- # local IFS=, 00:08:09.700 05:03:46 -- accel/accel.sh@41 -- # jq -r . 00:08:09.700 ************************************ 00:08:09.700 START TEST accel_dif_functional_tests 00:08:09.700 ************************************ 00:08:09.700 05:03:46 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:09.700 [2024-04-24 05:03:46.698983] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:08:09.700 [2024-04-24 05:03:46.699057] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1778604 ] 00:08:09.700 EAL: No free 2048 kB hugepages reported on node 1 00:08:09.700 [2024-04-24 05:03:46.735620] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:09.700 [2024-04-24 05:03:46.765851] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:09.700 [2024-04-24 05:03:46.859865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.700 [2024-04-24 05:03:46.859920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:09.700 [2024-04-24 05:03:46.859923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.700 00:08:09.700 00:08:09.700 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.700 http://cunit.sourceforge.net/ 00:08:09.700 00:08:09.700 00:08:09.700 Suite: accel_dif 00:08:09.700 Test: verify: DIF generated, GUARD check ...passed 00:08:09.700 Test: verify: DIF generated, APPTAG check ...passed 00:08:09.700 Test: verify: DIF generated, REFTAG check ...passed 00:08:09.700 Test: verify: DIF not generated, GUARD check ...[2024-04-24 05:03:46.953116] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:09.700 [2024-04-24 05:03:46.953194] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:09.700 passed 00:08:09.700 Test: verify: DIF not generated, APPTAG check ...[2024-04-24 05:03:46.953232] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:09.700 [2024-04-24 05:03:46.953258] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:09.700 passed 00:08:09.700 Test: verify: DIF not generated, REFTAG check ...[2024-04-24 05:03:46.953289] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:09.700 [2024-04-24 05:03:46.953314] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:09.700 passed 00:08:09.700 Test: verify: APPTAG correct, APPTAG check ...passed 00:08:09.700 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-24 05:03:46.953373] 
dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:09.700 passed 00:08:09.700 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:08:09.700 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:09.700 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:08:09.700 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-24 05:03:46.953507] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:09.700 passed 00:08:09.700 Test: generate copy: DIF generated, GUARD check ...passed 00:08:09.700 Test: generate copy: DIF generated, APTTAG check ...passed 00:08:09.700 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:09.700 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:09.700 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:09.700 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:09.700 Test: generate copy: iovecs-len validate ...[2024-04-24 05:03:46.953758] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:08:09.700 passed 00:08:09.700 Test: generate copy: buffer alignment validate ...passed 00:08:09.700 00:08:09.700 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.700 suites 1 1 n/a 0 0 00:08:09.700 tests 20 20 20 0 0 00:08:09.700 asserts 204 204 204 0 n/a 00:08:09.700 00:08:09.700 Elapsed time = 0.002 seconds 00:08:09.958 00:08:09.958 real 0m0.510s 00:08:09.958 user 0m0.747s 00:08:09.958 sys 0m0.184s 00:08:09.958 05:03:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:09.958 05:03:47 -- common/autotest_common.sh@10 -- # set +x 00:08:09.958 ************************************ 00:08:09.958 END TEST accel_dif_functional_tests 00:08:09.958 ************************************ 00:08:09.958 00:08:09.958 real 0m33.612s 00:08:09.958 user 0m35.819s 00:08:09.958 sys 0m5.482s 00:08:09.958 05:03:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:09.958 05:03:47 -- common/autotest_common.sh@10 -- # set +x 00:08:09.958 ************************************ 00:08:09.958 END TEST accel 00:08:09.958 ************************************ 00:08:09.958 05:03:47 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:09.958 05:03:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:09.958 05:03:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:09.958 05:03:47 -- common/autotest_common.sh@10 -- # set +x 00:08:10.217 ************************************ 00:08:10.217 START TEST accel_rpc 00:08:10.217 ************************************ 00:08:10.217 05:03:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:10.217 * Looking for test storage... 
00:08:10.217 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:08:10.217 05:03:47 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:10.217 05:03:47 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1778684 00:08:10.217 05:03:47 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:10.217 05:03:47 -- accel/accel_rpc.sh@15 -- # waitforlisten 1778684 00:08:10.217 05:03:47 -- common/autotest_common.sh@817 -- # '[' -z 1778684 ']' 00:08:10.217 05:03:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.217 05:03:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:10.217 05:03:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.217 05:03:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:10.217 05:03:47 -- common/autotest_common.sh@10 -- # set +x 00:08:10.217 [2024-04-24 05:03:47.409540] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:08:10.217 [2024-04-24 05:03:47.409621] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1778684 ] 00:08:10.217 EAL: No free 2048 kB hugepages reported on node 1 00:08:10.217 [2024-04-24 05:03:47.440054] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:10.217 [2024-04-24 05:03:47.470173] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.475 [2024-04-24 05:03:47.560589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.475 05:03:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:10.475 05:03:47 -- common/autotest_common.sh@850 -- # return 0 00:08:10.475 05:03:47 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:10.475 05:03:47 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:10.475 05:03:47 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:10.475 05:03:47 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:10.475 05:03:47 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:10.475 05:03:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:10.475 05:03:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:10.475 05:03:47 -- common/autotest_common.sh@10 -- # set +x 00:08:10.475 ************************************ 00:08:10.476 START TEST accel_assign_opcode 00:08:10.476 ************************************ 00:08:10.476 05:03:47 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:08:10.476 05:03:47 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:10.476 05:03:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.476 05:03:47 -- common/autotest_common.sh@10 -- # set +x 00:08:10.476 [2024-04-24 05:03:47.701418] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:10.476 05:03:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.476 05:03:47 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:10.476 05:03:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.476 05:03:47 -- common/autotest_common.sh@10 -- # set +x 00:08:10.476 [2024-04-24 05:03:47.709415] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to 
module software 00:08:10.476 05:03:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.476 05:03:47 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:10.476 05:03:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.476 05:03:47 -- common/autotest_common.sh@10 -- # set +x 00:08:10.734 05:03:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.734 05:03:47 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:10.734 05:03:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.734 05:03:47 -- common/autotest_common.sh@10 -- # set +x 00:08:10.734 05:03:47 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:10.734 05:03:47 -- accel/accel_rpc.sh@42 -- # grep software 00:08:10.734 05:03:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.734 software 00:08:10.734 00:08:10.734 real 0m0.301s 00:08:10.734 user 0m0.043s 00:08:10.734 sys 0m0.005s 00:08:10.734 05:03:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:10.734 05:03:47 -- common/autotest_common.sh@10 -- # set +x 00:08:10.734 ************************************ 00:08:10.734 END TEST accel_assign_opcode 00:08:10.734 ************************************ 00:08:10.992 05:03:48 -- accel/accel_rpc.sh@55 -- # killprocess 1778684 00:08:10.992 05:03:48 -- common/autotest_common.sh@936 -- # '[' -z 1778684 ']' 00:08:10.992 05:03:48 -- common/autotest_common.sh@940 -- # kill -0 1778684 00:08:10.992 05:03:48 -- common/autotest_common.sh@941 -- # uname 00:08:10.992 05:03:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:10.993 05:03:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1778684 00:08:10.993 05:03:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:10.993 05:03:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:10.993 05:03:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1778684' 00:08:10.993 killing process with pid 1778684 
00:08:10.993 05:03:48 -- common/autotest_common.sh@955 -- # kill 1778684 00:08:10.993 05:03:48 -- common/autotest_common.sh@960 -- # wait 1778684 00:08:11.251 00:08:11.251 real 0m1.154s 00:08:11.251 user 0m1.105s 00:08:11.251 sys 0m0.453s 00:08:11.251 05:03:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:11.251 05:03:48 -- common/autotest_common.sh@10 -- # set +x 00:08:11.251 ************************************ 00:08:11.251 END TEST accel_rpc 00:08:11.251 ************************************ 00:08:11.251 05:03:48 -- spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:11.251 05:03:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:11.251 05:03:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:11.251 05:03:48 -- common/autotest_common.sh@10 -- # set +x 00:08:11.510 ************************************ 00:08:11.510 START TEST app_cmdline 00:08:11.510 ************************************ 00:08:11.510 05:03:48 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:11.510 * Looking for test storage... 
00:08:11.510 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:11.510 05:03:48 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:11.510 05:03:48 -- app/cmdline.sh@17 -- # spdk_tgt_pid=1778904 00:08:11.510 05:03:48 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:11.510 05:03:48 -- app/cmdline.sh@18 -- # waitforlisten 1778904 00:08:11.510 05:03:48 -- common/autotest_common.sh@817 -- # '[' -z 1778904 ']' 00:08:11.510 05:03:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.510 05:03:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:11.510 05:03:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.510 05:03:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:11.510 05:03:48 -- common/autotest_common.sh@10 -- # set +x 00:08:11.510 [2024-04-24 05:03:48.695489] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:08:11.510 [2024-04-24 05:03:48.695583] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1778904 ] 00:08:11.510 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.510 [2024-04-24 05:03:48.728000] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:11.510 [2024-04-24 05:03:48.756460] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.769 [2024-04-24 05:03:48.844713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.027 05:03:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:12.027 05:03:49 -- common/autotest_common.sh@850 -- # return 0 00:08:12.027 05:03:49 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:12.286 { 00:08:12.286 "version": "SPDK v24.05-pre git sha1 3f2c8979187", 00:08:12.286 "fields": { 00:08:12.286 "major": 24, 00:08:12.286 "minor": 5, 00:08:12.286 "patch": 0, 00:08:12.286 "suffix": "-pre", 00:08:12.286 "commit": "3f2c8979187" 00:08:12.286 } 00:08:12.286 } 00:08:12.286 05:03:49 -- app/cmdline.sh@22 -- # expected_methods=() 00:08:12.286 05:03:49 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:12.286 05:03:49 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:12.286 05:03:49 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:12.286 05:03:49 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:12.286 05:03:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:12.286 05:03:49 -- common/autotest_common.sh@10 -- # set +x 00:08:12.286 05:03:49 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:12.286 05:03:49 -- app/cmdline.sh@26 -- # sort 00:08:12.286 05:03:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:12.286 05:03:49 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:12.286 05:03:49 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:12.286 05:03:49 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:12.286 05:03:49 -- common/autotest_common.sh@638 -- # local es=0 00:08:12.286 05:03:49 -- common/autotest_common.sh@640 -- # 
valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:12.286 05:03:49 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:12.286 05:03:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:12.286 05:03:49 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:12.286 05:03:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:12.286 05:03:49 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:12.286 05:03:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:12.286 05:03:49 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:12.286 05:03:49 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:12.286 05:03:49 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:12.544 request: 00:08:12.544 { 00:08:12.544 "method": "env_dpdk_get_mem_stats", 00:08:12.545 "req_id": 1 00:08:12.545 } 00:08:12.545 Got JSON-RPC error response 00:08:12.545 response: 00:08:12.545 { 00:08:12.545 "code": -32601, 00:08:12.545 "message": "Method not found" 00:08:12.545 } 00:08:12.545 05:03:49 -- common/autotest_common.sh@641 -- # es=1 00:08:12.545 05:03:49 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:12.545 05:03:49 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:12.545 05:03:49 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:12.545 05:03:49 -- app/cmdline.sh@1 -- # killprocess 1778904 00:08:12.545 05:03:49 -- common/autotest_common.sh@936 -- # '[' -z 1778904 ']' 00:08:12.545 05:03:49 -- common/autotest_common.sh@940 -- # kill -0 1778904 00:08:12.545 
05:03:49 -- common/autotest_common.sh@941 -- # uname 00:08:12.545 05:03:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:12.545 05:03:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1778904 00:08:12.545 05:03:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:12.545 05:03:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:12.545 05:03:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1778904' 00:08:12.545 killing process with pid 1778904 00:08:12.545 05:03:49 -- common/autotest_common.sh@955 -- # kill 1778904 00:08:12.545 05:03:49 -- common/autotest_common.sh@960 -- # wait 1778904 00:08:13.112 00:08:13.112 real 0m1.507s 00:08:13.112 user 0m1.860s 00:08:13.112 sys 0m0.473s 00:08:13.112 05:03:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:13.112 05:03:50 -- common/autotest_common.sh@10 -- # set +x 00:08:13.112 ************************************ 00:08:13.112 END TEST app_cmdline 00:08:13.112 ************************************ 00:08:13.112 05:03:50 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:13.112 05:03:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:13.112 05:03:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:13.112 05:03:50 -- common/autotest_common.sh@10 -- # set +x 00:08:13.112 ************************************ 00:08:13.112 START TEST version 00:08:13.112 ************************************ 00:08:13.112 05:03:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:13.112 * Looking for test storage... 
00:08:13.112 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:13.112 05:03:50 -- app/version.sh@17 -- # get_header_version major 00:08:13.112 05:03:50 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:13.112 05:03:50 -- app/version.sh@14 -- # cut -f2 00:08:13.112 05:03:50 -- app/version.sh@14 -- # tr -d '"' 00:08:13.112 05:03:50 -- app/version.sh@17 -- # major=24 00:08:13.112 05:03:50 -- app/version.sh@18 -- # get_header_version minor 00:08:13.112 05:03:50 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:13.112 05:03:50 -- app/version.sh@14 -- # cut -f2 00:08:13.112 05:03:50 -- app/version.sh@14 -- # tr -d '"' 00:08:13.112 05:03:50 -- app/version.sh@18 -- # minor=5 00:08:13.112 05:03:50 -- app/version.sh@19 -- # get_header_version patch 00:08:13.112 05:03:50 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:13.112 05:03:50 -- app/version.sh@14 -- # cut -f2 00:08:13.112 05:03:50 -- app/version.sh@14 -- # tr -d '"' 00:08:13.112 05:03:50 -- app/version.sh@19 -- # patch=0 00:08:13.112 05:03:50 -- app/version.sh@20 -- # get_header_version suffix 00:08:13.112 05:03:50 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:13.112 05:03:50 -- app/version.sh@14 -- # cut -f2 00:08:13.112 05:03:50 -- app/version.sh@14 -- # tr -d '"' 00:08:13.112 05:03:50 -- app/version.sh@20 -- # suffix=-pre 00:08:13.112 05:03:50 -- app/version.sh@22 -- # version=24.5 00:08:13.112 05:03:50 -- app/version.sh@25 -- # (( patch != 0 )) 00:08:13.112 05:03:50 -- app/version.sh@28 -- # version=24.5rc0 00:08:13.112 05:03:50 -- app/version.sh@30 -- # 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:13.112 05:03:50 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:13.112 05:03:50 -- app/version.sh@30 -- # py_version=24.5rc0 00:08:13.112 05:03:50 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:08:13.112 00:08:13.112 real 0m0.112s 00:08:13.112 user 0m0.058s 00:08:13.112 sys 0m0.076s 00:08:13.112 05:03:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:13.112 05:03:50 -- common/autotest_common.sh@10 -- # set +x 00:08:13.112 ************************************ 00:08:13.112 END TEST version 00:08:13.112 ************************************ 00:08:13.112 05:03:50 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:08:13.112 05:03:50 -- spdk/autotest.sh@194 -- # uname -s 00:08:13.112 05:03:50 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:08:13.112 05:03:50 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:13.112 05:03:50 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:13.112 05:03:50 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:08:13.112 05:03:50 -- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:08:13.112 05:03:50 -- spdk/autotest.sh@258 -- # timing_exit lib 00:08:13.112 05:03:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:13.112 05:03:50 -- common/autotest_common.sh@10 -- # set +x 00:08:13.371 05:03:50 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:08:13.371 05:03:50 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:08:13.371 05:03:50 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:08:13.371 05:03:50 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:08:13.371 05:03:50 -- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:08:13.371 05:03:50 -- spdk/autotest.sh@284 -- # '[' tcp = 
tcp ']' 00:08:13.371 05:03:50 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:13.371 05:03:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:13.371 05:03:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:13.371 05:03:50 -- common/autotest_common.sh@10 -- # set +x 00:08:13.371 ************************************ 00:08:13.371 START TEST nvmf_tcp 00:08:13.371 ************************************ 00:08:13.371 05:03:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:13.371 * Looking for test storage... 00:08:13.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:13.371 05:03:50 -- nvmf/nvmf.sh@10 -- # uname -s 00:08:13.371 05:03:50 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:13.371 05:03:50 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:13.371 05:03:50 -- nvmf/common.sh@7 -- # uname -s 00:08:13.371 05:03:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:13.371 05:03:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:13.371 05:03:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:13.371 05:03:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:13.371 05:03:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:13.371 05:03:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:13.371 05:03:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:13.371 05:03:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:13.371 05:03:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:13.371 05:03:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:13.371 05:03:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:13.371 05:03:50 -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:13.371 05:03:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:13.371 05:03:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:13.371 05:03:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:13.371 05:03:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:13.371 05:03:50 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:13.371 05:03:50 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:13.371 05:03:50 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:13.371 05:03:50 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:13.371 05:03:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.371 05:03:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.371 05:03:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.372 05:03:50 -- paths/export.sh@5 -- # export PATH 00:08:13.372 05:03:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.372 05:03:50 -- nvmf/common.sh@47 -- # : 0 00:08:13.372 05:03:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:13.372 05:03:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:13.372 05:03:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:13.372 05:03:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:13.372 05:03:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:13.372 05:03:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:13.372 05:03:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:13.372 05:03:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:13.372 05:03:50 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:13.372 05:03:50 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:13.372 05:03:50 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:13.372 05:03:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:13.372 05:03:50 -- common/autotest_common.sh@10 -- # set +x 00:08:13.372 05:03:50 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:08:13.372 05:03:50 -- nvmf/nvmf.sh@23 -- 
# run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:13.372 05:03:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:13.372 05:03:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:13.372 05:03:50 -- common/autotest_common.sh@10 -- # set +x 00:08:13.372 ************************************ 00:08:13.372 START TEST nvmf_example 00:08:13.372 ************************************ 00:08:13.372 05:03:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:13.630 * Looking for test storage... 00:08:13.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:13.630 05:03:50 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:13.630 05:03:50 -- nvmf/common.sh@7 -- # uname -s 00:08:13.630 05:03:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:13.630 05:03:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:13.630 05:03:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:13.630 05:03:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:13.630 05:03:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:13.630 05:03:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:13.630 05:03:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:13.630 05:03:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:13.630 05:03:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:13.630 05:03:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:13.630 05:03:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:13.630 05:03:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:13.630 05:03:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:08:13.630 05:03:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:13.630 05:03:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:13.630 05:03:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:13.630 05:03:50 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:13.630 05:03:50 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:13.630 05:03:50 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:13.630 05:03:50 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:13.630 05:03:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.630 05:03:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.630 05:03:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.630 05:03:50 -- paths/export.sh@5 -- # export PATH 00:08:13.630 05:03:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.631 05:03:50 -- nvmf/common.sh@47 -- # : 0 00:08:13.631 05:03:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:13.631 05:03:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:13.631 05:03:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:13.631 05:03:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:13.631 05:03:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:13.631 05:03:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:13.631 05:03:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:13.631 05:03:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:13.631 05:03:50 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:13.631 05:03:50 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:13.631 05:03:50 
-- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:13.631 05:03:50 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:13.631 05:03:50 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:13.631 05:03:50 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:13.631 05:03:50 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:13.631 05:03:50 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:13.631 05:03:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:13.631 05:03:50 -- common/autotest_common.sh@10 -- # set +x 00:08:13.631 05:03:50 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:13.631 05:03:50 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:13.631 05:03:50 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:13.631 05:03:50 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:13.631 05:03:50 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:13.631 05:03:50 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:13.631 05:03:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.631 05:03:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:13.631 05:03:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.631 05:03:50 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:13.631 05:03:50 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:13.631 05:03:50 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:13.631 05:03:50 -- common/autotest_common.sh@10 -- # set +x 00:08:15.531 05:03:52 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:15.531 05:03:52 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:15.531 05:03:52 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:15.531 05:03:52 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:15.531 05:03:52 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:15.531 05:03:52 -- nvmf/common.sh@293 -- # pci_drivers=() 
00:08:15.531 05:03:52 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:15.531 05:03:52 -- nvmf/common.sh@295 -- # net_devs=() 00:08:15.531 05:03:52 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:15.531 05:03:52 -- nvmf/common.sh@296 -- # e810=() 00:08:15.531 05:03:52 -- nvmf/common.sh@296 -- # local -ga e810 00:08:15.531 05:03:52 -- nvmf/common.sh@297 -- # x722=() 00:08:15.531 05:03:52 -- nvmf/common.sh@297 -- # local -ga x722 00:08:15.531 05:03:52 -- nvmf/common.sh@298 -- # mlx=() 00:08:15.531 05:03:52 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:15.531 05:03:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:15.531 05:03:52 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:15.531 05:03:52 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:15.531 05:03:52 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:15.531 05:03:52 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:15.531 05:03:52 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:15.531 05:03:52 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:15.531 05:03:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:15.531 05:03:52 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:15.531 05:03:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:15.531 05:03:52 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:15.531 05:03:52 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:15.531 05:03:52 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:15.531 05:03:52 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:15.531 05:03:52 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:15.531 05:03:52 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:15.531 05:03:52 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 
00:08:15.531 05:03:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:15.531 05:03:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:15.531 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:15.531 05:03:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:15.531 05:03:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:15.531 05:03:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.531 05:03:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.531 05:03:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:15.531 05:03:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:15.531 05:03:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:15.531 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:15.531 05:03:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:15.531 05:03:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:15.531 05:03:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.531 05:03:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.531 05:03:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:15.531 05:03:52 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:15.531 05:03:52 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:15.531 05:03:52 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:15.531 05:03:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:15.531 05:03:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.531 05:03:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:15.531 05:03:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.531 05:03:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:15.531 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:15.531 05:03:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.531 05:03:52 -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:08:15.531 05:03:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.531 05:03:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:15.531 05:03:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.531 05:03:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:15.531 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:15.531 05:03:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.531 05:03:52 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:15.531 05:03:52 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:15.531 05:03:52 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:15.531 05:03:52 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:15.531 05:03:52 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:15.531 05:03:52 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:15.531 05:03:52 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:15.531 05:03:52 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:15.531 05:03:52 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:15.531 05:03:52 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:15.531 05:03:52 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:15.531 05:03:52 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:15.531 05:03:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:15.531 05:03:52 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:15.531 05:03:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:15.531 05:03:52 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:15.531 05:03:52 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:15.531 05:03:52 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:15.531 05:03:52 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:15.531 05:03:52 -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:15.531 05:03:52 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:15.531 05:03:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:15.531 05:03:52 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:15.531 05:03:52 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:15.531 05:03:52 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:15.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:15.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:08:15.531 00:08:15.531 --- 10.0.0.2 ping statistics --- 00:08:15.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.531 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:08:15.531 05:03:52 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:15.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:15.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:08:15.531 00:08:15.531 --- 10.0.0.1 ping statistics --- 00:08:15.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.531 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:08:15.531 05:03:52 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:15.531 05:03:52 -- nvmf/common.sh@411 -- # return 0 00:08:15.531 05:03:52 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:15.531 05:03:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:15.531 05:03:52 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:15.531 05:03:52 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:15.531 05:03:52 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:15.531 05:03:52 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:15.531 05:03:52 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:15.531 05:03:52 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:15.531 05:03:52 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:15.531 05:03:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:15.531 05:03:52 -- common/autotest_common.sh@10 -- # set +x 00:08:15.531 05:03:52 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:08:15.531 05:03:52 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:08:15.531 05:03:52 -- target/nvmf_example.sh@34 -- # nvmfpid=1780943 00:08:15.531 05:03:52 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:15.531 05:03:52 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:15.531 05:03:52 -- target/nvmf_example.sh@36 -- # waitforlisten 1780943 00:08:15.531 05:03:52 -- common/autotest_common.sh@817 -- # '[' -z 1780943 ']' 00:08:15.531 05:03:52 
-- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.531 05:03:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:15.531 05:03:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.531 05:03:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:15.531 05:03:52 -- common/autotest_common.sh@10 -- # set +x 00:08:15.789 EAL: No free 2048 kB hugepages reported on node 1 00:08:16.719 05:03:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:16.719 05:03:53 -- common/autotest_common.sh@850 -- # return 0 00:08:16.719 05:03:53 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:16.719 05:03:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:16.719 05:03:53 -- common/autotest_common.sh@10 -- # set +x 00:08:16.719 05:03:53 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:16.719 05:03:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:16.719 05:03:53 -- common/autotest_common.sh@10 -- # set +x 00:08:16.719 05:03:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:16.719 05:03:53 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:16.719 05:03:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:16.719 05:03:53 -- common/autotest_common.sh@10 -- # set +x 00:08:16.719 05:03:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:16.719 05:03:53 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:16.719 05:03:53 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:16.719 05:03:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:16.719 05:03:53 -- common/autotest_common.sh@10 -- # set +x 00:08:16.719 05:03:53 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:16.719 05:03:53 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:16.719 05:03:53 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:16.719 05:03:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:16.719 05:03:53 -- common/autotest_common.sh@10 -- # set +x 00:08:16.719 05:03:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:16.719 05:03:53 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:16.719 05:03:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:16.719 05:03:53 -- common/autotest_common.sh@10 -- # set +x 00:08:16.719 05:03:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:16.719 05:03:53 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:16.719 05:03:53 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:16.719 EAL: No free 2048 kB hugepages reported on node 1 00:08:28.918 Initializing NVMe Controllers 00:08:28.918 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:28.918 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:28.918 Initialization complete. Launching workers. 
00:08:28.918 ======================================================== 00:08:28.918 Latency(us) 00:08:28.918 Device Information : IOPS MiB/s Average min max 00:08:28.918 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14351.96 56.06 4459.02 881.97 15970.04 00:08:28.918 ======================================================== 00:08:28.918 Total : 14351.96 56.06 4459.02 881.97 15970.04 00:08:28.918 00:08:28.918 05:04:03 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:28.918 05:04:03 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:28.918 05:04:03 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:28.918 05:04:03 -- nvmf/common.sh@117 -- # sync 00:08:28.918 05:04:03 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:28.918 05:04:03 -- nvmf/common.sh@120 -- # set +e 00:08:28.918 05:04:03 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:28.918 05:04:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:28.918 rmmod nvme_tcp 00:08:28.918 rmmod nvme_fabrics 00:08:28.918 rmmod nvme_keyring 00:08:28.918 05:04:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:28.918 05:04:04 -- nvmf/common.sh@124 -- # set -e 00:08:28.918 05:04:04 -- nvmf/common.sh@125 -- # return 0 00:08:28.918 05:04:04 -- nvmf/common.sh@478 -- # '[' -n 1780943 ']' 00:08:28.918 05:04:04 -- nvmf/common.sh@479 -- # killprocess 1780943 00:08:28.918 05:04:04 -- common/autotest_common.sh@936 -- # '[' -z 1780943 ']' 00:08:28.918 05:04:04 -- common/autotest_common.sh@940 -- # kill -0 1780943 00:08:28.918 05:04:04 -- common/autotest_common.sh@941 -- # uname 00:08:28.918 05:04:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:28.918 05:04:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1780943 00:08:28.918 05:04:04 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:08:28.918 05:04:04 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:08:28.918 05:04:04 -- common/autotest_common.sh@954 -- # echo 
'killing process with pid 1780943' 00:08:28.918 killing process with pid 1780943 00:08:28.918 05:04:04 -- common/autotest_common.sh@955 -- # kill 1780943 00:08:28.918 05:04:04 -- common/autotest_common.sh@960 -- # wait 1780943 00:08:28.918 nvmf threads initialize successfully 00:08:28.918 bdev subsystem init successfully 00:08:28.918 created a nvmf target service 00:08:28.918 create targets's poll groups done 00:08:28.918 all subsystems of target started 00:08:28.918 nvmf target is running 00:08:28.918 all subsystems of target stopped 00:08:28.918 destroy targets's poll groups done 00:08:28.918 destroyed the nvmf target service 00:08:28.918 bdev subsystem finish successfully 00:08:28.918 nvmf threads destroy successfully 00:08:28.918 05:04:04 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:28.918 05:04:04 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:28.918 05:04:04 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:28.918 05:04:04 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:28.918 05:04:04 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:28.918 05:04:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.918 05:04:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:28.918 05:04:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.194 05:04:06 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:29.194 05:04:06 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:29.194 05:04:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:29.194 05:04:06 -- common/autotest_common.sh@10 -- # set +x 00:08:29.194 00:08:29.194 real 0m15.714s 00:08:29.194 user 0m40.906s 00:08:29.194 sys 0m4.623s 00:08:29.194 05:04:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:29.194 05:04:06 -- common/autotest_common.sh@10 -- # set +x 00:08:29.194 ************************************ 00:08:29.194 END TEST nvmf_example 00:08:29.194 
************************************ 00:08:29.194 05:04:06 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:29.194 05:04:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:29.194 05:04:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:29.194 05:04:06 -- common/autotest_common.sh@10 -- # set +x 00:08:29.454 ************************************ 00:08:29.454 START TEST nvmf_filesystem 00:08:29.454 ************************************ 00:08:29.454 05:04:06 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:29.454 * Looking for test storage... 00:08:29.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:29.455 05:04:06 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:08:29.455 05:04:06 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:29.455 05:04:06 -- common/autotest_common.sh@34 -- # set -e 00:08:29.455 05:04:06 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:29.455 05:04:06 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:29.455 05:04:06 -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:08:29.455 05:04:06 -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:29.455 05:04:06 -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:08:29.455 05:04:06 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:29.455 05:04:06 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:29.455 05:04:06 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:29.455 05:04:06 -- common/build_config.sh@4 -- # 
CONFIG_HAVE_EXECINFO_H=y 00:08:29.455 05:04:06 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:29.455 05:04:06 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:29.455 05:04:06 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:29.455 05:04:06 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:29.455 05:04:06 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:29.455 05:04:06 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:29.455 05:04:06 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:29.455 05:04:06 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:29.455 05:04:06 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:29.455 05:04:06 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:29.455 05:04:06 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:29.455 05:04:06 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:29.455 05:04:06 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:29.455 05:04:06 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:29.455 05:04:06 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:29.455 05:04:06 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:29.455 05:04:06 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:29.455 05:04:06 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:29.455 05:04:06 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:29.455 05:04:06 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:29.455 05:04:06 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:29.455 05:04:06 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:29.455 05:04:06 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:29.455 05:04:06 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:29.455 05:04:06 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:29.455 05:04:06 -- 
common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:29.455 05:04:06 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:29.455 05:04:06 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:29.455 05:04:06 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:29.455 05:04:06 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:29.455 05:04:06 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:29.455 05:04:06 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:29.455 05:04:06 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:29.455 05:04:06 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:29.455 05:04:06 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:29.455 05:04:06 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:29.455 05:04:06 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:08:29.455 05:04:06 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:29.455 05:04:06 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:29.455 05:04:06 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:29.455 05:04:06 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:29.455 05:04:06 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:29.455 05:04:06 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:29.455 05:04:06 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:29.455 05:04:06 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:29.455 05:04:06 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:29.455 05:04:06 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:08:29.455 05:04:06 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:29.455 05:04:06 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:08:29.455 05:04:06 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:08:29.455 05:04:06 -- 
common/build_config.sh@55 -- # CONFIG_WERROR=y 00:08:29.455 05:04:06 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:08:29.455 05:04:06 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:08:29.455 05:04:06 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:08:29.455 05:04:06 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:08:29.455 05:04:06 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:08:29.455 05:04:06 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:08:29.455 05:04:06 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:29.455 05:04:06 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:08:29.455 05:04:06 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:08:29.455 05:04:06 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:08:29.455 05:04:06 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:08:29.455 05:04:06 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:08:29.455 05:04:06 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:29.455 05:04:06 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:08:29.455 05:04:06 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:08:29.455 05:04:06 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:08:29.455 05:04:06 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:08:29.455 05:04:06 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:08:29.455 05:04:06 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:08:29.455 05:04:06 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:08:29.455 05:04:06 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:08:29.455 05:04:06 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:08:29.455 05:04:06 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:08:29.455 05:04:06 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:08:29.455 05:04:06 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:29.455 05:04:06 -- 
common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:08:29.455 05:04:06 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:08:29.455 05:04:06 -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:29.455 05:04:06 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:29.455 05:04:06 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:29.455 05:04:06 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:29.455 05:04:06 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:29.455 05:04:06 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:29.455 05:04:06 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:29.455 05:04:06 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:29.455 05:04:06 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:29.455 05:04:06 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:29.455 05:04:06 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:29.455 05:04:06 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:29.455 05:04:06 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:29.455 05:04:06 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:29.455 05:04:06 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:08:29.455 05:04:06 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:29.455 #define SPDK_CONFIG_H 00:08:29.455 #define 
SPDK_CONFIG_APPS 1 00:08:29.455 #define SPDK_CONFIG_ARCH native 00:08:29.455 #undef SPDK_CONFIG_ASAN 00:08:29.455 #undef SPDK_CONFIG_AVAHI 00:08:29.455 #undef SPDK_CONFIG_CET 00:08:29.455 #define SPDK_CONFIG_COVERAGE 1 00:08:29.455 #define SPDK_CONFIG_CROSS_PREFIX 00:08:29.455 #undef SPDK_CONFIG_CRYPTO 00:08:29.455 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:29.455 #undef SPDK_CONFIG_CUSTOMOCF 00:08:29.455 #undef SPDK_CONFIG_DAOS 00:08:29.455 #define SPDK_CONFIG_DAOS_DIR 00:08:29.455 #define SPDK_CONFIG_DEBUG 1 00:08:29.455 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:29.455 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:29.455 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:08:29.455 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:29.455 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:29.455 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:29.455 #define SPDK_CONFIG_EXAMPLES 1 00:08:29.455 #undef SPDK_CONFIG_FC 00:08:29.455 #define SPDK_CONFIG_FC_PATH 00:08:29.455 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:29.455 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:29.455 #undef SPDK_CONFIG_FUSE 00:08:29.455 #undef SPDK_CONFIG_FUZZER 00:08:29.455 #define SPDK_CONFIG_FUZZER_LIB 00:08:29.455 #undef SPDK_CONFIG_GOLANG 00:08:29.455 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:29.455 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:29.455 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:29.455 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:08:29.455 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:29.455 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:29.455 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:29.455 #define SPDK_CONFIG_IDXD 1 00:08:29.455 #undef SPDK_CONFIG_IDXD_KERNEL 00:08:29.455 #undef SPDK_CONFIG_IPSEC_MB 00:08:29.455 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:29.455 #define SPDK_CONFIG_ISAL 1 00:08:29.455 #define 
SPDK_CONFIG_ISAL_CRYPTO 1 00:08:29.455 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:29.455 #define SPDK_CONFIG_LIBDIR 00:08:29.455 #undef SPDK_CONFIG_LTO 00:08:29.455 #define SPDK_CONFIG_MAX_LCORES 00:08:29.455 #define SPDK_CONFIG_NVME_CUSE 1 00:08:29.455 #undef SPDK_CONFIG_OCF 00:08:29.455 #define SPDK_CONFIG_OCF_PATH 00:08:29.455 #define SPDK_CONFIG_OPENSSL_PATH 00:08:29.455 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:29.455 #define SPDK_CONFIG_PGO_DIR 00:08:29.456 #undef SPDK_CONFIG_PGO_USE 00:08:29.456 #define SPDK_CONFIG_PREFIX /usr/local 00:08:29.456 #undef SPDK_CONFIG_RAID5F 00:08:29.456 #undef SPDK_CONFIG_RBD 00:08:29.456 #define SPDK_CONFIG_RDMA 1 00:08:29.456 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:29.456 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:29.456 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:29.456 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:29.456 #define SPDK_CONFIG_SHARED 1 00:08:29.456 #undef SPDK_CONFIG_SMA 00:08:29.456 #define SPDK_CONFIG_TESTS 1 00:08:29.456 #undef SPDK_CONFIG_TSAN 00:08:29.456 #define SPDK_CONFIG_UBLK 1 00:08:29.456 #define SPDK_CONFIG_UBSAN 1 00:08:29.456 #undef SPDK_CONFIG_UNIT_TESTS 00:08:29.456 #undef SPDK_CONFIG_URING 00:08:29.456 #define SPDK_CONFIG_URING_PATH 00:08:29.456 #undef SPDK_CONFIG_URING_ZNS 00:08:29.456 #undef SPDK_CONFIG_USDT 00:08:29.456 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:29.456 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:29.456 #define SPDK_CONFIG_VFIO_USER 1 00:08:29.456 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:29.456 #define SPDK_CONFIG_VHOST 1 00:08:29.456 #define SPDK_CONFIG_VIRTIO 1 00:08:29.456 #undef SPDK_CONFIG_VTUNE 00:08:29.456 #define SPDK_CONFIG_VTUNE_DIR 00:08:29.456 #define SPDK_CONFIG_WERROR 1 00:08:29.456 #define SPDK_CONFIG_WPDK_DIR 00:08:29.456 #undef SPDK_CONFIG_XNVME 00:08:29.456 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:29.456 05:04:06 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:29.456 05:04:06 -- 
common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:29.456 05:04:06 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.456 05:04:06 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.456 05:04:06 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.456 05:04:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.456 05:04:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.456 05:04:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.456 05:04:06 -- paths/export.sh@5 -- # export PATH 00:08:29.456 05:04:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.456 05:04:06 -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:29.456 05:04:06 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:29.456 05:04:06 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:29.456 05:04:06 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:29.456 05:04:06 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:29.456 05:04:06 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:29.456 05:04:06 -- pm/common@67 -- # TEST_TAG=N/A 00:08:29.456 05:04:06 
-- pm/common@68 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:08:29.456 05:04:06 -- pm/common@70 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:08:29.456 05:04:06 -- pm/common@71 -- # uname -s 00:08:29.456 05:04:06 -- pm/common@71 -- # PM_OS=Linux 00:08:29.456 05:04:06 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:29.456 05:04:06 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:08:29.456 05:04:06 -- pm/common@76 -- # [[ Linux == Linux ]] 00:08:29.456 05:04:06 -- pm/common@76 -- # [[ ............................... != QEMU ]] 00:08:29.456 05:04:06 -- pm/common@76 -- # [[ ! -e /.dockerenv ]] 00:08:29.456 05:04:06 -- pm/common@79 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:29.456 05:04:06 -- pm/common@80 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:29.456 05:04:06 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:08:29.456 05:04:06 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:08:29.456 05:04:06 -- pm/common@85 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:08:29.456 05:04:06 -- common/autotest_common.sh@57 -- # : 1 00:08:29.456 05:04:06 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:08:29.456 05:04:06 -- common/autotest_common.sh@61 -- # : 0 00:08:29.456 05:04:06 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:29.456 05:04:06 -- common/autotest_common.sh@63 -- # : 0 00:08:29.456 05:04:06 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:08:29.456 05:04:06 -- common/autotest_common.sh@65 -- # : 1 00:08:29.456 05:04:06 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:29.456 05:04:06 -- common/autotest_common.sh@67 -- # : 0 00:08:29.456 05:04:06 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:08:29.456 05:04:06 -- common/autotest_common.sh@69 -- # : 00:08:29.456 05:04:06 -- common/autotest_common.sh@70 -- # 
export SPDK_TEST_AUTOBUILD 00:08:29.456 05:04:06 -- common/autotest_common.sh@71 -- # : 0 00:08:29.456 05:04:06 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:08:29.456 05:04:06 -- common/autotest_common.sh@73 -- # : 0 00:08:29.456 05:04:06 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:08:29.456 05:04:06 -- common/autotest_common.sh@75 -- # : 0 00:08:29.456 05:04:06 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:08:29.456 05:04:06 -- common/autotest_common.sh@77 -- # : 0 00:08:29.456 05:04:06 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:29.456 05:04:06 -- common/autotest_common.sh@79 -- # : 0 00:08:29.456 05:04:06 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:08:29.456 05:04:06 -- common/autotest_common.sh@81 -- # : 0 00:08:29.456 05:04:06 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:08:29.456 05:04:06 -- common/autotest_common.sh@83 -- # : 0 00:08:29.456 05:04:06 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:08:29.456 05:04:06 -- common/autotest_common.sh@85 -- # : 1 00:08:29.456 05:04:06 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:08:29.456 05:04:06 -- common/autotest_common.sh@87 -- # : 0 00:08:29.456 05:04:06 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:08:29.456 05:04:06 -- common/autotest_common.sh@89 -- # : 0 00:08:29.456 05:04:06 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:08:29.456 05:04:06 -- common/autotest_common.sh@91 -- # : 1 00:08:29.456 05:04:06 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:08:29.456 05:04:06 -- common/autotest_common.sh@93 -- # : 1 00:08:29.456 05:04:06 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:08:29.456 05:04:06 -- common/autotest_common.sh@95 -- # : 0 00:08:29.456 05:04:06 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:29.456 05:04:06 -- 
common/autotest_common.sh@97 -- # : 0 00:08:29.456 05:04:06 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:08:29.456 05:04:06 -- common/autotest_common.sh@99 -- # : 0 00:08:29.456 05:04:06 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:08:29.456 05:04:06 -- common/autotest_common.sh@101 -- # : tcp 00:08:29.456 05:04:06 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:29.456 05:04:06 -- common/autotest_common.sh@103 -- # : 0 00:08:29.456 05:04:06 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:08:29.456 05:04:06 -- common/autotest_common.sh@105 -- # : 0 00:08:29.456 05:04:06 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:08:29.456 05:04:06 -- common/autotest_common.sh@107 -- # : 0 00:08:29.456 05:04:06 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:08:29.456 05:04:06 -- common/autotest_common.sh@109 -- # : 0 00:08:29.456 05:04:06 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:08:29.456 05:04:06 -- common/autotest_common.sh@111 -- # : 0 00:08:29.456 05:04:06 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:08:29.456 05:04:06 -- common/autotest_common.sh@113 -- # : 0 00:08:29.456 05:04:06 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:08:29.456 05:04:06 -- common/autotest_common.sh@115 -- # : 0 00:08:29.456 05:04:06 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:08:29.456 05:04:06 -- common/autotest_common.sh@117 -- # : 0 00:08:29.456 05:04:06 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:29.456 05:04:06 -- common/autotest_common.sh@119 -- # : 0 00:08:29.456 05:04:06 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:08:29.456 05:04:06 -- common/autotest_common.sh@121 -- # : 1 00:08:29.456 05:04:06 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:08:29.456 05:04:06 -- common/autotest_common.sh@123 -- # : 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:29.456 05:04:06 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:29.456 05:04:06 -- common/autotest_common.sh@125 -- # : 0 00:08:29.456 05:04:06 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:08:29.456 05:04:06 -- common/autotest_common.sh@127 -- # : 0 00:08:29.456 05:04:06 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:08:29.456 05:04:06 -- common/autotest_common.sh@129 -- # : 0 00:08:29.456 05:04:06 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:08:29.456 05:04:06 -- common/autotest_common.sh@131 -- # : 0 00:08:29.456 05:04:06 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:08:29.456 05:04:06 -- common/autotest_common.sh@133 -- # : 0 00:08:29.456 05:04:06 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:08:29.456 05:04:06 -- common/autotest_common.sh@135 -- # : 0 00:08:29.456 05:04:06 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:08:29.457 05:04:06 -- common/autotest_common.sh@137 -- # : main 00:08:29.457 05:04:06 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:08:29.457 05:04:06 -- common/autotest_common.sh@139 -- # : true 00:08:29.457 05:04:06 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:08:29.457 05:04:06 -- common/autotest_common.sh@141 -- # : 0 00:08:29.457 05:04:06 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:08:29.457 05:04:06 -- common/autotest_common.sh@143 -- # : 0 00:08:29.457 05:04:06 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:08:29.457 05:04:06 -- common/autotest_common.sh@145 -- # : 0 00:08:29.457 05:04:06 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:08:29.457 05:04:06 -- common/autotest_common.sh@147 -- # : 0 00:08:29.457 05:04:06 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:08:29.457 05:04:06 -- common/autotest_common.sh@149 -- # : 0 00:08:29.457 
05:04:06 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:08:29.457 05:04:06 -- common/autotest_common.sh@151 -- # : 0 00:08:29.457 05:04:06 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:08:29.457 05:04:06 -- common/autotest_common.sh@153 -- # : e810 00:08:29.457 05:04:06 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:08:29.457 05:04:06 -- common/autotest_common.sh@155 -- # : 0 00:08:29.457 05:04:06 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:08:29.457 05:04:06 -- common/autotest_common.sh@157 -- # : 0 00:08:29.457 05:04:06 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:08:29.457 05:04:06 -- common/autotest_common.sh@159 -- # : 0 00:08:29.457 05:04:06 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:08:29.457 05:04:06 -- common/autotest_common.sh@161 -- # : 0 00:08:29.457 05:04:06 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:08:29.457 05:04:06 -- common/autotest_common.sh@163 -- # : 0 00:08:29.457 05:04:06 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:08:29.457 05:04:06 -- common/autotest_common.sh@166 -- # : 00:08:29.457 05:04:06 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:08:29.457 05:04:06 -- common/autotest_common.sh@168 -- # : 0 00:08:29.457 05:04:06 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:08:29.457 05:04:06 -- common/autotest_common.sh@170 -- # : 0 00:08:29.457 05:04:06 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:29.457 05:04:06 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:29.457 05:04:06 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:29.457 05:04:06 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 
00:08:29.457 05:04:06 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:29.457 05:04:06 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:29.457 05:04:06 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:29.457 05:04:06 -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:29.457 05:04:06 -- common/autotest_common.sh@177 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:29.457 05:04:06 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:29.457 05:04:06 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:29.457 05:04:06 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:29.457 05:04:06 -- common/autotest_common.sh@184 -- # 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:29.457 05:04:06 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:29.457 05:04:06 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:08:29.457 05:04:06 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:29.457 05:04:06 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:29.457 05:04:06 -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:29.457 05:04:06 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:29.457 05:04:06 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:29.457 05:04:06 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:08:29.457 05:04:06 -- common/autotest_common.sh@199 -- # cat 00:08:29.457 05:04:06 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:08:29.457 05:04:06 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:29.457 05:04:06 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:29.457 05:04:06 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:29.457 05:04:06 -- 
common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:29.457 05:04:06 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:08:29.457 05:04:06 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:08:29.457 05:04:06 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:29.457 05:04:06 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:29.457 05:04:06 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:29.457 05:04:06 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:29.457 05:04:06 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:29.457 05:04:06 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:29.457 05:04:06 -- common/autotest_common.sh@243 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:29.457 05:04:06 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:29.457 05:04:06 -- common/autotest_common.sh@245 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:29.457 05:04:06 -- common/autotest_common.sh@245 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:29.457 05:04:06 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:29.457 05:04:06 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:29.457 05:04:06 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:08:29.457 05:04:06 -- common/autotest_common.sh@252 -- # export valgrind= 00:08:29.457 05:04:06 -- 
common/autotest_common.sh@252 -- # valgrind= 00:08:29.457 05:04:06 -- common/autotest_common.sh@258 -- # uname -s 00:08:29.457 05:04:06 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:08:29.457 05:04:06 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:08:29.457 05:04:06 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:08:29.457 05:04:06 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:08:29.457 05:04:06 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:08:29.457 05:04:06 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:08:29.457 05:04:06 -- common/autotest_common.sh@268 -- # MAKE=make 00:08:29.457 05:04:06 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j48 00:08:29.457 05:04:06 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:08:29.457 05:04:06 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:08:29.457 05:04:06 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:08:29.457 05:04:06 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:08:29.457 05:04:06 -- common/autotest_common.sh@289 -- # for i in "$@" 00:08:29.457 05:04:06 -- common/autotest_common.sh@290 -- # case "$i" in 00:08:29.457 05:04:06 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=tcp 00:08:29.457 05:04:06 -- common/autotest_common.sh@307 -- # [[ -z 1782664 ]] 00:08:29.457 05:04:06 -- common/autotest_common.sh@307 -- # kill -0 1782664 00:08:29.457 05:04:06 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:08:29.457 05:04:06 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:08:29.457 05:04:06 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:08:29.457 05:04:06 -- common/autotest_common.sh@320 -- # local mount target_dir 00:08:29.457 05:04:06 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:08:29.457 05:04:06 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:08:29.457 05:04:06 -- common/autotest_common.sh@325 -- # local 
storage_fallback storage_candidates 00:08:29.457 05:04:06 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:08:29.457 05:04:06 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.yxEGMI 00:08:29.457 05:04:06 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:29.458 05:04:06 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:08:29.458 05:04:06 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:08:29.458 05:04:06 -- common/autotest_common.sh@344 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.yxEGMI/tests/target /tmp/spdk.yxEGMI 00:08:29.458 05:04:06 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:08:29.458 05:04:06 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:29.458 05:04:06 -- common/autotest_common.sh@316 -- # df -T 00:08:29.458 05:04:06 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:08:29.458 05:04:06 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_devtmpfs 00:08:29.458 05:04:06 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:08:29.458 05:04:06 -- common/autotest_common.sh@351 -- # avails["$mount"]=67108864 00:08:29.458 05:04:06 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:08:29.458 05:04:06 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:08:29.458 05:04:06 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:29.458 05:04:06 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/pmem0 00:08:29.458 05:04:06 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext2 00:08:29.458 05:04:06 -- common/autotest_common.sh@351 -- # avails["$mount"]=1052192768 00:08:29.458 05:04:06 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5284429824 00:08:29.458 05:04:06 -- common/autotest_common.sh@352 -- # uses["$mount"]=4232237056 00:08:29.458 05:04:06 
-- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:29.458 05:04:06 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_root 00:08:29.458 05:04:06 -- common/autotest_common.sh@350 -- # fss["$mount"]=overlay 00:08:29.458 05:04:06 -- common/autotest_common.sh@351 -- # avails["$mount"]=50689945600 00:08:29.458 05:04:06 -- common/autotest_common.sh@351 -- # sizes["$mount"]=61994708992 00:08:29.458 05:04:06 -- common/autotest_common.sh@352 -- # uses["$mount"]=11304763392 00:08:29.458 05:04:06 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:29.458 05:04:06 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:08:29.458 05:04:06 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:08:29.458 05:04:06 -- common/autotest_common.sh@351 -- # avails["$mount"]=30996074496 00:08:29.458 05:04:06 -- common/autotest_common.sh@351 -- # sizes["$mount"]=30997352448 00:08:29.458 05:04:06 -- common/autotest_common.sh@352 -- # uses["$mount"]=1277952 00:08:29.458 05:04:06 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:29.458 05:04:06 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:08:29.458 05:04:06 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:08:29.458 05:04:06 -- common/autotest_common.sh@351 -- # avails["$mount"]=12390182912 00:08:29.458 05:04:06 -- common/autotest_common.sh@351 -- # sizes["$mount"]=12398944256 00:08:29.458 05:04:06 -- common/autotest_common.sh@352 -- # uses["$mount"]=8761344 00:08:29.458 05:04:06 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:29.458 05:04:06 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:08:29.458 05:04:06 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:08:29.458 05:04:06 -- common/autotest_common.sh@351 -- # avails["$mount"]=30996807680 00:08:29.458 05:04:06 -- common/autotest_common.sh@351 -- # 
sizes["$mount"]=30997356544 00:08:29.458 05:04:06 -- common/autotest_common.sh@352 -- # uses["$mount"]=548864 00:08:29.458 05:04:06 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:29.458 05:04:06 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:08:29.458 05:04:06 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:08:29.458 05:04:06 -- common/autotest_common.sh@351 -- # avails["$mount"]=6199463936 00:08:29.458 05:04:06 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6199468032 00:08:29.458 05:04:06 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:08:29.458 05:04:06 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:29.458 05:04:06 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:08:29.458 * Looking for test storage... 00:08:29.458 05:04:06 -- common/autotest_common.sh@357 -- # local target_space new_size 00:08:29.458 05:04:06 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:08:29.458 05:04:06 -- common/autotest_common.sh@361 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:29.458 05:04:06 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:29.458 05:04:06 -- common/autotest_common.sh@361 -- # mount=/ 00:08:29.458 05:04:06 -- common/autotest_common.sh@363 -- # target_space=50689945600 00:08:29.458 05:04:06 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:08:29.458 05:04:06 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:08:29.458 05:04:06 -- common/autotest_common.sh@369 -- # [[ overlay == tmpfs ]] 00:08:29.458 05:04:06 -- common/autotest_common.sh@369 -- # [[ overlay == ramfs ]] 00:08:29.458 05:04:06 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:08:29.458 05:04:06 -- common/autotest_common.sh@370 -- # new_size=13519355904 00:08:29.458 
05:04:06 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:29.458 05:04:06 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:29.458 05:04:06 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:29.458 05:04:06 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:29.458 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:29.458 05:04:06 -- common/autotest_common.sh@378 -- # return 0 00:08:29.458 05:04:06 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:08:29.458 05:04:06 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:08:29.458 05:04:06 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:29.458 05:04:06 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:29.458 05:04:06 -- common/autotest_common.sh@1673 -- # true 00:08:29.458 05:04:06 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:08:29.458 05:04:06 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:29.458 05:04:06 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:29.458 05:04:06 -- common/autotest_common.sh@27 -- # exec 00:08:29.458 05:04:06 -- common/autotest_common.sh@29 -- # exec 00:08:29.458 05:04:06 -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:29.458 05:04:06 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:29.458 05:04:06 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:29.458 05:04:06 -- common/autotest_common.sh@18 -- # set -x 00:08:29.458 05:04:06 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:29.458 05:04:06 -- nvmf/common.sh@7 -- # uname -s 00:08:29.458 05:04:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.458 05:04:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.458 05:04:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.458 05:04:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.458 05:04:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.458 05:04:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.458 05:04:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.458 05:04:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.458 05:04:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.458 05:04:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.458 05:04:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:29.458 05:04:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:29.458 05:04:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.458 05:04:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.458 05:04:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:29.458 05:04:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:29.458 05:04:06 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:29.458 05:04:06 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.458 05:04:06 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.458 05:04:06 -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.458 05:04:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.458 05:04:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.458 05:04:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.458 05:04:06 -- paths/export.sh@5 -- # export 
PATH 00:08:29.458 05:04:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.458 05:04:06 -- nvmf/common.sh@47 -- # : 0 00:08:29.458 05:04:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:29.458 05:04:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:29.458 05:04:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:29.458 05:04:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.458 05:04:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.458 05:04:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:29.458 05:04:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:29.458 05:04:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:29.458 05:04:06 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:29.458 05:04:06 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:29.459 05:04:06 -- target/filesystem.sh@15 -- # nvmftestinit 00:08:29.459 05:04:06 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:29.459 05:04:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:29.459 05:04:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:29.459 05:04:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:29.459 05:04:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:29.459 05:04:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.459 05:04:06 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:08:29.459 05:04:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.459 05:04:06 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:29.459 05:04:06 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:29.459 05:04:06 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:29.459 05:04:06 -- common/autotest_common.sh@10 -- # set +x 00:08:31.360 05:04:08 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:31.360 05:04:08 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:31.360 05:04:08 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:31.360 05:04:08 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:31.360 05:04:08 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:31.360 05:04:08 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:31.360 05:04:08 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:31.360 05:04:08 -- nvmf/common.sh@295 -- # net_devs=() 00:08:31.360 05:04:08 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:31.360 05:04:08 -- nvmf/common.sh@296 -- # e810=() 00:08:31.360 05:04:08 -- nvmf/common.sh@296 -- # local -ga e810 00:08:31.360 05:04:08 -- nvmf/common.sh@297 -- # x722=() 00:08:31.360 05:04:08 -- nvmf/common.sh@297 -- # local -ga x722 00:08:31.360 05:04:08 -- nvmf/common.sh@298 -- # mlx=() 00:08:31.360 05:04:08 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:31.360 05:04:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:31.360 05:04:08 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:31.360 05:04:08 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:31.360 05:04:08 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:31.360 05:04:08 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:31.360 05:04:08 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:31.360 05:04:08 -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:31.620 05:04:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:31.620 05:04:08 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:31.620 05:04:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:31.620 05:04:08 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:31.620 05:04:08 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:31.620 05:04:08 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:31.620 05:04:08 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:31.620 05:04:08 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:31.620 05:04:08 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:31.620 05:04:08 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:31.620 05:04:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:31.620 05:04:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:31.620 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:31.620 05:04:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:31.620 05:04:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:31.620 05:04:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.620 05:04:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.620 05:04:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:31.620 05:04:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:31.620 05:04:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:31.620 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:31.620 05:04:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:31.620 05:04:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:31.620 05:04:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.620 05:04:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.620 05:04:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:08:31.620 05:04:08 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:31.620 05:04:08 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:31.620 05:04:08 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:31.620 05:04:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:31.620 05:04:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.620 05:04:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:31.620 05:04:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.620 05:04:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:31.620 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:31.620 05:04:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.620 05:04:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:31.620 05:04:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.620 05:04:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:31.620 05:04:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.620 05:04:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:31.620 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:31.620 05:04:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.620 05:04:08 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:31.620 05:04:08 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:31.620 05:04:08 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:31.620 05:04:08 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:31.620 05:04:08 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:31.620 05:04:08 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:31.620 05:04:08 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:31.620 05:04:08 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:31.620 05:04:08 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:31.620 05:04:08 -- nvmf/common.sh@236 -- 
# NVMF_TARGET_INTERFACE=cvl_0_0 00:08:31.620 05:04:08 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:31.620 05:04:08 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:31.620 05:04:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:31.620 05:04:08 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:31.620 05:04:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:31.620 05:04:08 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:31.620 05:04:08 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:31.620 05:04:08 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:31.620 05:04:08 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:31.620 05:04:08 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:31.620 05:04:08 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:31.620 05:04:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:31.620 05:04:08 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:31.620 05:04:08 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:31.620 05:04:08 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:31.620 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:31.620 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:08:31.620 00:08:31.620 --- 10.0.0.2 ping statistics --- 00:08:31.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.620 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:08:31.620 05:04:08 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:31.620 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:31.620 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:08:31.620 00:08:31.620 --- 10.0.0.1 ping statistics --- 00:08:31.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.620 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:08:31.620 05:04:08 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:31.620 05:04:08 -- nvmf/common.sh@411 -- # return 0 00:08:31.620 05:04:08 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:31.620 05:04:08 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:31.620 05:04:08 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:31.620 05:04:08 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:31.620 05:04:08 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:31.620 05:04:08 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:31.620 05:04:08 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:31.620 05:04:08 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:31.620 05:04:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:31.620 05:04:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:31.620 05:04:08 -- common/autotest_common.sh@10 -- # set +x 00:08:31.879 ************************************ 00:08:31.879 START TEST nvmf_filesystem_no_in_capsule 00:08:31.879 ************************************ 00:08:31.879 05:04:08 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 00:08:31.879 05:04:08 -- target/filesystem.sh@47 -- # in_capsule=0 00:08:31.879 05:04:08 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:31.879 05:04:08 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:31.879 05:04:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:31.879 05:04:08 -- common/autotest_common.sh@10 -- # set +x 00:08:31.879 05:04:08 -- nvmf/common.sh@470 -- # nvmfpid=1784295 00:08:31.880 05:04:08 -- nvmf/common.sh@469 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:31.880 05:04:08 -- nvmf/common.sh@471 -- # waitforlisten 1784295 00:08:31.880 05:04:08 -- common/autotest_common.sh@817 -- # '[' -z 1784295 ']' 00:08:31.880 05:04:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.880 05:04:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:31.880 05:04:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.880 05:04:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:31.880 05:04:08 -- common/autotest_common.sh@10 -- # set +x 00:08:31.880 [2024-04-24 05:04:08.959394] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:08:31.880 [2024-04-24 05:04:08.959476] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.880 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.880 [2024-04-24 05:04:08.997004] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:31.880 [2024-04-24 05:04:09.029049] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:31.880 [2024-04-24 05:04:09.119793] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:31.880 [2024-04-24 05:04:09.119848] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:31.880 [2024-04-24 05:04:09.119876] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:31.880 [2024-04-24 05:04:09.119890] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:31.880 [2024-04-24 05:04:09.119900] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:31.880 [2024-04-24 05:04:09.120001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.880 [2024-04-24 05:04:09.120052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.880 [2024-04-24 05:04:09.120166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:31.880 [2024-04-24 05:04:09.120168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.138 05:04:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:32.138 05:04:09 -- common/autotest_common.sh@850 -- # return 0 00:08:32.138 05:04:09 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:32.138 05:04:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:32.138 05:04:09 -- common/autotest_common.sh@10 -- # set +x 00:08:32.138 05:04:09 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:32.138 05:04:09 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:32.138 05:04:09 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:32.138 05:04:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:32.138 05:04:09 -- common/autotest_common.sh@10 -- # set +x 00:08:32.138 [2024-04-24 05:04:09.270301] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:32.138 05:04:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:32.138 05:04:09 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:32.138 05:04:09 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:08:32.138 05:04:09 -- common/autotest_common.sh@10 -- # set +x 00:08:32.396 Malloc1 00:08:32.396 05:04:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:32.396 05:04:09 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:32.396 05:04:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:32.396 05:04:09 -- common/autotest_common.sh@10 -- # set +x 00:08:32.396 05:04:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:32.396 05:04:09 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:32.396 05:04:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:32.396 05:04:09 -- common/autotest_common.sh@10 -- # set +x 00:08:32.396 05:04:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:32.396 05:04:09 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:32.396 05:04:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:32.396 05:04:09 -- common/autotest_common.sh@10 -- # set +x 00:08:32.396 [2024-04-24 05:04:09.456168] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:32.396 05:04:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:32.396 05:04:09 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:32.396 05:04:09 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:08:32.396 05:04:09 -- common/autotest_common.sh@1365 -- # local bdev_info 00:08:32.396 05:04:09 -- common/autotest_common.sh@1366 -- # local bs 00:08:32.396 05:04:09 -- common/autotest_common.sh@1367 -- # local nb 00:08:32.396 05:04:09 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:32.396 05:04:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:32.396 05:04:09 -- common/autotest_common.sh@10 -- # set +x 00:08:32.396 05:04:09 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:32.396 05:04:09 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:08:32.396 { 00:08:32.396 "name": "Malloc1", 00:08:32.396 "aliases": [ 00:08:32.396 "1c0a4b9e-dab0-48f4-b431-f69292355d50" 00:08:32.396 ], 00:08:32.396 "product_name": "Malloc disk", 00:08:32.396 "block_size": 512, 00:08:32.396 "num_blocks": 1048576, 00:08:32.396 "uuid": "1c0a4b9e-dab0-48f4-b431-f69292355d50", 00:08:32.396 "assigned_rate_limits": { 00:08:32.396 "rw_ios_per_sec": 0, 00:08:32.396 "rw_mbytes_per_sec": 0, 00:08:32.396 "r_mbytes_per_sec": 0, 00:08:32.396 "w_mbytes_per_sec": 0 00:08:32.396 }, 00:08:32.396 "claimed": true, 00:08:32.396 "claim_type": "exclusive_write", 00:08:32.396 "zoned": false, 00:08:32.396 "supported_io_types": { 00:08:32.396 "read": true, 00:08:32.396 "write": true, 00:08:32.396 "unmap": true, 00:08:32.396 "write_zeroes": true, 00:08:32.396 "flush": true, 00:08:32.396 "reset": true, 00:08:32.396 "compare": false, 00:08:32.396 "compare_and_write": false, 00:08:32.396 "abort": true, 00:08:32.396 "nvme_admin": false, 00:08:32.396 "nvme_io": false 00:08:32.396 }, 00:08:32.396 "memory_domains": [ 00:08:32.396 { 00:08:32.396 "dma_device_id": "system", 00:08:32.396 "dma_device_type": 1 00:08:32.396 }, 00:08:32.396 { 00:08:32.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.396 "dma_device_type": 2 00:08:32.396 } 00:08:32.396 ], 00:08:32.396 "driver_specific": {} 00:08:32.396 } 00:08:32.396 ]' 00:08:32.396 05:04:09 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:08:32.396 05:04:09 -- common/autotest_common.sh@1369 -- # bs=512 00:08:32.396 05:04:09 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:08:32.396 05:04:09 -- common/autotest_common.sh@1370 -- # nb=1048576 00:08:32.396 05:04:09 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:08:32.396 05:04:09 -- common/autotest_common.sh@1374 -- # echo 512 00:08:32.396 05:04:09 -- target/filesystem.sh@58 -- # malloc_size=536870912 
00:08:32.396 05:04:09 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:32.961 05:04:10 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:32.961 05:04:10 -- common/autotest_common.sh@1184 -- # local i=0 00:08:32.961 05:04:10 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:32.961 05:04:10 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:32.961 05:04:10 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:35.484 05:04:12 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:35.484 05:04:12 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:35.484 05:04:12 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:35.484 05:04:12 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:35.484 05:04:12 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:35.484 05:04:12 -- common/autotest_common.sh@1194 -- # return 0 00:08:35.484 05:04:12 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:35.484 05:04:12 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:35.484 05:04:12 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:35.484 05:04:12 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:35.484 05:04:12 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:35.484 05:04:12 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:35.484 05:04:12 -- setup/common.sh@80 -- # echo 536870912 00:08:35.484 05:04:12 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:35.484 05:04:12 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:35.484 05:04:12 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:35.484 05:04:12 -- target/filesystem.sh@68 -- # parted -s 
/dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:35.484 05:04:12 -- target/filesystem.sh@69 -- # partprobe 00:08:35.741 05:04:12 -- target/filesystem.sh@70 -- # sleep 1 00:08:37.112 05:04:13 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:37.112 05:04:13 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:37.112 05:04:13 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:37.112 05:04:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:37.112 05:04:13 -- common/autotest_common.sh@10 -- # set +x 00:08:37.112 ************************************ 00:08:37.112 START TEST filesystem_ext4 00:08:37.112 ************************************ 00:08:37.112 05:04:14 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:37.112 05:04:14 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:37.112 05:04:14 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:37.112 05:04:14 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:37.112 05:04:14 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:08:37.112 05:04:14 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:37.112 05:04:14 -- common/autotest_common.sh@914 -- # local i=0 00:08:37.112 05:04:14 -- common/autotest_common.sh@915 -- # local force 00:08:37.112 05:04:14 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:08:37.112 05:04:14 -- common/autotest_common.sh@918 -- # force=-F 00:08:37.112 05:04:14 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:37.112 mke2fs 1.46.5 (30-Dec-2021) 00:08:37.112 Discarding device blocks: 0/522240 done 00:08:37.112 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:37.112 Filesystem UUID: dcb8a80e-7bd8-48cb-97f4-28eff5668b19 00:08:37.112 Superblock backups stored on blocks: 00:08:37.112 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:37.112 00:08:37.112 Allocating group tables: 
0/64 done 00:08:37.112 Writing inode tables: 0/64 done 00:08:40.389 Creating journal (8192 blocks): done 00:08:40.953 Writing superblocks and filesystem accounting information: 0/64 done 00:08:40.953 00:08:40.953 05:04:17 -- common/autotest_common.sh@931 -- # return 0 00:08:40.953 05:04:17 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:40.953 05:04:18 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:41.210 05:04:18 -- target/filesystem.sh@25 -- # sync 00:08:41.210 05:04:18 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:41.210 05:04:18 -- target/filesystem.sh@27 -- # sync 00:08:41.210 05:04:18 -- target/filesystem.sh@29 -- # i=0 00:08:41.210 05:04:18 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:41.210 05:04:18 -- target/filesystem.sh@37 -- # kill -0 1784295 00:08:41.210 05:04:18 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:41.210 05:04:18 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:41.210 05:04:18 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:41.210 05:04:18 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:41.210 00:08:41.210 real 0m4.272s 00:08:41.210 user 0m0.020s 00:08:41.210 sys 0m0.057s 00:08:41.210 05:04:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:41.210 05:04:18 -- common/autotest_common.sh@10 -- # set +x 00:08:41.210 ************************************ 00:08:41.210 END TEST filesystem_ext4 00:08:41.210 ************************************ 00:08:41.210 05:04:18 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:41.210 05:04:18 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:41.210 05:04:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:41.210 05:04:18 -- common/autotest_common.sh@10 -- # set +x 00:08:41.210 ************************************ 00:08:41.210 START TEST filesystem_btrfs 00:08:41.210 ************************************ 00:08:41.210 05:04:18 -- 
common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:41.210 05:04:18 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:41.210 05:04:18 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:41.210 05:04:18 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:41.210 05:04:18 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:41.210 05:04:18 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:41.210 05:04:18 -- common/autotest_common.sh@914 -- # local i=0 00:08:41.210 05:04:18 -- common/autotest_common.sh@915 -- # local force 00:08:41.211 05:04:18 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:41.211 05:04:18 -- common/autotest_common.sh@920 -- # force=-f 00:08:41.211 05:04:18 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:41.775 btrfs-progs v6.6.2 00:08:41.775 See https://btrfs.readthedocs.io for more information. 00:08:41.775 00:08:41.775 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:41.775 NOTE: several default settings have changed in version 5.15, please make sure 00:08:41.775 this does not affect your deployments: 00:08:41.775 - DUP for metadata (-m dup) 00:08:41.775 - enabled no-holes (-O no-holes) 00:08:41.775 - enabled free-space-tree (-R free-space-tree) 00:08:41.775 00:08:41.775 Label: (null) 00:08:41.775 UUID: df9797be-ce71-4755-980e-11cfdf58fea6 00:08:41.775 Node size: 16384 00:08:41.775 Sector size: 4096 00:08:41.775 Filesystem size: 510.00MiB 00:08:41.775 Block group profiles: 00:08:41.775 Data: single 8.00MiB 00:08:41.775 Metadata: DUP 32.00MiB 00:08:41.775 System: DUP 8.00MiB 00:08:41.775 SSD detected: yes 00:08:41.775 Zoned device: no 00:08:41.775 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:41.775 Runtime features: free-space-tree 00:08:41.775 Checksum: crc32c 00:08:41.775 Number of devices: 1 00:08:41.775 Devices: 00:08:41.775 ID SIZE PATH 00:08:41.775 1 510.00MiB /dev/nvme0n1p1 00:08:41.775 00:08:41.775 05:04:18 -- common/autotest_common.sh@931 -- # return 0 00:08:41.775 05:04:18 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:42.032 05:04:19 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:42.032 05:04:19 -- target/filesystem.sh@25 -- # sync 00:08:42.032 05:04:19 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:42.032 05:04:19 -- target/filesystem.sh@27 -- # sync 00:08:42.032 05:04:19 -- target/filesystem.sh@29 -- # i=0 00:08:42.032 05:04:19 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:42.032 05:04:19 -- target/filesystem.sh@37 -- # kill -0 1784295 00:08:42.032 05:04:19 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:42.032 05:04:19 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:42.032 05:04:19 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:42.032 05:04:19 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:42.032 00:08:42.032 real 0m0.743s 00:08:42.032 user 0m0.022s 00:08:42.032 sys 0m0.110s 
00:08:42.032 05:04:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:42.032 05:04:19 -- common/autotest_common.sh@10 -- # set +x 00:08:42.032 ************************************ 00:08:42.032 END TEST filesystem_btrfs 00:08:42.032 ************************************ 00:08:42.032 05:04:19 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:42.032 05:04:19 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:42.032 05:04:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:42.032 05:04:19 -- common/autotest_common.sh@10 -- # set +x 00:08:42.032 ************************************ 00:08:42.032 START TEST filesystem_xfs 00:08:42.032 ************************************ 00:08:42.032 05:04:19 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:08:42.032 05:04:19 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:42.032 05:04:19 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:42.032 05:04:19 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:42.032 05:04:19 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:08:42.032 05:04:19 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:42.032 05:04:19 -- common/autotest_common.sh@914 -- # local i=0 00:08:42.032 05:04:19 -- common/autotest_common.sh@915 -- # local force 00:08:42.032 05:04:19 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:42.032 05:04:19 -- common/autotest_common.sh@920 -- # force=-f 00:08:42.032 05:04:19 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:42.290 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:42.290 = sectsz=512 attr=2, projid32bit=1 00:08:42.290 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:42.290 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:42.290 data = bsize=4096 blocks=130560, imaxpct=25 00:08:42.290 = sunit=0 swidth=0 blks 00:08:42.290 naming =version 2 bsize=4096 ascii-ci=0, 
ftype=1 00:08:42.290 log =internal log bsize=4096 blocks=16384, version=2 00:08:42.290 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:42.290 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:42.859 Discarding blocks...Done. 00:08:42.859 05:04:20 -- common/autotest_common.sh@931 -- # return 0 00:08:42.859 05:04:20 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:45.386 05:04:22 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:45.386 05:04:22 -- target/filesystem.sh@25 -- # sync 00:08:45.386 05:04:22 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:45.386 05:04:22 -- target/filesystem.sh@27 -- # sync 00:08:45.386 05:04:22 -- target/filesystem.sh@29 -- # i=0 00:08:45.386 05:04:22 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:45.386 05:04:22 -- target/filesystem.sh@37 -- # kill -0 1784295 00:08:45.386 05:04:22 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:45.386 05:04:22 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:45.386 05:04:22 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:45.387 05:04:22 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:45.387 00:08:45.387 real 0m3.236s 00:08:45.387 user 0m0.009s 00:08:45.387 sys 0m0.062s 00:08:45.387 05:04:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:45.387 05:04:22 -- common/autotest_common.sh@10 -- # set +x 00:08:45.387 ************************************ 00:08:45.387 END TEST filesystem_xfs 00:08:45.387 ************************************ 00:08:45.387 05:04:22 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:45.643 05:04:22 -- target/filesystem.sh@93 -- # sync 00:08:45.643 05:04:22 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:45.901 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.901 05:04:22 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:45.901 05:04:22 -- 
common/autotest_common.sh@1205 -- # local i=0 00:08:45.901 05:04:22 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:45.901 05:04:22 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:45.901 05:04:22 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:45.901 05:04:22 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:45.901 05:04:22 -- common/autotest_common.sh@1217 -- # return 0 00:08:45.901 05:04:22 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:45.901 05:04:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:45.901 05:04:22 -- common/autotest_common.sh@10 -- # set +x 00:08:45.901 05:04:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:45.901 05:04:22 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:45.901 05:04:22 -- target/filesystem.sh@101 -- # killprocess 1784295 00:08:45.901 05:04:22 -- common/autotest_common.sh@936 -- # '[' -z 1784295 ']' 00:08:45.901 05:04:22 -- common/autotest_common.sh@940 -- # kill -0 1784295 00:08:45.901 05:04:22 -- common/autotest_common.sh@941 -- # uname 00:08:45.901 05:04:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:45.901 05:04:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1784295 00:08:45.901 05:04:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:45.901 05:04:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:45.901 05:04:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1784295' 00:08:45.901 killing process with pid 1784295 00:08:45.901 05:04:23 -- common/autotest_common.sh@955 -- # kill 1784295 00:08:45.901 05:04:23 -- common/autotest_common.sh@960 -- # wait 1784295 00:08:46.469 05:04:23 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:46.469 00:08:46.469 real 0m14.538s 00:08:46.469 user 0m55.991s 00:08:46.469 sys 0m2.144s 00:08:46.469 05:04:23 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:08:46.469 05:04:23 -- common/autotest_common.sh@10 -- # set +x 00:08:46.469 ************************************ 00:08:46.469 END TEST nvmf_filesystem_no_in_capsule 00:08:46.469 ************************************ 00:08:46.469 05:04:23 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:46.469 05:04:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:46.469 05:04:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:46.469 05:04:23 -- common/autotest_common.sh@10 -- # set +x 00:08:46.469 ************************************ 00:08:46.469 START TEST nvmf_filesystem_in_capsule 00:08:46.469 ************************************ 00:08:46.469 05:04:23 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:08:46.469 05:04:23 -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:46.469 05:04:23 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:46.469 05:04:23 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:46.469 05:04:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:46.469 05:04:23 -- common/autotest_common.sh@10 -- # set +x 00:08:46.469 05:04:23 -- nvmf/common.sh@470 -- # nvmfpid=1786288 00:08:46.469 05:04:23 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:46.469 05:04:23 -- nvmf/common.sh@471 -- # waitforlisten 1786288 00:08:46.469 05:04:23 -- common/autotest_common.sh@817 -- # '[' -z 1786288 ']' 00:08:46.469 05:04:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.469 05:04:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:46.469 05:04:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:46.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.469 05:04:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:46.469 05:04:23 -- common/autotest_common.sh@10 -- # set +x 00:08:46.469 [2024-04-24 05:04:23.621080] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:08:46.469 [2024-04-24 05:04:23.621156] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.469 EAL: No free 2048 kB hugepages reported on node 1 00:08:46.469 [2024-04-24 05:04:23.657344] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:46.469 [2024-04-24 05:04:23.684344] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:46.729 [2024-04-24 05:04:23.769884] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:46.729 [2024-04-24 05:04:23.769946] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:46.729 [2024-04-24 05:04:23.769963] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:46.729 [2024-04-24 05:04:23.769976] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:46.729 [2024-04-24 05:04:23.769988] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:46.729 [2024-04-24 05:04:23.770074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:46.729 [2024-04-24 05:04:23.770145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:46.729 [2024-04-24 05:04:23.770243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:46.729 [2024-04-24 05:04:23.770251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.729 05:04:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:46.729 05:04:23 -- common/autotest_common.sh@850 -- # return 0 00:08:46.729 05:04:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:46.729 05:04:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:46.729 05:04:23 -- common/autotest_common.sh@10 -- # set +x 00:08:46.729 05:04:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:46.729 05:04:23 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:46.729 05:04:23 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:46.729 05:04:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.729 05:04:23 -- common/autotest_common.sh@10 -- # set +x 00:08:46.730 [2024-04-24 05:04:23.921491] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:46.730 05:04:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.730 05:04:23 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:46.730 05:04:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.730 05:04:23 -- common/autotest_common.sh@10 -- # set +x 00:08:46.988 Malloc1 00:08:46.988 05:04:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.988 05:04:24 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:46.988 05:04:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.988 05:04:24 -- 
common/autotest_common.sh@10 -- # set +x 00:08:46.988 05:04:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.988 05:04:24 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:46.988 05:04:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.988 05:04:24 -- common/autotest_common.sh@10 -- # set +x 00:08:46.988 05:04:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.988 05:04:24 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:46.988 05:04:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.988 05:04:24 -- common/autotest_common.sh@10 -- # set +x 00:08:46.988 [2024-04-24 05:04:24.098948] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:46.988 05:04:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.988 05:04:24 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:46.988 05:04:24 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:08:46.988 05:04:24 -- common/autotest_common.sh@1365 -- # local bdev_info 00:08:46.988 05:04:24 -- common/autotest_common.sh@1366 -- # local bs 00:08:46.988 05:04:24 -- common/autotest_common.sh@1367 -- # local nb 00:08:46.988 05:04:24 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:46.988 05:04:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.988 05:04:24 -- common/autotest_common.sh@10 -- # set +x 00:08:46.988 05:04:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.988 05:04:24 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:08:46.988 { 00:08:46.988 "name": "Malloc1", 00:08:46.988 "aliases": [ 00:08:46.988 "ad56745d-fcf5-4001-a357-853083d4e196" 00:08:46.988 ], 00:08:46.988 "product_name": "Malloc disk", 00:08:46.988 "block_size": 512, 00:08:46.988 "num_blocks": 1048576, 00:08:46.988 "uuid": 
"ad56745d-fcf5-4001-a357-853083d4e196", 00:08:46.988 "assigned_rate_limits": { 00:08:46.988 "rw_ios_per_sec": 0, 00:08:46.988 "rw_mbytes_per_sec": 0, 00:08:46.988 "r_mbytes_per_sec": 0, 00:08:46.988 "w_mbytes_per_sec": 0 00:08:46.988 }, 00:08:46.988 "claimed": true, 00:08:46.988 "claim_type": "exclusive_write", 00:08:46.988 "zoned": false, 00:08:46.988 "supported_io_types": { 00:08:46.988 "read": true, 00:08:46.988 "write": true, 00:08:46.988 "unmap": true, 00:08:46.988 "write_zeroes": true, 00:08:46.988 "flush": true, 00:08:46.988 "reset": true, 00:08:46.988 "compare": false, 00:08:46.988 "compare_and_write": false, 00:08:46.988 "abort": true, 00:08:46.988 "nvme_admin": false, 00:08:46.988 "nvme_io": false 00:08:46.988 }, 00:08:46.988 "memory_domains": [ 00:08:46.988 { 00:08:46.988 "dma_device_id": "system", 00:08:46.988 "dma_device_type": 1 00:08:46.988 }, 00:08:46.988 { 00:08:46.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.988 "dma_device_type": 2 00:08:46.988 } 00:08:46.988 ], 00:08:46.988 "driver_specific": {} 00:08:46.988 } 00:08:46.988 ]' 00:08:46.988 05:04:24 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:08:46.988 05:04:24 -- common/autotest_common.sh@1369 -- # bs=512 00:08:46.988 05:04:24 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:08:46.988 05:04:24 -- common/autotest_common.sh@1370 -- # nb=1048576 00:08:46.988 05:04:24 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:08:46.988 05:04:24 -- common/autotest_common.sh@1374 -- # echo 512 00:08:46.988 05:04:24 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:46.988 05:04:24 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:47.927 05:04:24 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:47.927 05:04:24 -- common/autotest_common.sh@1184 -- # 
local i=0 00:08:47.927 05:04:24 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:47.927 05:04:24 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:47.927 05:04:24 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:49.823 05:04:26 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:49.823 05:04:26 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:49.823 05:04:26 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:49.823 05:04:26 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:49.823 05:04:26 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:49.823 05:04:26 -- common/autotest_common.sh@1194 -- # return 0 00:08:49.823 05:04:26 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:49.823 05:04:26 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:49.823 05:04:26 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:49.823 05:04:26 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:49.823 05:04:26 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:49.823 05:04:26 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:49.823 05:04:26 -- setup/common.sh@80 -- # echo 536870912 00:08:49.823 05:04:26 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:49.823 05:04:26 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:49.823 05:04:26 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:49.823 05:04:26 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:50.081 05:04:27 -- target/filesystem.sh@69 -- # partprobe 00:08:50.646 05:04:27 -- target/filesystem.sh@70 -- # sleep 1 00:08:51.578 05:04:28 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:51.578 05:04:28 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:51.578 05:04:28 -- 
common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:51.578 05:04:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:51.578 05:04:28 -- common/autotest_common.sh@10 -- # set +x 00:08:51.578 ************************************ 00:08:51.578 START TEST filesystem_in_capsule_ext4 00:08:51.578 ************************************ 00:08:51.578 05:04:28 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:51.578 05:04:28 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:51.578 05:04:28 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:51.578 05:04:28 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:51.578 05:04:28 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:08:51.578 05:04:28 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:51.578 05:04:28 -- common/autotest_common.sh@914 -- # local i=0 00:08:51.578 05:04:28 -- common/autotest_common.sh@915 -- # local force 00:08:51.578 05:04:28 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:08:51.578 05:04:28 -- common/autotest_common.sh@918 -- # force=-F 00:08:51.578 05:04:28 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:51.578 mke2fs 1.46.5 (30-Dec-2021) 00:08:51.578 Discarding device blocks: 0/522240 done 00:08:51.578 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:51.578 Filesystem UUID: c3ba2e34-ebbf-4ac8-bf7b-bb2c0a1e3ab6 00:08:51.578 Superblock backups stored on blocks: 00:08:51.578 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:51.578 00:08:51.578 Allocating group tables: 0/64 done 00:08:51.578 Writing inode tables: 0/64 done 00:08:51.835 Creating journal (8192 blocks): done 00:08:52.093 Writing superblocks and filesystem accounting information: 0/64 done 00:08:52.093 00:08:52.093 05:04:29 -- common/autotest_common.sh@931 -- # return 0 00:08:52.093 05:04:29 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:53.026 05:04:30 
-- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:53.026 05:04:30 -- target/filesystem.sh@25 -- # sync 00:08:53.026 05:04:30 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:53.026 05:04:30 -- target/filesystem.sh@27 -- # sync 00:08:53.026 05:04:30 -- target/filesystem.sh@29 -- # i=0 00:08:53.026 05:04:30 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:53.026 05:04:30 -- target/filesystem.sh@37 -- # kill -0 1786288 00:08:53.026 05:04:30 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:53.026 05:04:30 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:53.026 05:04:30 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:53.026 05:04:30 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:53.026 00:08:53.026 real 0m1.477s 00:08:53.026 user 0m0.020s 00:08:53.026 sys 0m0.059s 00:08:53.026 05:04:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:53.026 05:04:30 -- common/autotest_common.sh@10 -- # set +x 00:08:53.026 ************************************ 00:08:53.026 END TEST filesystem_in_capsule_ext4 00:08:53.026 ************************************ 00:08:53.026 05:04:30 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:53.026 05:04:30 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:53.026 05:04:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:53.026 05:04:30 -- common/autotest_common.sh@10 -- # set +x 00:08:53.284 ************************************ 00:08:53.284 START TEST filesystem_in_capsule_btrfs 00:08:53.284 ************************************ 00:08:53.284 05:04:30 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:53.284 05:04:30 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:53.284 05:04:30 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:53.284 05:04:30 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:53.284 05:04:30 -- 
common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:53.284 05:04:30 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:53.284 05:04:30 -- common/autotest_common.sh@914 -- # local i=0 00:08:53.284 05:04:30 -- common/autotest_common.sh@915 -- # local force 00:08:53.284 05:04:30 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:53.284 05:04:30 -- common/autotest_common.sh@920 -- # force=-f 00:08:53.284 05:04:30 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:53.542 btrfs-progs v6.6.2 00:08:53.542 See https://btrfs.readthedocs.io for more information. 00:08:53.542 00:08:53.542 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:53.542 NOTE: several default settings have changed in version 5.15, please make sure 00:08:53.542 this does not affect your deployments: 00:08:53.542 - DUP for metadata (-m dup) 00:08:53.542 - enabled no-holes (-O no-holes) 00:08:53.542 - enabled free-space-tree (-R free-space-tree) 00:08:53.542 00:08:53.542 Label: (null) 00:08:53.542 UUID: 1da1012e-fcf5-41a6-a6bd-1137ee8cfb43 00:08:53.542 Node size: 16384 00:08:53.542 Sector size: 4096 00:08:53.542 Filesystem size: 510.00MiB 00:08:53.542 Block group profiles: 00:08:53.542 Data: single 8.00MiB 00:08:53.542 Metadata: DUP 32.00MiB 00:08:53.542 System: DUP 8.00MiB 00:08:53.542 SSD detected: yes 00:08:53.542 Zoned device: no 00:08:53.542 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:53.542 Runtime features: free-space-tree 00:08:53.542 Checksum: crc32c 00:08:53.542 Number of devices: 1 00:08:53.542 Devices: 00:08:53.542 ID SIZE PATH 00:08:53.542 1 510.00MiB /dev/nvme0n1p1 00:08:53.542 00:08:53.542 05:04:30 -- common/autotest_common.sh@931 -- # return 0 00:08:53.542 05:04:30 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:54.477 05:04:31 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:54.477 05:04:31 -- target/filesystem.sh@25 -- # sync 00:08:54.477 
05:04:31 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:54.477 05:04:31 -- target/filesystem.sh@27 -- # sync 00:08:54.477 05:04:31 -- target/filesystem.sh@29 -- # i=0 00:08:54.477 05:04:31 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:54.477 05:04:31 -- target/filesystem.sh@37 -- # kill -0 1786288 00:08:54.477 05:04:31 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:54.477 05:04:31 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:54.477 05:04:31 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:54.477 05:04:31 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:54.477 00:08:54.477 real 0m1.111s 00:08:54.477 user 0m0.014s 00:08:54.477 sys 0m0.118s 00:08:54.477 05:04:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:54.477 05:04:31 -- common/autotest_common.sh@10 -- # set +x 00:08:54.477 ************************************ 00:08:54.477 END TEST filesystem_in_capsule_btrfs 00:08:54.477 ************************************ 00:08:54.477 05:04:31 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:54.477 05:04:31 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:54.477 05:04:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:54.477 05:04:31 -- common/autotest_common.sh@10 -- # set +x 00:08:54.477 ************************************ 00:08:54.477 START TEST filesystem_in_capsule_xfs 00:08:54.477 ************************************ 00:08:54.477 05:04:31 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:08:54.477 05:04:31 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:54.477 05:04:31 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:54.477 05:04:31 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:54.477 05:04:31 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:08:54.477 05:04:31 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:54.477 05:04:31 
-- common/autotest_common.sh@914 -- # local i=0 00:08:54.477 05:04:31 -- common/autotest_common.sh@915 -- # local force 00:08:54.477 05:04:31 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:54.477 05:04:31 -- common/autotest_common.sh@920 -- # force=-f 00:08:54.477 05:04:31 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:54.477 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:54.477 = sectsz=512 attr=2, projid32bit=1 00:08:54.477 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:54.477 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:54.477 data = bsize=4096 blocks=130560, imaxpct=25 00:08:54.477 = sunit=0 swidth=0 blks 00:08:54.477 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:54.477 log =internal log bsize=4096 blocks=16384, version=2 00:08:54.477 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:54.477 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:55.409 Discarding blocks...Done. 00:08:55.409 05:04:32 -- common/autotest_common.sh@931 -- # return 0 00:08:55.409 05:04:32 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:57.343 05:04:34 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:57.343 05:04:34 -- target/filesystem.sh@25 -- # sync 00:08:57.343 05:04:34 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:57.343 05:04:34 -- target/filesystem.sh@27 -- # sync 00:08:57.343 05:04:34 -- target/filesystem.sh@29 -- # i=0 00:08:57.343 05:04:34 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:57.343 05:04:34 -- target/filesystem.sh@37 -- # kill -0 1786288 00:08:57.343 05:04:34 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:57.343 05:04:34 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:57.343 05:04:34 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:57.343 05:04:34 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:57.343 00:08:57.343 real 0m2.730s 00:08:57.343 user 0m0.011s 00:08:57.343 sys 0m0.072s 00:08:57.343 05:04:34 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:08:57.343 05:04:34 -- common/autotest_common.sh@10 -- # set +x 00:08:57.343 ************************************ 00:08:57.343 END TEST filesystem_in_capsule_xfs 00:08:57.343 ************************************ 00:08:57.343 05:04:34 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:57.343 05:04:34 -- target/filesystem.sh@93 -- # sync 00:08:57.343 05:04:34 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:57.601 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.601 05:04:34 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:57.601 05:04:34 -- common/autotest_common.sh@1205 -- # local i=0 00:08:57.601 05:04:34 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:57.601 05:04:34 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:57.601 05:04:34 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:57.601 05:04:34 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:57.601 05:04:34 -- common/autotest_common.sh@1217 -- # return 0 00:08:57.601 05:04:34 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:57.601 05:04:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:57.601 05:04:34 -- common/autotest_common.sh@10 -- # set +x 00:08:57.601 05:04:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:57.601 05:04:34 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:57.601 05:04:34 -- target/filesystem.sh@101 -- # killprocess 1786288 00:08:57.601 05:04:34 -- common/autotest_common.sh@936 -- # '[' -z 1786288 ']' 00:08:57.601 05:04:34 -- common/autotest_common.sh@940 -- # kill -0 1786288 00:08:57.601 05:04:34 -- common/autotest_common.sh@941 -- # uname 00:08:57.602 05:04:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:57.602 05:04:34 
-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1786288 00:08:57.602 05:04:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:57.602 05:04:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:57.602 05:04:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1786288' 00:08:57.602 killing process with pid 1786288 00:08:57.602 05:04:34 -- common/autotest_common.sh@955 -- # kill 1786288 00:08:57.602 05:04:34 -- common/autotest_common.sh@960 -- # wait 1786288 00:08:58.167 05:04:35 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:58.167 00:08:58.167 real 0m11.701s 00:08:58.167 user 0m44.937s 00:08:58.167 sys 0m1.933s 00:08:58.167 05:04:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:58.167 05:04:35 -- common/autotest_common.sh@10 -- # set +x 00:08:58.167 ************************************ 00:08:58.167 END TEST nvmf_filesystem_in_capsule 00:08:58.167 ************************************ 00:08:58.167 05:04:35 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:58.167 05:04:35 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:58.167 05:04:35 -- nvmf/common.sh@117 -- # sync 00:08:58.167 05:04:35 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:58.167 05:04:35 -- nvmf/common.sh@120 -- # set +e 00:08:58.167 05:04:35 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:58.167 05:04:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:58.167 rmmod nvme_tcp 00:08:58.167 rmmod nvme_fabrics 00:08:58.167 rmmod nvme_keyring 00:08:58.167 05:04:35 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:58.167 05:04:35 -- nvmf/common.sh@124 -- # set -e 00:08:58.167 05:04:35 -- nvmf/common.sh@125 -- # return 0 00:08:58.167 05:04:35 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:08:58.167 05:04:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:58.167 05:04:35 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:58.167 05:04:35 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:58.167 05:04:35 -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:58.167 05:04:35 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:58.167 05:04:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.167 05:04:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:58.167 05:04:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.700 05:04:37 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:00.700 00:09:00.700 real 0m30.914s 00:09:00.700 user 1m41.889s 00:09:00.700 sys 0m5.771s 00:09:00.700 05:04:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:00.700 05:04:37 -- common/autotest_common.sh@10 -- # set +x 00:09:00.700 ************************************ 00:09:00.700 END TEST nvmf_filesystem 00:09:00.700 ************************************ 00:09:00.700 05:04:37 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:00.700 05:04:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:00.700 05:04:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:00.700 05:04:37 -- common/autotest_common.sh@10 -- # set +x 00:09:00.700 ************************************ 00:09:00.700 START TEST nvmf_discovery 00:09:00.700 ************************************ 00:09:00.700 05:04:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:00.700 * Looking for test storage... 
00:09:00.700 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:00.700 05:04:37 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:00.700 05:04:37 -- nvmf/common.sh@7 -- # uname -s 00:09:00.700 05:04:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:00.700 05:04:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:00.700 05:04:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:00.700 05:04:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:00.700 05:04:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:00.700 05:04:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:00.700 05:04:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:00.700 05:04:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:00.700 05:04:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:00.700 05:04:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:00.700 05:04:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:00.700 05:04:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:00.700 05:04:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:00.700 05:04:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:00.700 05:04:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:00.700 05:04:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:00.700 05:04:37 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:00.700 05:04:37 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:00.700 05:04:37 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:00.700 05:04:37 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:00.700 05:04:37 -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.700 05:04:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.700 05:04:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.700 05:04:37 -- paths/export.sh@5 -- # export PATH 00:09:00.700 05:04:37 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.700 05:04:37 -- nvmf/common.sh@47 -- # : 0 00:09:00.700 05:04:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:00.700 05:04:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:00.700 05:04:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:00.700 05:04:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:00.700 05:04:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:00.700 05:04:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:00.700 05:04:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:00.700 05:04:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:00.700 05:04:37 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:09:00.700 05:04:37 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:09:00.700 05:04:37 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:09:00.700 05:04:37 -- target/discovery.sh@15 -- # hash nvme 00:09:00.700 05:04:37 -- target/discovery.sh@20 -- # nvmftestinit 00:09:00.700 05:04:37 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:00.700 05:04:37 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:00.700 05:04:37 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:00.700 05:04:37 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:00.700 05:04:37 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:00.700 05:04:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.700 05:04:37 -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 14> /dev/null' 00:09:00.700 05:04:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.700 05:04:37 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:00.700 05:04:37 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:00.700 05:04:37 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:00.700 05:04:37 -- common/autotest_common.sh@10 -- # set +x 00:09:02.601 05:04:39 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:02.601 05:04:39 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:02.601 05:04:39 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:02.601 05:04:39 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:02.601 05:04:39 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:02.601 05:04:39 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:02.601 05:04:39 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:02.601 05:04:39 -- nvmf/common.sh@295 -- # net_devs=() 00:09:02.601 05:04:39 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:02.601 05:04:39 -- nvmf/common.sh@296 -- # e810=() 00:09:02.601 05:04:39 -- nvmf/common.sh@296 -- # local -ga e810 00:09:02.601 05:04:39 -- nvmf/common.sh@297 -- # x722=() 00:09:02.601 05:04:39 -- nvmf/common.sh@297 -- # local -ga x722 00:09:02.601 05:04:39 -- nvmf/common.sh@298 -- # mlx=() 00:09:02.601 05:04:39 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:02.601 05:04:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:02.601 05:04:39 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:02.601 05:04:39 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:02.601 05:04:39 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:02.601 05:04:39 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:02.601 05:04:39 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:02.601 05:04:39 -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:02.601 05:04:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:02.601 05:04:39 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:02.601 05:04:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:02.601 05:04:39 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:02.601 05:04:39 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:02.601 05:04:39 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:02.601 05:04:39 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:02.601 05:04:39 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:02.601 05:04:39 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:02.601 05:04:39 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:02.601 05:04:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:02.601 05:04:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:02.601 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:02.602 05:04:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:02.602 05:04:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:02.602 05:04:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:02.602 05:04:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:02.602 05:04:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:02.602 05:04:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:02.602 05:04:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:02.602 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:02.602 05:04:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:02.602 05:04:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:02.602 05:04:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:02.602 05:04:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:02.602 05:04:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:09:02.602 05:04:39 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:02.602 05:04:39 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:02.602 05:04:39 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:02.602 05:04:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:02.602 05:04:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.602 05:04:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:02.602 05:04:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.602 05:04:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:02.602 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:02.602 05:04:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.602 05:04:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:02.602 05:04:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.602 05:04:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:02.602 05:04:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.602 05:04:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:02.602 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:02.602 05:04:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.602 05:04:39 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:02.602 05:04:39 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:02.602 05:04:39 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:02.602 05:04:39 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:02.602 05:04:39 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:02.602 05:04:39 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:02.602 05:04:39 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:02.602 05:04:39 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:02.602 05:04:39 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:02.602 05:04:39 -- nvmf/common.sh@236 -- 
# NVMF_TARGET_INTERFACE=cvl_0_0 00:09:02.602 05:04:39 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:02.602 05:04:39 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:02.602 05:04:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:02.602 05:04:39 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:02.602 05:04:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:02.602 05:04:39 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:02.602 05:04:39 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:02.602 05:04:39 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:02.602 05:04:39 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:02.602 05:04:39 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:02.602 05:04:39 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:02.602 05:04:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:02.602 05:04:39 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:02.602 05:04:39 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:02.602 05:04:39 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:02.602 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:02.602 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:09:02.602 00:09:02.602 --- 10.0.0.2 ping statistics --- 00:09:02.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.602 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:09:02.602 05:04:39 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:02.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:02.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:09:02.602 00:09:02.602 --- 10.0.0.1 ping statistics --- 00:09:02.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.602 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:09:02.602 05:04:39 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:02.602 05:04:39 -- nvmf/common.sh@411 -- # return 0 00:09:02.602 05:04:39 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:02.602 05:04:39 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:02.602 05:04:39 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:02.602 05:04:39 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:02.602 05:04:39 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:02.602 05:04:39 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:02.602 05:04:39 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:02.602 05:04:39 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:09:02.602 05:04:39 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:02.602 05:04:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:02.602 05:04:39 -- common/autotest_common.sh@10 -- # set +x 00:09:02.602 05:04:39 -- nvmf/common.sh@470 -- # nvmfpid=1789794 00:09:02.602 05:04:39 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:02.602 05:04:39 -- nvmf/common.sh@471 -- # waitforlisten 1789794 00:09:02.602 05:04:39 -- common/autotest_common.sh@817 -- # '[' -z 1789794 ']' 00:09:02.602 05:04:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.602 05:04:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:02.602 05:04:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:02.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.602 05:04:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:02.602 05:04:39 -- common/autotest_common.sh@10 -- # set +x 00:09:02.602 [2024-04-24 05:04:39.733419] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:09:02.602 [2024-04-24 05:04:39.733500] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.602 EAL: No free 2048 kB hugepages reported on node 1 00:09:02.602 [2024-04-24 05:04:39.772547] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:02.602 [2024-04-24 05:04:39.799952] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:02.861 [2024-04-24 05:04:39.890412] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:02.861 [2024-04-24 05:04:39.890470] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:02.861 [2024-04-24 05:04:39.890484] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:02.861 [2024-04-24 05:04:39.890496] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:02.861 [2024-04-24 05:04:39.890506] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:02.861 [2024-04-24 05:04:39.890562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.861 [2024-04-24 05:04:39.890620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:02.861 [2024-04-24 05:04:39.890686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:02.861 [2024-04-24 05:04:39.890690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.861 05:04:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:02.861 05:04:40 -- common/autotest_common.sh@850 -- # return 0 00:09:02.861 05:04:40 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:02.861 05:04:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:02.861 05:04:40 -- common/autotest_common.sh@10 -- # set +x 00:09:02.861 05:04:40 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:02.861 05:04:40 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:02.861 05:04:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.861 05:04:40 -- common/autotest_common.sh@10 -- # set +x 00:09:02.861 [2024-04-24 05:04:40.049431] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:02.861 05:04:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.861 05:04:40 -- target/discovery.sh@26 -- # seq 1 4 00:09:02.861 05:04:40 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:02.861 05:04:40 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:09:02.861 05:04:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.861 05:04:40 -- common/autotest_common.sh@10 -- # set +x 00:09:02.861 Null1 00:09:02.861 05:04:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.861 05:04:40 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:02.861 05:04:40 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:09:02.861 05:04:40 -- common/autotest_common.sh@10 -- # set +x 00:09:02.861 05:04:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.861 05:04:40 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:09:02.861 05:04:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.861 05:04:40 -- common/autotest_common.sh@10 -- # set +x 00:09:02.861 05:04:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.861 05:04:40 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:02.861 05:04:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.861 05:04:40 -- common/autotest_common.sh@10 -- # set +x 00:09:02.861 [2024-04-24 05:04:40.089740] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:02.861 05:04:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.861 05:04:40 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:02.861 05:04:40 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:09:02.861 05:04:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.861 05:04:40 -- common/autotest_common.sh@10 -- # set +x 00:09:02.861 Null2 00:09:02.861 05:04:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.861 05:04:40 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:09:02.861 05:04:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.861 05:04:40 -- common/autotest_common.sh@10 -- # set +x 00:09:02.861 05:04:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.861 05:04:40 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:09:02.861 05:04:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.861 05:04:40 -- common/autotest_common.sh@10 -- # set +x 00:09:02.861 
05:04:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.861 05:04:40 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:02.861 05:04:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.861 05:04:40 -- common/autotest_common.sh@10 -- # set +x 00:09:02.861 05:04:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.861 05:04:40 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:02.861 05:04:40 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:09:02.861 05:04:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.861 05:04:40 -- common/autotest_common.sh@10 -- # set +x 00:09:03.118 Null3 00:09:03.118 05:04:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.118 05:04:40 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:09:03.118 05:04:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:03.118 05:04:40 -- common/autotest_common.sh@10 -- # set +x 00:09:03.118 05:04:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.118 05:04:40 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:09:03.118 05:04:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:03.118 05:04:40 -- common/autotest_common.sh@10 -- # set +x 00:09:03.118 05:04:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.118 05:04:40 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:09:03.119 05:04:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:03.119 05:04:40 -- common/autotest_common.sh@10 -- # set +x 00:09:03.119 05:04:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.119 05:04:40 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:03.119 05:04:40 -- target/discovery.sh@27 -- # rpc_cmd 
bdev_null_create Null4 102400 512 00:09:03.119 05:04:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:03.119 05:04:40 -- common/autotest_common.sh@10 -- # set +x 00:09:03.119 Null4 00:09:03.119 05:04:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.119 05:04:40 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:09:03.119 05:04:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:03.119 05:04:40 -- common/autotest_common.sh@10 -- # set +x 00:09:03.119 05:04:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.119 05:04:40 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:09:03.119 05:04:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:03.119 05:04:40 -- common/autotest_common.sh@10 -- # set +x 00:09:03.119 05:04:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.119 05:04:40 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:09:03.119 05:04:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:03.119 05:04:40 -- common/autotest_common.sh@10 -- # set +x 00:09:03.119 05:04:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.119 05:04:40 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:03.119 05:04:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:03.119 05:04:40 -- common/autotest_common.sh@10 -- # set +x 00:09:03.119 05:04:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.119 05:04:40 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:09:03.119 05:04:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:03.119 05:04:40 -- common/autotest_common.sh@10 -- # set +x 00:09:03.119 05:04:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.119 05:04:40 -- 
target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:09:03.119 00:09:03.119 Discovery Log Number of Records 6, Generation counter 6 00:09:03.119 =====Discovery Log Entry 0====== 00:09:03.119 trtype: tcp 00:09:03.119 adrfam: ipv4 00:09:03.119 subtype: current discovery subsystem 00:09:03.119 treq: not required 00:09:03.119 portid: 0 00:09:03.119 trsvcid: 4420 00:09:03.119 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:03.119 traddr: 10.0.0.2 00:09:03.119 eflags: explicit discovery connections, duplicate discovery information 00:09:03.119 sectype: none 00:09:03.119 =====Discovery Log Entry 1====== 00:09:03.119 trtype: tcp 00:09:03.119 adrfam: ipv4 00:09:03.119 subtype: nvme subsystem 00:09:03.119 treq: not required 00:09:03.119 portid: 0 00:09:03.119 trsvcid: 4420 00:09:03.119 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:03.119 traddr: 10.0.0.2 00:09:03.119 eflags: none 00:09:03.119 sectype: none 00:09:03.119 =====Discovery Log Entry 2====== 00:09:03.119 trtype: tcp 00:09:03.119 adrfam: ipv4 00:09:03.119 subtype: nvme subsystem 00:09:03.119 treq: not required 00:09:03.119 portid: 0 00:09:03.119 trsvcid: 4420 00:09:03.119 subnqn: nqn.2016-06.io.spdk:cnode2 00:09:03.119 traddr: 10.0.0.2 00:09:03.119 eflags: none 00:09:03.119 sectype: none 00:09:03.119 =====Discovery Log Entry 3====== 00:09:03.119 trtype: tcp 00:09:03.119 adrfam: ipv4 00:09:03.119 subtype: nvme subsystem 00:09:03.119 treq: not required 00:09:03.119 portid: 0 00:09:03.119 trsvcid: 4420 00:09:03.119 subnqn: nqn.2016-06.io.spdk:cnode3 00:09:03.119 traddr: 10.0.0.2 00:09:03.119 eflags: none 00:09:03.119 sectype: none 00:09:03.119 =====Discovery Log Entry 4====== 00:09:03.119 trtype: tcp 00:09:03.119 adrfam: ipv4 00:09:03.119 subtype: nvme subsystem 00:09:03.119 treq: not required 00:09:03.119 portid: 0 00:09:03.119 trsvcid: 4420 00:09:03.119 subnqn: 
nqn.2016-06.io.spdk:cnode4 00:09:03.119 traddr: 10.0.0.2 00:09:03.119 eflags: none 00:09:03.119 sectype: none 00:09:03.119 =====Discovery Log Entry 5====== 00:09:03.119 trtype: tcp 00:09:03.119 adrfam: ipv4 00:09:03.119 subtype: discovery subsystem referral 00:09:03.119 treq: not required 00:09:03.119 portid: 0 00:09:03.119 trsvcid: 4430 00:09:03.119 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:03.119 traddr: 10.0.0.2 00:09:03.119 eflags: none 00:09:03.119 sectype: none 00:09:03.119 05:04:40 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:09:03.119 Perform nvmf subsystem discovery via RPC 00:09:03.119 05:04:40 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:09:03.119 05:04:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:03.119 05:04:40 -- common/autotest_common.sh@10 -- # set +x 00:09:03.119 [2024-04-24 05:04:40.370441] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:09:03.119 [ 00:09:03.119 { 00:09:03.119 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:03.119 "subtype": "Discovery", 00:09:03.119 "listen_addresses": [ 00:09:03.119 { 00:09:03.119 "transport": "TCP", 00:09:03.119 "trtype": "TCP", 00:09:03.119 "adrfam": "IPv4", 00:09:03.119 "traddr": "10.0.0.2", 00:09:03.119 "trsvcid": "4420" 00:09:03.119 } 00:09:03.119 ], 00:09:03.119 "allow_any_host": true, 00:09:03.119 "hosts": [] 00:09:03.119 }, 00:09:03.119 { 00:09:03.119 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:03.119 "subtype": "NVMe", 00:09:03.119 "listen_addresses": [ 00:09:03.119 { 00:09:03.119 "transport": "TCP", 00:09:03.119 "trtype": "TCP", 00:09:03.119 "adrfam": "IPv4", 00:09:03.119 "traddr": "10.0.0.2", 00:09:03.119 "trsvcid": "4420" 00:09:03.119 } 00:09:03.119 ], 00:09:03.119 "allow_any_host": true, 00:09:03.119 "hosts": [], 00:09:03.119 "serial_number": "SPDK00000000000001", 00:09:03.119 "model_number": 
"SPDK bdev Controller", 00:09:03.119 "max_namespaces": 32, 00:09:03.119 "min_cntlid": 1, 00:09:03.119 "max_cntlid": 65519, 00:09:03.119 "namespaces": [ 00:09:03.119 { 00:09:03.119 "nsid": 1, 00:09:03.119 "bdev_name": "Null1", 00:09:03.119 "name": "Null1", 00:09:03.119 "nguid": "9D9E1F91DE684EA1A252C484331366DE", 00:09:03.119 "uuid": "9d9e1f91-de68-4ea1-a252-c484331366de" 00:09:03.119 } 00:09:03.119 ] 00:09:03.119 }, 00:09:03.119 { 00:09:03.119 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:03.119 "subtype": "NVMe", 00:09:03.119 "listen_addresses": [ 00:09:03.119 { 00:09:03.119 "transport": "TCP", 00:09:03.119 "trtype": "TCP", 00:09:03.119 "adrfam": "IPv4", 00:09:03.119 "traddr": "10.0.0.2", 00:09:03.119 "trsvcid": "4420" 00:09:03.119 } 00:09:03.119 ], 00:09:03.119 "allow_any_host": true, 00:09:03.119 "hosts": [], 00:09:03.119 "serial_number": "SPDK00000000000002", 00:09:03.119 "model_number": "SPDK bdev Controller", 00:09:03.119 "max_namespaces": 32, 00:09:03.119 "min_cntlid": 1, 00:09:03.119 "max_cntlid": 65519, 00:09:03.119 "namespaces": [ 00:09:03.119 { 00:09:03.119 "nsid": 1, 00:09:03.119 "bdev_name": "Null2", 00:09:03.119 "name": "Null2", 00:09:03.119 "nguid": "4C35F9745813430E972D352A34C2BFC5", 00:09:03.119 "uuid": "4c35f974-5813-430e-972d-352a34c2bfc5" 00:09:03.119 } 00:09:03.119 ] 00:09:03.119 }, 00:09:03.119 { 00:09:03.119 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:09:03.119 "subtype": "NVMe", 00:09:03.119 "listen_addresses": [ 00:09:03.119 { 00:09:03.119 "transport": "TCP", 00:09:03.119 "trtype": "TCP", 00:09:03.119 "adrfam": "IPv4", 00:09:03.119 "traddr": "10.0.0.2", 00:09:03.119 "trsvcid": "4420" 00:09:03.119 } 00:09:03.119 ], 00:09:03.119 "allow_any_host": true, 00:09:03.119 "hosts": [], 00:09:03.119 "serial_number": "SPDK00000000000003", 00:09:03.119 "model_number": "SPDK bdev Controller", 00:09:03.119 "max_namespaces": 32, 00:09:03.119 "min_cntlid": 1, 00:09:03.119 "max_cntlid": 65519, 00:09:03.119 "namespaces": [ 00:09:03.119 { 00:09:03.119 "nsid": 1, 
00:09:03.119 "bdev_name": "Null3", 00:09:03.119 "name": "Null3", 00:09:03.119 "nguid": "68D77295F5704B658D3DAB321EFF917B", 00:09:03.119 "uuid": "68d77295-f570-4b65-8d3d-ab321eff917b" 00:09:03.119 } 00:09:03.119 ] 00:09:03.119 }, 00:09:03.119 { 00:09:03.119 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:09:03.119 "subtype": "NVMe", 00:09:03.119 "listen_addresses": [ 00:09:03.119 { 00:09:03.119 "transport": "TCP", 00:09:03.119 "trtype": "TCP", 00:09:03.119 "adrfam": "IPv4", 00:09:03.119 "traddr": "10.0.0.2", 00:09:03.119 "trsvcid": "4420" 00:09:03.119 } 00:09:03.119 ], 00:09:03.119 "allow_any_host": true, 00:09:03.119 "hosts": [], 00:09:03.119 "serial_number": "SPDK00000000000004", 00:09:03.119 "model_number": "SPDK bdev Controller", 00:09:03.119 "max_namespaces": 32, 00:09:03.119 "min_cntlid": 1, 00:09:03.119 "max_cntlid": 65519, 00:09:03.119 "namespaces": [ 00:09:03.119 { 00:09:03.119 "nsid": 1, 00:09:03.119 "bdev_name": "Null4", 00:09:03.119 "name": "Null4", 00:09:03.119 "nguid": "52F5C1D9FA5C4509BF903D88B89725CF", 00:09:03.119 "uuid": "52f5c1d9-fa5c-4509-bf90-3d88b89725cf" 00:09:03.119 } 00:09:03.119 ] 00:09:03.119 } 00:09:03.119 ] 00:09:03.119 05:04:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.119 05:04:40 -- target/discovery.sh@42 -- # seq 1 4 00:09:03.119 05:04:40 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:03.119 05:04:40 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:03.119 05:04:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:03.119 05:04:40 -- common/autotest_common.sh@10 -- # set +x 00:09:03.378 05:04:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.378 05:04:40 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:09:03.378 05:04:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:03.378 05:04:40 -- common/autotest_common.sh@10 -- # set +x 00:09:03.378 05:04:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.378 05:04:40 -- 
target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:03.378 05:04:40 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:09:03.378 05:04:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:03.378 05:04:40 -- common/autotest_common.sh@10 -- # set +x 00:09:03.378 05:04:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.378 05:04:40 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:09:03.378 05:04:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:03.378 05:04:40 -- common/autotest_common.sh@10 -- # set +x 00:09:03.378 05:04:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.378 05:04:40 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:03.378 05:04:40 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:09:03.378 05:04:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:03.378 05:04:40 -- common/autotest_common.sh@10 -- # set +x 00:09:03.378 05:04:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.378 05:04:40 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:09:03.378 05:04:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:03.378 05:04:40 -- common/autotest_common.sh@10 -- # set +x 00:09:03.378 05:04:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.378 05:04:40 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:03.378 05:04:40 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:09:03.378 05:04:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:03.378 05:04:40 -- common/autotest_common.sh@10 -- # set +x 00:09:03.378 05:04:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.378 05:04:40 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:09:03.378 05:04:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:03.378 05:04:40 -- common/autotest_common.sh@10 -- # set +x 00:09:03.378 
05:04:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.378 05:04:40 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:09:03.378 05:04:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:03.378 05:04:40 -- common/autotest_common.sh@10 -- # set +x 00:09:03.378 05:04:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.378 05:04:40 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:09:03.378 05:04:40 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:09:03.378 05:04:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:03.378 05:04:40 -- common/autotest_common.sh@10 -- # set +x 00:09:03.378 05:04:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.378 05:04:40 -- target/discovery.sh@49 -- # check_bdevs= 00:09:03.378 05:04:40 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:09:03.378 05:04:40 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:09:03.378 05:04:40 -- target/discovery.sh@57 -- # nvmftestfini 00:09:03.378 05:04:40 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:03.378 05:04:40 -- nvmf/common.sh@117 -- # sync 00:09:03.378 05:04:40 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:03.378 05:04:40 -- nvmf/common.sh@120 -- # set +e 00:09:03.378 05:04:40 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:03.378 05:04:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:03.378 rmmod nvme_tcp 00:09:03.378 rmmod nvme_fabrics 00:09:03.378 rmmod nvme_keyring 00:09:03.378 05:04:40 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:03.378 05:04:40 -- nvmf/common.sh@124 -- # set -e 00:09:03.378 05:04:40 -- nvmf/common.sh@125 -- # return 0 00:09:03.378 05:04:40 -- nvmf/common.sh@478 -- # '[' -n 1789794 ']' 00:09:03.378 05:04:40 -- nvmf/common.sh@479 -- # killprocess 1789794 00:09:03.378 05:04:40 -- common/autotest_common.sh@936 -- # '[' -z 1789794 ']' 00:09:03.378 05:04:40 -- common/autotest_common.sh@940 -- # kill -0 1789794 00:09:03.378 
05:04:40 -- common/autotest_common.sh@941 -- # uname 00:09:03.378 05:04:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:03.378 05:04:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1789794 00:09:03.378 05:04:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:03.378 05:04:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:03.378 05:04:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1789794' 00:09:03.378 killing process with pid 1789794 00:09:03.378 05:04:40 -- common/autotest_common.sh@955 -- # kill 1789794 00:09:03.378 [2024-04-24 05:04:40.584368] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:09:03.378 05:04:40 -- common/autotest_common.sh@960 -- # wait 1789794 00:09:03.637 05:04:40 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:03.637 05:04:40 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:03.637 05:04:40 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:03.637 05:04:40 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:03.637 05:04:40 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:03.637 05:04:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.637 05:04:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:03.637 05:04:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.175 05:04:42 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:06.175 00:09:06.175 real 0m5.350s 00:09:06.175 user 0m4.402s 00:09:06.175 sys 0m1.796s 00:09:06.175 05:04:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:06.175 05:04:42 -- common/autotest_common.sh@10 -- # set +x 00:09:06.175 ************************************ 00:09:06.175 END TEST nvmf_discovery 00:09:06.175 ************************************ 00:09:06.175 05:04:42 -- 
nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:06.175 05:04:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:06.175 05:04:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:06.175 05:04:42 -- common/autotest_common.sh@10 -- # set +x 00:09:06.175 ************************************ 00:09:06.175 START TEST nvmf_referrals 00:09:06.175 ************************************ 00:09:06.175 05:04:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:06.175 * Looking for test storage... 00:09:06.175 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:06.175 05:04:43 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:06.175 05:04:43 -- nvmf/common.sh@7 -- # uname -s 00:09:06.175 05:04:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:06.175 05:04:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:06.175 05:04:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:06.175 05:04:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:06.175 05:04:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:06.175 05:04:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:06.175 05:04:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:06.175 05:04:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:06.175 05:04:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:06.175 05:04:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:06.175 05:04:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:06.175 05:04:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:06.175 05:04:43 -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:06.175 05:04:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:06.175 05:04:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:06.175 05:04:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:06.175 05:04:43 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:06.175 05:04:43 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:06.175 05:04:43 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:06.175 05:04:43 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:06.175 05:04:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.176 05:04:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.176 05:04:43 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.176 05:04:43 -- paths/export.sh@5 -- # export PATH 00:09:06.176 05:04:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.176 05:04:43 -- nvmf/common.sh@47 -- # : 0 00:09:06.176 05:04:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:06.176 05:04:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:06.176 05:04:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:06.176 05:04:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:06.176 05:04:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:06.176 05:04:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:06.176 05:04:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:06.176 05:04:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:06.176 05:04:43 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:09:06.176 05:04:43 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:09:06.176 05:04:43 -- 
target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:09:06.176 05:04:43 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:09:06.176 05:04:43 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:09:06.176 05:04:43 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:09:06.176 05:04:43 -- target/referrals.sh@37 -- # nvmftestinit 00:09:06.176 05:04:43 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:06.176 05:04:43 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:06.176 05:04:43 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:06.176 05:04:43 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:06.176 05:04:43 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:06.176 05:04:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.176 05:04:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:06.176 05:04:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.176 05:04:43 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:06.176 05:04:43 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:06.176 05:04:43 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:06.176 05:04:43 -- common/autotest_common.sh@10 -- # set +x 00:09:08.080 05:04:45 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:08.080 05:04:45 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:08.080 05:04:45 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:08.080 05:04:45 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:08.080 05:04:45 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:08.080 05:04:45 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:08.080 05:04:45 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:08.080 05:04:45 -- nvmf/common.sh@295 -- # net_devs=() 00:09:08.080 05:04:45 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:08.080 05:04:45 -- nvmf/common.sh@296 -- # e810=() 00:09:08.080 05:04:45 -- nvmf/common.sh@296 -- # local 
-ga e810 00:09:08.080 05:04:45 -- nvmf/common.sh@297 -- # x722=() 00:09:08.080 05:04:45 -- nvmf/common.sh@297 -- # local -ga x722 00:09:08.080 05:04:45 -- nvmf/common.sh@298 -- # mlx=() 00:09:08.080 05:04:45 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:08.080 05:04:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:08.080 05:04:45 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:08.080 05:04:45 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:08.080 05:04:45 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:08.080 05:04:45 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:08.080 05:04:45 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:08.080 05:04:45 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:08.080 05:04:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:08.080 05:04:45 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:08.080 05:04:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:08.080 05:04:45 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:08.080 05:04:45 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:08.080 05:04:45 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:08.080 05:04:45 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:08.080 05:04:45 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:08.080 05:04:45 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:08.080 05:04:45 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:08.080 05:04:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:08.080 05:04:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:08.080 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:08.080 05:04:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:08.080 05:04:45 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:08.080 05:04:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.080 05:04:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.080 05:04:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:08.080 05:04:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:08.080 05:04:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:08.080 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:08.080 05:04:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:08.080 05:04:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:08.080 05:04:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.080 05:04:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.080 05:04:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:08.080 05:04:45 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:08.080 05:04:45 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:08.080 05:04:45 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:08.080 05:04:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:08.080 05:04:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.080 05:04:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:08.080 05:04:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.080 05:04:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:08.080 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:08.080 05:04:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.080 05:04:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:08.080 05:04:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.080 05:04:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:08.080 05:04:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.080 05:04:45 -- nvmf/common.sh@389 -- # echo 
'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:08.080 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:08.080 05:04:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.080 05:04:45 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:08.080 05:04:45 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:08.080 05:04:45 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:08.080 05:04:45 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:08.080 05:04:45 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:08.080 05:04:45 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:08.080 05:04:45 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:08.080 05:04:45 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:08.080 05:04:45 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:08.080 05:04:45 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:08.080 05:04:45 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:08.080 05:04:45 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:08.080 05:04:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:08.080 05:04:45 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:08.080 05:04:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:08.080 05:04:45 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:08.080 05:04:45 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:08.080 05:04:45 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:08.080 05:04:45 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:08.080 05:04:45 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:08.080 05:04:45 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:08.080 05:04:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:08.080 05:04:45 -- nvmf/common.sh@261 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:09:08.080 05:04:45 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:08.080 05:04:45 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:08.080 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:08.080 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:09:08.080 00:09:08.080 --- 10.0.0.2 ping statistics --- 00:09:08.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.080 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:09:08.080 05:04:45 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:08.080 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:08.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:09:08.080 00:09:08.080 --- 10.0.0.1 ping statistics --- 00:09:08.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.080 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:09:08.080 05:04:45 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:08.080 05:04:45 -- nvmf/common.sh@411 -- # return 0 00:09:08.080 05:04:45 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:08.080 05:04:45 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:08.080 05:04:45 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:08.080 05:04:45 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:08.080 05:04:45 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:08.081 05:04:45 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:08.081 05:04:45 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:08.081 05:04:45 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:09:08.081 05:04:45 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:08.081 05:04:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:08.081 05:04:45 -- common/autotest_common.sh@10 -- # set +x 00:09:08.081 05:04:45 -- nvmf/common.sh@470 -- # nvmfpid=1791891 00:09:08.081 05:04:45 
-- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:08.081 05:04:45 -- nvmf/common.sh@471 -- # waitforlisten 1791891 00:09:08.081 05:04:45 -- common/autotest_common.sh@817 -- # '[' -z 1791891 ']' 00:09:08.081 05:04:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.081 05:04:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:08.081 05:04:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.081 05:04:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:08.081 05:04:45 -- common/autotest_common.sh@10 -- # set +x 00:09:08.081 [2024-04-24 05:04:45.213455] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:09:08.081 [2024-04-24 05:04:45.213543] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:08.081 EAL: No free 2048 kB hugepages reported on node 1 00:09:08.081 [2024-04-24 05:04:45.251042] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:08.081 [2024-04-24 05:04:45.278706] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:08.339 [2024-04-24 05:04:45.368609] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:08.339 [2024-04-24 05:04:45.368679] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:08.339 [2024-04-24 05:04:45.368706] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:08.339 [2024-04-24 05:04:45.368720] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:08.339 [2024-04-24 05:04:45.368732] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:08.339 [2024-04-24 05:04:45.368798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:08.339 [2024-04-24 05:04:45.368851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:08.339 [2024-04-24 05:04:45.368964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:08.339 [2024-04-24 05:04:45.368966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.339 05:04:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:08.339 05:04:45 -- common/autotest_common.sh@850 -- # return 0 00:09:08.339 05:04:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:08.339 05:04:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:08.339 05:04:45 -- common/autotest_common.sh@10 -- # set +x 00:09:08.339 05:04:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:08.339 05:04:45 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:08.339 05:04:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:08.339 05:04:45 -- common/autotest_common.sh@10 -- # set +x 00:09:08.339 [2024-04-24 05:04:45.527528] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:08.339 05:04:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:08.339 05:04:45 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:09:08.339 05:04:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:08.339 05:04:45 -- common/autotest_common.sh@10 -- # set 
+x 00:09:08.339 [2024-04-24 05:04:45.539799] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:09:08.339 05:04:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:08.339 05:04:45 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:09:08.339 05:04:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:08.339 05:04:45 -- common/autotest_common.sh@10 -- # set +x 00:09:08.339 05:04:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:08.339 05:04:45 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:09:08.339 05:04:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:08.339 05:04:45 -- common/autotest_common.sh@10 -- # set +x 00:09:08.339 05:04:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:08.339 05:04:45 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:09:08.339 05:04:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:08.339 05:04:45 -- common/autotest_common.sh@10 -- # set +x 00:09:08.339 05:04:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:08.339 05:04:45 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:08.339 05:04:45 -- target/referrals.sh@48 -- # jq length 00:09:08.339 05:04:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:08.339 05:04:45 -- common/autotest_common.sh@10 -- # set +x 00:09:08.339 05:04:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:08.339 05:04:45 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:09:08.604 05:04:45 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:08.604 05:04:45 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:08.604 05:04:45 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:08.604 05:04:45 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:08.604 05:04:45 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:09:08.604 05:04:45 -- target/referrals.sh@21 -- # sort 00:09:08.604 05:04:45 -- common/autotest_common.sh@10 -- # set +x 00:09:08.604 05:04:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:08.604 05:04:45 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:08.604 05:04:45 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:08.604 05:04:45 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:08.604 05:04:45 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:08.604 05:04:45 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:08.604 05:04:45 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:08.604 05:04:45 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:08.604 05:04:45 -- target/referrals.sh@26 -- # sort 00:09:08.604 05:04:45 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:08.604 05:04:45 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:08.604 05:04:45 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:09:08.604 05:04:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:08.604 05:04:45 -- common/autotest_common.sh@10 -- # set +x 00:09:08.604 05:04:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:08.604 05:04:45 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:09:08.604 05:04:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:08.604 05:04:45 -- common/autotest_common.sh@10 -- # set +x 00:09:08.604 05:04:45 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:08.604 05:04:45 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:09:08.604 05:04:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:08.604 05:04:45 -- common/autotest_common.sh@10 -- # set +x 00:09:08.604 05:04:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:08.604 05:04:45 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:08.604 05:04:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:08.604 05:04:45 -- target/referrals.sh@56 -- # jq length 00:09:08.604 05:04:45 -- common/autotest_common.sh@10 -- # set +x 00:09:08.604 05:04:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:08.604 05:04:45 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:08.604 05:04:45 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:08.604 05:04:45 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:08.604 05:04:45 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:08.604 05:04:45 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:08.604 05:04:45 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:08.604 05:04:45 -- target/referrals.sh@26 -- # sort 00:09:08.863 05:04:45 -- target/referrals.sh@26 -- # echo 00:09:08.863 05:04:45 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:08.863 05:04:45 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:09:08.863 05:04:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:08.863 05:04:45 -- common/autotest_common.sh@10 -- # set +x 00:09:08.863 05:04:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:08.863 05:04:45 -- target/referrals.sh@62 -- # rpc_cmd 
nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:08.863 05:04:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:08.863 05:04:45 -- common/autotest_common.sh@10 -- # set +x 00:09:08.863 05:04:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:08.863 05:04:45 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:08.863 05:04:45 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:08.863 05:04:45 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:08.863 05:04:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:08.863 05:04:45 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:08.863 05:04:45 -- common/autotest_common.sh@10 -- # set +x 00:09:08.863 05:04:45 -- target/referrals.sh@21 -- # sort 00:09:08.863 05:04:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:08.863 05:04:46 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:08.863 05:04:46 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:08.863 05:04:46 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:09:08.863 05:04:46 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:08.863 05:04:46 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:08.863 05:04:46 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:08.863 05:04:46 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:08.863 05:04:46 -- target/referrals.sh@26 -- # sort 00:09:08.863 05:04:46 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:08.863 05:04:46 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:08.863 05:04:46 -- target/referrals.sh@67 -- # 
get_discovery_entries 'nvme subsystem' 00:09:08.863 05:04:46 -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:08.863 05:04:46 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:08.863 05:04:46 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:08.863 05:04:46 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:09.121 05:04:46 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:09.121 05:04:46 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:09.121 05:04:46 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:09.121 05:04:46 -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:09.121 05:04:46 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:09.121 05:04:46 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:09.121 05:04:46 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:09.121 05:04:46 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:09.121 05:04:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:09.121 05:04:46 -- common/autotest_common.sh@10 -- # set +x 00:09:09.121 05:04:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:09.121 05:04:46 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:09.121 05:04:46 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:09.121 05:04:46 -- 
target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:09.121 05:04:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:09.121 05:04:46 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:09.121 05:04:46 -- common/autotest_common.sh@10 -- # set +x 00:09:09.121 05:04:46 -- target/referrals.sh@21 -- # sort 00:09:09.121 05:04:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:09.378 05:04:46 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:09.378 05:04:46 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:09.378 05:04:46 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:09.378 05:04:46 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:09.378 05:04:46 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:09.378 05:04:46 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:09.378 05:04:46 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:09.378 05:04:46 -- target/referrals.sh@26 -- # sort 00:09:09.379 05:04:46 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:09.379 05:04:46 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:09.379 05:04:46 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:09:09.379 05:04:46 -- target/referrals.sh@75 -- # jq -r .subnqn 00:09:09.379 05:04:46 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:09.379 05:04:46 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:09.379 05:04:46 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:09.638 05:04:46 -- 
target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:09.638 05:04:46 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:09.638 05:04:46 -- target/referrals.sh@76 -- # jq -r .subnqn 00:09:09.638 05:04:46 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:09.638 05:04:46 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:09.638 05:04:46 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:09.638 05:04:46 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:09.638 05:04:46 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:09.638 05:04:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:09.638 05:04:46 -- common/autotest_common.sh@10 -- # set +x 00:09:09.638 05:04:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:09.638 05:04:46 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:09.638 05:04:46 -- target/referrals.sh@82 -- # jq length 00:09:09.638 05:04:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:09.638 05:04:46 -- common/autotest_common.sh@10 -- # set +x 00:09:09.638 05:04:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:09.638 05:04:46 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:09:09.638 05:04:46 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:09:09.638 05:04:46 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:09.638 05:04:46 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:09.638 05:04:46 -- target/referrals.sh@26 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:09.638 05:04:46 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:09.638 05:04:46 -- target/referrals.sh@26 -- # sort 00:09:09.904 05:04:46 -- target/referrals.sh@26 -- # echo 00:09:09.904 05:04:46 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:09.904 05:04:46 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:09.904 05:04:46 -- target/referrals.sh@86 -- # nvmftestfini 00:09:09.904 05:04:46 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:09.904 05:04:46 -- nvmf/common.sh@117 -- # sync 00:09:09.904 05:04:46 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:09.904 05:04:46 -- nvmf/common.sh@120 -- # set +e 00:09:09.904 05:04:46 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:09.904 05:04:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:09.904 rmmod nvme_tcp 00:09:09.904 rmmod nvme_fabrics 00:09:09.904 rmmod nvme_keyring 00:09:09.904 05:04:47 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:09.904 05:04:47 -- nvmf/common.sh@124 -- # set -e 00:09:09.904 05:04:47 -- nvmf/common.sh@125 -- # return 0 00:09:09.904 05:04:47 -- nvmf/common.sh@478 -- # '[' -n 1791891 ']' 00:09:09.904 05:04:47 -- nvmf/common.sh@479 -- # killprocess 1791891 00:09:09.904 05:04:47 -- common/autotest_common.sh@936 -- # '[' -z 1791891 ']' 00:09:09.904 05:04:47 -- common/autotest_common.sh@940 -- # kill -0 1791891 00:09:09.904 05:04:47 -- common/autotest_common.sh@941 -- # uname 00:09:09.904 05:04:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:09.904 05:04:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1791891 00:09:09.904 05:04:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:09.904 05:04:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:09.904 
05:04:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1791891' 00:09:09.904 killing process with pid 1791891 00:09:09.904 05:04:47 -- common/autotest_common.sh@955 -- # kill 1791891 00:09:09.904 05:04:47 -- common/autotest_common.sh@960 -- # wait 1791891 00:09:10.163 05:04:47 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:10.163 05:04:47 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:10.163 05:04:47 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:10.163 05:04:47 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:10.163 05:04:47 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:10.163 05:04:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.163 05:04:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:10.163 05:04:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.694 05:04:49 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:12.694 00:09:12.694 real 0m6.365s 00:09:12.694 user 0m8.947s 00:09:12.694 sys 0m2.124s 00:09:12.694 05:04:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:12.694 05:04:49 -- common/autotest_common.sh@10 -- # set +x 00:09:12.694 ************************************ 00:09:12.694 END TEST nvmf_referrals 00:09:12.694 ************************************ 00:09:12.694 05:04:49 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:12.694 05:04:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:12.694 05:04:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:12.694 05:04:49 -- common/autotest_common.sh@10 -- # set +x 00:09:12.694 ************************************ 00:09:12.694 START TEST nvmf_connect_disconnect 00:09:12.694 ************************************ 00:09:12.694 05:04:49 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:12.694 * Looking for test storage... 00:09:12.694 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:12.694 05:04:49 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:12.694 05:04:49 -- nvmf/common.sh@7 -- # uname -s 00:09:12.694 05:04:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:12.694 05:04:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:12.694 05:04:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:12.694 05:04:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:12.694 05:04:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:12.694 05:04:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:12.694 05:04:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:12.694 05:04:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:12.694 05:04:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:12.694 05:04:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:12.694 05:04:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:12.694 05:04:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:12.694 05:04:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:12.694 05:04:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:12.694 05:04:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:12.694 05:04:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:12.695 05:04:49 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:12.695 05:04:49 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:12.695 05:04:49 -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:12.695 05:04:49 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:12.695 05:04:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.695 05:04:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.695 05:04:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.695 05:04:49 -- paths/export.sh@5 -- # export PATH 00:09:12.695 05:04:49 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.695 05:04:49 -- nvmf/common.sh@47 -- # : 0 00:09:12.695 05:04:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:12.695 05:04:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:12.695 05:04:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:12.695 05:04:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:12.695 05:04:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:12.695 05:04:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:12.695 05:04:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:12.695 05:04:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:12.695 05:04:49 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:12.695 05:04:49 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:12.695 05:04:49 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:12.695 05:04:49 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:12.695 05:04:49 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:12.695 05:04:49 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:12.695 05:04:49 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:12.695 05:04:49 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:12.695 05:04:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.695 05:04:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:12.695 05:04:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
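The paths/export.sh steps traced above keep prepending the same tool directories on every source, which is why the logged PATH repeats /opt/go, /opt/protoc and /opt/golangci several times. A minimal sketch of order-preserving PATH deduplication, the cleanup such a harness could apply; the function name `dedupe_path` is made up for illustration and is not part of the SPDK scripts:

```shell
# Hedged sketch: collapse duplicate entries in a colon-separated PATH
# while keeping the first occurrence's position. POSIX sh only.
dedupe_path() {
  out='' ; seen=''
  oldIFS=$IFS; IFS=':'
  for dir in $1; do
    case ":$seen:" in
      *":$dir:"*) ;;                      # already kept earlier, drop this copy
      *) seen="$seen:$dir"
         if [ -n "$out" ]; then out="$out:$dir"; else out="$dir"; fi ;;
    esac
  done
  IFS=$oldIFS
  echo "$out"
}

dedupe_path "/a:/b:/a:/c:/b"    # prints /a:/b:/c
```

The traced scripts do not actually deduplicate, they simply export the grown PATH each time; the sketch only shows what removing the visible repetition would look like.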
00:09:12.695 05:04:49 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:12.695 05:04:49 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:12.695 05:04:49 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:12.695 05:04:49 -- common/autotest_common.sh@10 -- # set +x 00:09:14.597 05:04:51 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:14.597 05:04:51 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:14.597 05:04:51 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:14.597 05:04:51 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:14.597 05:04:51 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:14.597 05:04:51 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:14.597 05:04:51 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:14.597 05:04:51 -- nvmf/common.sh@295 -- # net_devs=() 00:09:14.597 05:04:51 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:14.597 05:04:51 -- nvmf/common.sh@296 -- # e810=() 00:09:14.597 05:04:51 -- nvmf/common.sh@296 -- # local -ga e810 00:09:14.597 05:04:51 -- nvmf/common.sh@297 -- # x722=() 00:09:14.598 05:04:51 -- nvmf/common.sh@297 -- # local -ga x722 00:09:14.598 05:04:51 -- nvmf/common.sh@298 -- # mlx=() 00:09:14.598 05:04:51 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:14.598 05:04:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:14.598 05:04:51 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:14.598 05:04:51 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:14.598 05:04:51 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:14.598 05:04:51 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:14.598 05:04:51 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:14.598 05:04:51 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:14.598 05:04:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
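The pci_bus_cache lookups traced above bucket NICs into the e810, x722 and mlx families by PCI vendor and device ID before picking a target interface. A hedged standalone sketch of that same mapping; `classify_nic` is a hypothetical name, and the IDs are the ones that appear in the trace (0x8086/0x159b is the "Found 0000:0a:00.0" device below):

```shell
# Hedged sketch: map a PCI vendor:device pair to the NIC family the
# same way nvmf/common.sh builds its e810/x722/mlx arrays above.
classify_nic() {
  case "$1:$2" in
    0x8086:0x1592|0x8086:0x159b) echo e810 ;;     # Intel E810 family
    0x8086:0x37d2)               echo x722 ;;     # Intel X722
    0x15b3:*)                    echo mlx  ;;     # Mellanox devices
    *)                           echo unknown ;;
  esac
}

classify_nic 0x8086 0x159b    # prints e810
```

The real script keys a cache of scanned PCI buses rather than calling a function per device, but the ID-to-family decision is the same.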
00:09:14.598 05:04:51 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:14.598 05:04:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:14.598 05:04:51 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:14.598 05:04:51 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:14.598 05:04:51 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:14.598 05:04:51 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:14.598 05:04:51 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:14.598 05:04:51 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:14.598 05:04:51 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:14.598 05:04:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:14.598 05:04:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:14.598 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:14.598 05:04:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:14.598 05:04:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:14.598 05:04:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:14.598 05:04:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:14.598 05:04:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:14.598 05:04:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:14.598 05:04:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:14.598 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:14.598 05:04:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:14.598 05:04:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:14.598 05:04:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:14.598 05:04:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:14.598 05:04:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:14.598 05:04:51 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:14.598 05:04:51 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:14.598 
05:04:51 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:14.598 05:04:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:14.598 05:04:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.598 05:04:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:14.598 05:04:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.598 05:04:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:14.598 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:14.598 05:04:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.598 05:04:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:14.598 05:04:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.598 05:04:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:14.598 05:04:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.598 05:04:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:14.598 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:14.598 05:04:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.598 05:04:51 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:14.598 05:04:51 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:14.598 05:04:51 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:14.598 05:04:51 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:14.598 05:04:51 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:14.598 05:04:51 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:14.598 05:04:51 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:14.598 05:04:51 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:14.598 05:04:51 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:14.598 05:04:51 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:14.598 05:04:51 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:14.598 05:04:51 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:14.598 05:04:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:14.598 05:04:51 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:14.598 05:04:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:14.598 05:04:51 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:14.598 05:04:51 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:14.598 05:04:51 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:14.598 05:04:51 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:14.598 05:04:51 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:14.598 05:04:51 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:14.598 05:04:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:14.598 05:04:51 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:14.598 05:04:51 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:14.598 05:04:51 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:14.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:14.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:09:14.598 00:09:14.598 --- 10.0.0.2 ping statistics --- 00:09:14.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.598 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:09:14.598 05:04:51 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:14.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:14.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:09:14.598 00:09:14.598 --- 10.0.0.1 ping statistics --- 00:09:14.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.598 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:09:14.598 05:04:51 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:14.598 05:04:51 -- nvmf/common.sh@411 -- # return 0 00:09:14.598 05:04:51 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:14.598 05:04:51 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:14.598 05:04:51 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:14.598 05:04:51 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:14.598 05:04:51 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:14.598 05:04:51 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:14.598 05:04:51 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:14.598 05:04:51 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:14.598 05:04:51 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:14.598 05:04:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:14.598 05:04:51 -- common/autotest_common.sh@10 -- # set +x 00:09:14.598 05:04:51 -- nvmf/common.sh@470 -- # nvmfpid=1794193 00:09:14.598 05:04:51 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:14.598 05:04:51 -- nvmf/common.sh@471 -- # waitforlisten 1794193 00:09:14.598 05:04:51 -- common/autotest_common.sh@817 -- # '[' -z 1794193 ']' 00:09:14.598 05:04:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.598 05:04:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:14.598 05:04:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:14.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.598 05:04:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:14.598 05:04:51 -- common/autotest_common.sh@10 -- # set +x 00:09:14.598 [2024-04-24 05:04:51.761592] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:09:14.598 [2024-04-24 05:04:51.761681] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:14.598 EAL: No free 2048 kB hugepages reported on node 1 00:09:14.598 [2024-04-24 05:04:51.797763] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:14.598 [2024-04-24 05:04:51.828306] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:14.857 [2024-04-24 05:04:51.918367] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:14.857 [2024-04-24 05:04:51.918434] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:14.857 [2024-04-24 05:04:51.918451] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:14.857 [2024-04-24 05:04:51.918465] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:14.857 [2024-04-24 05:04:51.918478] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
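nvmfappstart launches nvmf_tgt inside the namespace and then blocks in waitforlisten until the app accepts RPCs on /var/tmp/spdk.sock. The wait is just a bounded poll; a generic sketch of that pattern (the helper name and the file-existence probe are illustrative — the harness's waitforlisten also checks that the pid is still alive):

```shell
#!/usr/bin/env bash
# Bounded poll: wait for a path (e.g. the RPC unix socket) to appear.
wait_for_path() {
    # wait_for_path <path> [timeout_s]: poll every 100 ms, fail past the deadline.
    local path=$1 deadline=$(( ${2:-10} * 10 )) waited=0
    until [ -e "$path" ]; do
        if [ "$waited" -ge "$deadline" ]; then
            echo "timeout waiting for $path" >&2
            return 1
        fi
        sleep 0.1
        waited=$((waited + 1))
    done
}

# Simulate the target coming up a moment after we start waiting.
tmp=$(mktemp -d)
( sleep 0.3; : > "$tmp/spdk.sock" ) &
wait_for_path "$tmp/spdk.sock" 5 && echo "rpc socket ready"
```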
00:09:14.857 [2024-04-24 05:04:51.918564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.857 [2024-04-24 05:04:51.918618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:14.857 [2024-04-24 05:04:51.918736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:14.857 [2024-04-24 05:04:51.918743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.857 05:04:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:14.857 05:04:52 -- common/autotest_common.sh@850 -- # return 0 00:09:14.857 05:04:52 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:14.857 05:04:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:14.857 05:04:52 -- common/autotest_common.sh@10 -- # set +x 00:09:14.857 05:04:52 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:14.857 05:04:52 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:14.857 05:04:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:14.857 05:04:52 -- common/autotest_common.sh@10 -- # set +x 00:09:14.857 [2024-04-24 05:04:52.066421] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:14.857 05:04:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:14.857 05:04:52 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:14.857 05:04:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:14.857 05:04:52 -- common/autotest_common.sh@10 -- # set +x 00:09:14.857 05:04:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:14.857 05:04:52 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:14.858 05:04:52 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:14.858 05:04:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:14.858 05:04:52 -- 
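The provisioning the harness performs over RPC — a TCP transport, a 64 MB malloc bdev (Malloc0), a subsystem, its namespace, and a listener on 10.0.0.2:4420 — written out as plain calls. rpc_cmd in the harness wraps SPDK's scripts/rpc.py; `RPC=echo` below keeps this a dry run, so point RPC at rpc.py against a live target to run it for real:

```shell
#!/usr/bin/env bash
# The five RPC calls driven by connect_disconnect.sh, as seen in the log above.
RPC=${RPC:-echo}

$RPC nvmf_create_transport -t tcp -o -u 8192 -c 0
$RPC bdev_malloc_create 64 512    # 64 MB bdev, 512-byte blocks; the harness names it Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```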
common/autotest_common.sh@10 -- # set +x 00:09:14.858 05:04:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:14.858 05:04:52 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:14.858 05:04:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:14.858 05:04:52 -- common/autotest_common.sh@10 -- # set +x 00:09:14.858 05:04:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:14.858 05:04:52 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:14.858 05:04:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:14.858 05:04:52 -- common/autotest_common.sh@10 -- # set +x 00:09:14.858 [2024-04-24 05:04:52.127957] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:15.116 05:04:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:15.116 05:04:52 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:09:15.116 05:04:52 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:09:15.116 05:04:52 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:09:15.116 05:04:52 -- target/connect_disconnect.sh@34 -- # set +x 00:09:17.691 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.589 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.115 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.012 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.535 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.058 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.585 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.484 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.010 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.908 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) [... the 'NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)' line repeats once per pass of the 100-iteration connect/disconnect loop; timestamps run 00:09:40.436 through 00:12:47.648 ...] 00:12:47.648 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.552 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.089 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.623 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.529 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.067 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.604 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.513 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.082 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.082 05:08:42 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:06.083 05:08:42 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:06.083 05:08:42 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:06.083 05:08:42 -- nvmf/common.sh@117 -- # sync 00:13:06.083 05:08:42 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:06.083 05:08:42 -- nvmf/common.sh@120 -- # set +e 00:13:06.083 05:08:42 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:06.083 05:08:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:06.083 rmmod nvme_tcp 00:13:06.083 rmmod nvme_fabrics 00:13:06.083 rmmod nvme_keyring 00:13:06.083 05:08:42 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:06.083 05:08:42 -- nvmf/common.sh@124 -- # set -e 00:13:06.083 05:08:42 -- nvmf/common.sh@125 -- # return 0 00:13:06.083 05:08:42 -- nvmf/common.sh@478 -- # '[' -n 1794193 ']' 00:13:06.083 05:08:42 -- nvmf/common.sh@479 -- # killprocess 1794193 00:13:06.083 05:08:42 -- common/autotest_common.sh@936 -- # '[' -z 1794193 ']' 00:13:06.083 05:08:42 -- common/autotest_common.sh@940 -- # kill -0 1794193 00:13:06.083 05:08:42 -- common/autotest_common.sh@941 -- # uname 00:13:06.083 05:08:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:06.083 05:08:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 
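Teardown above runs under `set +e` and retries `modprobe -v -r nvme-tcp` up to 20 times, since the unload can fail while the last connection is still draining. That retry shape, factored into a standalone helper (the helper name is ours, not the harness's):

```shell
#!/usr/bin/env bash
# retry <attempts> <cmd...>: run cmd until it succeeds or attempts run out,
# mirroring the `for i in {1..20}` modprobe loop in the teardown above.
retry() {
    local attempts=$1 i
    shift
    for (( i = 1; i <= attempts; i++ )); do
        "$@" && return 0
        sleep 0.1
    done
    return 1
}

# On the real host (as root) this would be: retry 20 modprobe -v -r nvme-tcp
```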
1794193 00:13:06.083 05:08:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:06.083 05:08:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:06.083 05:08:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1794193' 00:13:06.083 killing process with pid 1794193 00:13:06.083 05:08:42 -- common/autotest_common.sh@955 -- # kill 1794193 00:13:06.083 05:08:42 -- common/autotest_common.sh@960 -- # wait 1794193 00:13:06.083 05:08:43 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:06.083 05:08:43 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:06.083 05:08:43 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:06.083 05:08:43 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:06.083 05:08:43 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:06.083 05:08:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.083 05:08:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:06.083 05:08:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.990 05:08:45 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:08.249 00:13:08.249 real 3m55.783s 00:13:08.249 user 14m58.014s 00:13:08.249 sys 0m34.571s 00:13:08.249 05:08:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:08.249 05:08:45 -- common/autotest_common.sh@10 -- # set +x 00:13:08.249 ************************************ 00:13:08.249 END TEST nvmf_connect_disconnect 00:13:08.249 ************************************ 00:13:08.249 05:08:45 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:08.249 05:08:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:08.249 05:08:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:08.249 05:08:45 -- common/autotest_common.sh@10 -- # set +x 00:13:08.249 ************************************ 00:13:08.249 
START TEST nvmf_multitarget 00:13:08.249 ************************************ 00:13:08.249 05:08:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:08.249 * Looking for test storage... 00:13:08.249 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:08.249 05:08:45 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:08.249 05:08:45 -- nvmf/common.sh@7 -- # uname -s 00:13:08.249 05:08:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:08.249 05:08:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:08.249 05:08:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:08.249 05:08:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:08.249 05:08:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:08.249 05:08:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:08.249 05:08:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:08.249 05:08:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:08.249 05:08:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:08.249 05:08:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:08.249 05:08:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:08.249 05:08:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:08.249 05:08:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:08.249 05:08:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:08.249 05:08:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:08.249 05:08:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:08.249 05:08:45 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:08.249 05:08:45 -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:08.249 05:08:45 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:08.249 05:08:45 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:08.249 05:08:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.249 05:08:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.249 05:08:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.249 05:08:45 -- 
paths/export.sh@5 -- # export PATH 00:13:08.249 05:08:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.249 05:08:45 -- nvmf/common.sh@47 -- # : 0 00:13:08.249 05:08:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:08.249 05:08:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:08.249 05:08:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:08.249 05:08:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:08.249 05:08:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:08.249 05:08:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:08.249 05:08:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:08.249 05:08:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:08.249 05:08:45 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:08.249 05:08:45 -- target/multitarget.sh@15 -- # nvmftestinit 00:13:08.249 05:08:45 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:08.249 05:08:45 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:08.249 05:08:45 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:08.249 05:08:45 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:08.249 05:08:45 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:08.249 05:08:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.249 05:08:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:08.249 
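Among the variables common.sh sets up above is NVME_HOSTNQN, taken from `nvme gen-hostnqn`. The same `nqn.2014-08.org.nvmexpress:uuid:` form can be produced without nvme-cli from the kernel's UUID source — an equivalent sketch, not what the harness actually runs:

```shell
#!/usr/bin/env bash
# Build an NVMe host NQN of the same shape as `nvme gen-hostnqn` output,
# using the kernel's random UUID source instead of nvme-cli.
uuid=$(cat /proc/sys/kernel/random/uuid)
NVME_HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:${uuid}"
NVME_HOSTID=$uuid
echo "$NVME_HOSTNQN"
```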
05:08:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.249 05:08:45 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:08.249 05:08:45 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:08.249 05:08:45 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:08.249 05:08:45 -- common/autotest_common.sh@10 -- # set +x 00:13:10.152 05:08:47 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:10.152 05:08:47 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:10.152 05:08:47 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:10.152 05:08:47 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:10.152 05:08:47 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:10.152 05:08:47 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:10.152 05:08:47 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:10.152 05:08:47 -- nvmf/common.sh@295 -- # net_devs=() 00:13:10.152 05:08:47 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:10.152 05:08:47 -- nvmf/common.sh@296 -- # e810=() 00:13:10.152 05:08:47 -- nvmf/common.sh@296 -- # local -ga e810 00:13:10.152 05:08:47 -- nvmf/common.sh@297 -- # x722=() 00:13:10.152 05:08:47 -- nvmf/common.sh@297 -- # local -ga x722 00:13:10.152 05:08:47 -- nvmf/common.sh@298 -- # mlx=() 00:13:10.152 05:08:47 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:10.152 05:08:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:10.152 05:08:47 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:10.152 05:08:47 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:10.152 05:08:47 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:10.152 05:08:47 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:10.152 05:08:47 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:10.152 05:08:47 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:10.152 05:08:47 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:10.152 05:08:47 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:10.152 05:08:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:10.152 05:08:47 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:10.152 05:08:47 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:10.152 05:08:47 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:10.152 05:08:47 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:10.152 05:08:47 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:10.152 05:08:47 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:10.152 05:08:47 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:10.152 05:08:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:10.152 05:08:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:10.152 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:10.152 05:08:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:10.152 05:08:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:10.152 05:08:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:10.152 05:08:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:10.152 05:08:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:10.152 05:08:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:10.152 05:08:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:10.152 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:10.152 05:08:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:10.152 05:08:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:10.152 05:08:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:10.152 05:08:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:10.152 05:08:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:10.152 05:08:47 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:10.152 
05:08:47 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:10.152 05:08:47 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:10.152 05:08:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:10.152 05:08:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:10.152 05:08:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:10.152 05:08:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:10.152 05:08:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:10.152 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:10.152 05:08:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:10.152 05:08:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:10.152 05:08:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:10.152 05:08:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:10.152 05:08:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:10.152 05:08:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:10.152 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:10.152 05:08:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:10.152 05:08:47 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:10.152 05:08:47 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:10.152 05:08:47 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:10.152 05:08:47 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:10.152 05:08:47 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:10.152 05:08:47 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:10.152 05:08:47 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:10.152 05:08:47 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:10.153 05:08:47 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:10.153 05:08:47 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:10.153 05:08:47 -- 
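The discovery pass above matches E810 functions (0x8086:0x159b) and then lists the net devices sysfs exposes under each PCI address, yielding cvl_0_0 and cvl_0_1. The sysfs walk reduces to a small glob; SYSFS is parameterized here so the logic can be exercised against a fake tree (the function name is illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the per-device step above: list the kernel net interfaces bound
# to a given PCI function by globbing its sysfs node.
SYSFS=${SYSFS:-/sys}

net_devs_for_pci() {
    # Print the net interface names under PCI function $1 (e.g. 0000:0a:00.0).
    local pci=$1 dev
    for dev in "$SYSFS/bus/pci/devices/$pci/net/"*; do
        [ -e "$dev" ] || continue    # unmatched glob: no net devices bound
        echo "${dev##*/}"
    done
}
```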
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:10.153 05:08:47 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:10.153 05:08:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:10.153 05:08:47 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:10.153 05:08:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:10.153 05:08:47 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:10.153 05:08:47 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:10.153 05:08:47 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:10.411 05:08:47 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:10.411 05:08:47 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:10.411 05:08:47 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:10.411 05:08:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:10.411 05:08:47 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:10.411 05:08:47 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:10.411 05:08:47 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:10.411 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:10.411 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:13:10.411 00:13:10.411 --- 10.0.0.2 ping statistics --- 00:13:10.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.411 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:13:10.411 05:08:47 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:10.411 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:10.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:13:10.411 00:13:10.411 --- 10.0.0.1 ping statistics --- 00:13:10.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.411 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:13:10.411 05:08:47 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:10.411 05:08:47 -- nvmf/common.sh@411 -- # return 0 00:13:10.411 05:08:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:10.411 05:08:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:10.411 05:08:47 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:10.411 05:08:47 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:10.411 05:08:47 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:10.411 05:08:47 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:10.411 05:08:47 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:10.411 05:08:47 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:10.411 05:08:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:10.411 05:08:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:10.411 05:08:47 -- common/autotest_common.sh@10 -- # set +x 00:13:10.411 05:08:47 -- nvmf/common.sh@470 -- # nvmfpid=1825158 00:13:10.411 05:08:47 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:10.411 05:08:47 -- nvmf/common.sh@471 -- # waitforlisten 1825158 00:13:10.411 05:08:47 -- common/autotest_common.sh@817 -- # '[' -z 1825158 ']' 00:13:10.411 05:08:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.411 05:08:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:10.411 05:08:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:10.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:10.411 05:08:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:10.411 05:08:47 -- common/autotest_common.sh@10 -- # set +x 00:13:10.411 [2024-04-24 05:08:47.577392] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:13:10.411 [2024-04-24 05:08:47.577474] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:10.411 EAL: No free 2048 kB hugepages reported on node 1 00:13:10.411 [2024-04-24 05:08:47.615442] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:10.411 [2024-04-24 05:08:47.647486] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:10.669 [2024-04-24 05:08:47.736690] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:10.669 [2024-04-24 05:08:47.736755] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:10.669 [2024-04-24 05:08:47.736772] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:10.669 [2024-04-24 05:08:47.736785] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:10.669 [2024-04-24 05:08:47.736797] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:10.669 [2024-04-24 05:08:47.736886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:10.669 [2024-04-24 05:08:47.736921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:10.669 [2024-04-24 05:08:47.737041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:10.669 [2024-04-24 05:08:47.737044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.669 05:08:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:10.669 05:08:47 -- common/autotest_common.sh@850 -- # return 0 00:13:10.669 05:08:47 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:10.669 05:08:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:10.669 05:08:47 -- common/autotest_common.sh@10 -- # set +x 00:13:10.669 05:08:47 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:10.669 05:08:47 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:10.669 05:08:47 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:10.669 05:08:47 -- target/multitarget.sh@21 -- # jq length 00:13:10.927 05:08:47 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:10.927 05:08:47 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:10.927 "nvmf_tgt_1" 00:13:10.927 05:08:48 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:10.927 "nvmf_tgt_2" 00:13:11.186 05:08:48 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:11.186 05:08:48 -- target/multitarget.sh@28 -- # jq length 00:13:11.186 
05:08:48 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:11.186 05:08:48 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:11.186 true 00:13:11.186 05:08:48 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:11.444 true 00:13:11.445 05:08:48 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:11.445 05:08:48 -- target/multitarget.sh@35 -- # jq length 00:13:11.445 05:08:48 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:11.445 05:08:48 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:11.445 05:08:48 -- target/multitarget.sh@41 -- # nvmftestfini 00:13:11.445 05:08:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:11.445 05:08:48 -- nvmf/common.sh@117 -- # sync 00:13:11.445 05:08:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:11.445 05:08:48 -- nvmf/common.sh@120 -- # set +e 00:13:11.445 05:08:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:11.445 05:08:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:11.445 rmmod nvme_tcp 00:13:11.445 rmmod nvme_fabrics 00:13:11.445 rmmod nvme_keyring 00:13:11.445 05:08:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:11.445 05:08:48 -- nvmf/common.sh@124 -- # set -e 00:13:11.445 05:08:48 -- nvmf/common.sh@125 -- # return 0 00:13:11.445 05:08:48 -- nvmf/common.sh@478 -- # '[' -n 1825158 ']' 00:13:11.445 05:08:48 -- nvmf/common.sh@479 -- # killprocess 1825158 00:13:11.445 05:08:48 -- common/autotest_common.sh@936 -- # '[' -z 1825158 ']' 00:13:11.445 05:08:48 -- common/autotest_common.sh@940 -- # kill -0 1825158 00:13:11.445 05:08:48 -- common/autotest_common.sh@941 -- # uname 00:13:11.445 05:08:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 
00:13:11.445 05:08:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1825158 00:13:11.703 05:08:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:11.703 05:08:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:11.703 05:08:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1825158' 00:13:11.703 killing process with pid 1825158 00:13:11.703 05:08:48 -- common/autotest_common.sh@955 -- # kill 1825158 00:13:11.703 05:08:48 -- common/autotest_common.sh@960 -- # wait 1825158 00:13:11.703 05:08:48 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:11.703 05:08:48 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:11.703 05:08:48 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:11.703 05:08:48 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:11.703 05:08:48 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:11.703 05:08:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.703 05:08:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:11.703 05:08:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.241 05:08:51 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:14.241 00:13:14.241 real 0m5.636s 00:13:14.241 user 0m6.226s 00:13:14.241 sys 0m1.919s 00:13:14.241 05:08:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:14.241 05:08:51 -- common/autotest_common.sh@10 -- # set +x 00:13:14.241 ************************************ 00:13:14.241 END TEST nvmf_multitarget 00:13:14.241 ************************************ 00:13:14.241 05:08:51 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:14.241 05:08:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:14.241 05:08:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:14.241 05:08:51 -- common/autotest_common.sh@10 -- # set +x 
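The multitarget test that just finished follows one pattern throughout: call `multitarget_rpc.py nvmf_get_targets`, pipe the result through `jq length`, and abort if the count differs from what the last create/delete should have produced (1 target at start, 3 after two creates, 1 again after two deletes). A minimal stand-alone sketch of that count-and-compare check, with the RPC response stubbed as a literal JSON array since no SPDK target is assumed to be running, could look like:

```shell
# Hypothetical sketch of the check used by multitarget.sh above.
# "$targets" stands in for the output of nvmf_get_targets; the real
# script obtains it via multitarget_rpc.py and counts it with jq.
targets='["nvmf_tgt_0", "nvmf_tgt_1", "nvmf_tgt_2"]'

# Count entries in the JSON array (python3 used here in place of jq).
count=$(python3 -c "import json,sys; print(len(json.loads(sys.argv[1])))" "$targets")

# Fail the test if the target count is not the expected value.
if [ "$count" != 3 ]; then
  echo "unexpected target count: $count" >&2
  exit 1
fi
echo "$count"
```

In the log, the same comparison appears as `'[' 3 '!=' 3 ']'` after the two `nvmf_create_target` calls, so the branch is never taken on a passing run.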
00:13:14.241 ************************************ 00:13:14.241 START TEST nvmf_rpc 00:13:14.241 ************************************ 00:13:14.241 05:08:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:14.241 * Looking for test storage... 00:13:14.241 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:14.241 05:08:51 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:14.241 05:08:51 -- nvmf/common.sh@7 -- # uname -s 00:13:14.241 05:08:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:14.241 05:08:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:14.241 05:08:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:14.241 05:08:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:14.241 05:08:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:14.241 05:08:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:14.241 05:08:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:14.241 05:08:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:14.241 05:08:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:14.241 05:08:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:14.241 05:08:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:14.241 05:08:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:14.241 05:08:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:14.241 05:08:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:14.241 05:08:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:14.241 05:08:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:14.241 05:08:51 -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:14.241 05:08:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:14.241 05:08:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:14.241 05:08:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:14.241 05:08:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.241 05:08:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.241 05:08:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.241 05:08:51 -- paths/export.sh@5 -- # export PATH 00:13:14.242 05:08:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.242 05:08:51 -- nvmf/common.sh@47 -- # : 0 00:13:14.242 05:08:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:14.242 05:08:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:14.242 05:08:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:14.242 05:08:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:14.242 05:08:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:14.242 05:08:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:14.242 05:08:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:14.242 05:08:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:14.242 05:08:51 -- target/rpc.sh@11 -- # loops=5 00:13:14.242 05:08:51 -- target/rpc.sh@23 -- # nvmftestinit 00:13:14.242 05:08:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:14.242 
05:08:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:14.242 05:08:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:14.242 05:08:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:14.242 05:08:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:14.242 05:08:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.242 05:08:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:14.242 05:08:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.242 05:08:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:14.242 05:08:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:14.242 05:08:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:14.242 05:08:51 -- common/autotest_common.sh@10 -- # set +x 00:13:16.144 05:08:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:16.144 05:08:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:16.144 05:08:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:16.144 05:08:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:16.144 05:08:53 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:16.144 05:08:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:16.144 05:08:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:16.144 05:08:53 -- nvmf/common.sh@295 -- # net_devs=() 00:13:16.144 05:08:53 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:16.144 05:08:53 -- nvmf/common.sh@296 -- # e810=() 00:13:16.144 05:08:53 -- nvmf/common.sh@296 -- # local -ga e810 00:13:16.144 05:08:53 -- nvmf/common.sh@297 -- # x722=() 00:13:16.144 05:08:53 -- nvmf/common.sh@297 -- # local -ga x722 00:13:16.144 05:08:53 -- nvmf/common.sh@298 -- # mlx=() 00:13:16.144 05:08:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:16.144 05:08:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:16.144 05:08:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:16.144 05:08:53 -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:16.144 05:08:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:16.144 05:08:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:16.144 05:08:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:16.144 05:08:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:16.144 05:08:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:16.144 05:08:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:16.144 05:08:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:16.144 05:08:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:16.144 05:08:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:16.144 05:08:53 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:16.144 05:08:53 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:16.144 05:08:53 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:16.144 05:08:53 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:16.144 05:08:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:16.144 05:08:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:16.145 05:08:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:16.145 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:16.145 05:08:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:16.145 05:08:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:16.145 05:08:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:16.145 05:08:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:16.145 05:08:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:16.145 05:08:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:16.145 05:08:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:16.145 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:13:16.145 05:08:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:16.145 05:08:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:16.145 05:08:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:16.145 05:08:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:16.145 05:08:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:16.145 05:08:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:16.145 05:08:53 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:16.145 05:08:53 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:16.145 05:08:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:16.145 05:08:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:16.145 05:08:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:16.145 05:08:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:16.145 05:08:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:16.145 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:16.145 05:08:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:16.145 05:08:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:16.145 05:08:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:16.145 05:08:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:16.145 05:08:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:16.145 05:08:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:16.145 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:16.145 05:08:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:16.145 05:08:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:16.145 05:08:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:16.145 05:08:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:16.145 05:08:53 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:16.145 
05:08:53 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:16.145 05:08:53 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:16.145 05:08:53 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:16.145 05:08:53 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:16.145 05:08:53 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:16.145 05:08:53 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:16.145 05:08:53 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:16.145 05:08:53 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:16.145 05:08:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:16.145 05:08:53 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:16.145 05:08:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:16.145 05:08:53 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:16.145 05:08:53 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:16.145 05:08:53 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:16.145 05:08:53 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:16.145 05:08:53 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:16.145 05:08:53 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:16.145 05:08:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:16.145 05:08:53 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:16.145 05:08:53 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:16.145 05:08:53 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:16.145 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:16.145 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:13:16.145 00:13:16.145 --- 10.0.0.2 ping statistics --- 00:13:16.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.145 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:13:16.145 05:08:53 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:16.145 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:16.145 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:13:16.145 00:13:16.145 --- 10.0.0.1 ping statistics --- 00:13:16.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.145 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:13:16.145 05:08:53 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:16.145 05:08:53 -- nvmf/common.sh@411 -- # return 0 00:13:16.145 05:08:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:16.145 05:08:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:16.145 05:08:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:16.145 05:08:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:16.145 05:08:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:16.145 05:08:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:16.145 05:08:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:16.145 05:08:53 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:16.145 05:08:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:16.145 05:08:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:16.145 05:08:53 -- common/autotest_common.sh@10 -- # set +x 00:13:16.145 05:08:53 -- nvmf/common.sh@470 -- # nvmfpid=1827266 00:13:16.145 05:08:53 -- nvmf/common.sh@471 -- # waitforlisten 1827266 00:13:16.145 05:08:53 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:16.145 05:08:53 -- common/autotest_common.sh@817 -- # 
'[' -z 1827266 ']' 00:13:16.145 05:08:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.145 05:08:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:16.145 05:08:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.145 05:08:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:16.145 05:08:53 -- common/autotest_common.sh@10 -- # set +x 00:13:16.405 [2024-04-24 05:08:53.447244] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:13:16.405 [2024-04-24 05:08:53.447326] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:16.405 EAL: No free 2048 kB hugepages reported on node 1 00:13:16.405 [2024-04-24 05:08:53.491521] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:16.405 [2024-04-24 05:08:53.522625] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:16.405 [2024-04-24 05:08:53.618095] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:16.405 [2024-04-24 05:08:53.618153] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:16.405 [2024-04-24 05:08:53.618169] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:16.405 [2024-04-24 05:08:53.618183] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:16.405 [2024-04-24 05:08:53.618195] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:16.405 [2024-04-24 05:08:53.618287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:16.405 [2024-04-24 05:08:53.618321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:16.405 [2024-04-24 05:08:53.618373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:16.405 [2024-04-24 05:08:53.618376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.664 05:08:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:16.664 05:08:53 -- common/autotest_common.sh@850 -- # return 0 00:13:16.664 05:08:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:16.664 05:08:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:16.664 05:08:53 -- common/autotest_common.sh@10 -- # set +x 00:13:16.664 05:08:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:16.664 05:08:53 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:16.664 05:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:16.664 05:08:53 -- common/autotest_common.sh@10 -- # set +x 00:13:16.664 05:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:16.664 05:08:53 -- target/rpc.sh@26 -- # stats='{ 00:13:16.664 "tick_rate": 2700000000, 00:13:16.664 "poll_groups": [ 00:13:16.664 { 00:13:16.664 "name": "nvmf_tgt_poll_group_0", 00:13:16.665 "admin_qpairs": 0, 00:13:16.665 "io_qpairs": 0, 00:13:16.665 "current_admin_qpairs": 0, 00:13:16.665 "current_io_qpairs": 0, 00:13:16.665 "pending_bdev_io": 0, 00:13:16.665 "completed_nvme_io": 0, 00:13:16.665 "transports": [] 00:13:16.665 }, 00:13:16.665 { 00:13:16.665 "name": "nvmf_tgt_poll_group_1", 00:13:16.665 "admin_qpairs": 0, 00:13:16.665 "io_qpairs": 0, 00:13:16.665 "current_admin_qpairs": 0, 00:13:16.665 "current_io_qpairs": 0, 00:13:16.665 "pending_bdev_io": 0, 00:13:16.665 "completed_nvme_io": 0, 00:13:16.665 "transports": [] 00:13:16.665 }, 00:13:16.665 { 00:13:16.665 "name": 
"nvmf_tgt_poll_group_2", 00:13:16.665 "admin_qpairs": 0, 00:13:16.665 "io_qpairs": 0, 00:13:16.665 "current_admin_qpairs": 0, 00:13:16.665 "current_io_qpairs": 0, 00:13:16.665 "pending_bdev_io": 0, 00:13:16.665 "completed_nvme_io": 0, 00:13:16.665 "transports": [] 00:13:16.665 }, 00:13:16.665 { 00:13:16.665 "name": "nvmf_tgt_poll_group_3", 00:13:16.665 "admin_qpairs": 0, 00:13:16.665 "io_qpairs": 0, 00:13:16.665 "current_admin_qpairs": 0, 00:13:16.665 "current_io_qpairs": 0, 00:13:16.665 "pending_bdev_io": 0, 00:13:16.665 "completed_nvme_io": 0, 00:13:16.665 "transports": [] 00:13:16.665 } 00:13:16.665 ] 00:13:16.665 }' 00:13:16.665 05:08:53 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:16.665 05:08:53 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:16.665 05:08:53 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:16.665 05:08:53 -- target/rpc.sh@15 -- # wc -l 00:13:16.665 05:08:53 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:16.665 05:08:53 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:16.665 05:08:53 -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:16.665 05:08:53 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:16.665 05:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:16.665 05:08:53 -- common/autotest_common.sh@10 -- # set +x 00:13:16.665 [2024-04-24 05:08:53.866840] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:16.665 05:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:16.665 05:08:53 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:16.665 05:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:16.665 05:08:53 -- common/autotest_common.sh@10 -- # set +x 00:13:16.665 05:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:16.665 05:08:53 -- target/rpc.sh@33 -- # stats='{ 00:13:16.665 "tick_rate": 2700000000, 00:13:16.665 "poll_groups": [ 00:13:16.665 { 00:13:16.665 "name": 
"nvmf_tgt_poll_group_0", 00:13:16.665 "admin_qpairs": 0, 00:13:16.665 "io_qpairs": 0, 00:13:16.665 "current_admin_qpairs": 0, 00:13:16.665 "current_io_qpairs": 0, 00:13:16.665 "pending_bdev_io": 0, 00:13:16.665 "completed_nvme_io": 0, 00:13:16.665 "transports": [ 00:13:16.665 { 00:13:16.665 "trtype": "TCP" 00:13:16.665 } 00:13:16.665 ] 00:13:16.665 }, 00:13:16.665 { 00:13:16.665 "name": "nvmf_tgt_poll_group_1", 00:13:16.665 "admin_qpairs": 0, 00:13:16.665 "io_qpairs": 0, 00:13:16.665 "current_admin_qpairs": 0, 00:13:16.665 "current_io_qpairs": 0, 00:13:16.665 "pending_bdev_io": 0, 00:13:16.665 "completed_nvme_io": 0, 00:13:16.665 "transports": [ 00:13:16.665 { 00:13:16.665 "trtype": "TCP" 00:13:16.665 } 00:13:16.665 ] 00:13:16.665 }, 00:13:16.665 { 00:13:16.665 "name": "nvmf_tgt_poll_group_2", 00:13:16.665 "admin_qpairs": 0, 00:13:16.665 "io_qpairs": 0, 00:13:16.665 "current_admin_qpairs": 0, 00:13:16.665 "current_io_qpairs": 0, 00:13:16.665 "pending_bdev_io": 0, 00:13:16.665 "completed_nvme_io": 0, 00:13:16.665 "transports": [ 00:13:16.665 { 00:13:16.665 "trtype": "TCP" 00:13:16.665 } 00:13:16.665 ] 00:13:16.665 }, 00:13:16.665 { 00:13:16.665 "name": "nvmf_tgt_poll_group_3", 00:13:16.665 "admin_qpairs": 0, 00:13:16.665 "io_qpairs": 0, 00:13:16.665 "current_admin_qpairs": 0, 00:13:16.665 "current_io_qpairs": 0, 00:13:16.665 "pending_bdev_io": 0, 00:13:16.665 "completed_nvme_io": 0, 00:13:16.665 "transports": [ 00:13:16.665 { 00:13:16.665 "trtype": "TCP" 00:13:16.665 } 00:13:16.665 ] 00:13:16.665 } 00:13:16.665 ] 00:13:16.665 }' 00:13:16.665 05:08:53 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:16.665 05:08:53 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:16.665 05:08:53 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:16.665 05:08:53 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:16.665 05:08:53 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:16.665 05:08:53 -- target/rpc.sh@36 -- # jsum 
'.poll_groups[].io_qpairs' 00:13:16.665 05:08:53 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:16.665 05:08:53 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:16.665 05:08:53 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:16.923 05:08:53 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:16.923 05:08:53 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:16.923 05:08:53 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:16.923 05:08:53 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:16.923 05:08:53 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:16.923 05:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:16.923 05:08:53 -- common/autotest_common.sh@10 -- # set +x 00:13:16.923 Malloc1 00:13:16.923 05:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:16.923 05:08:54 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:16.923 05:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:16.924 05:08:54 -- common/autotest_common.sh@10 -- # set +x 00:13:16.924 05:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:16.924 05:08:54 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:16.924 05:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:16.924 05:08:54 -- common/autotest_common.sh@10 -- # set +x 00:13:16.924 05:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:16.924 05:08:54 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:16.924 05:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:16.924 05:08:54 -- common/autotest_common.sh@10 -- # set +x 00:13:16.924 05:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:16.924 05:08:54 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:13:16.924 05:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:16.924 05:08:54 -- common/autotest_common.sh@10 -- # set +x 00:13:16.924 [2024-04-24 05:08:54.028650] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:16.924 05:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:16.924 05:08:54 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:13:16.924 05:08:54 -- common/autotest_common.sh@638 -- # local es=0 00:13:16.924 05:08:54 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:13:16.924 05:08:54 -- common/autotest_common.sh@626 -- # local arg=nvme 00:13:16.924 05:08:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:16.924 05:08:54 -- common/autotest_common.sh@630 -- # type -t nvme 00:13:16.924 05:08:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:16.924 05:08:54 -- common/autotest_common.sh@632 -- # type -P nvme 00:13:16.924 05:08:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:16.924 05:08:54 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:13:16.924 05:08:54 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:13:16.924 05:08:54 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:13:16.924 [2024-04-24 05:08:54.051067] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:13:16.924 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:16.924 could not add new controller: failed to write to nvme-fabrics device 00:13:16.924 05:08:54 -- common/autotest_common.sh@641 -- # es=1 00:13:16.924 05:08:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:16.924 05:08:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:16.924 05:08:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:16.924 05:08:54 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:16.924 05:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:16.924 05:08:54 -- common/autotest_common.sh@10 -- # set +x 00:13:16.924 05:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:16.924 05:08:54 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:17.494 05:08:54 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:17.494 05:08:54 -- common/autotest_common.sh@1184 -- # local i=0 00:13:17.494 05:08:54 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:17.494 05:08:54 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:13:17.494 05:08:54 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:20.036 05:08:56 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:20.036 05:08:56 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:20.036 05:08:56 -- common/autotest_common.sh@1193 -- 
# grep -c SPDKISFASTANDAWESOME 00:13:20.036 05:08:56 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:20.036 05:08:56 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:20.036 05:08:56 -- common/autotest_common.sh@1194 -- # return 0 00:13:20.036 05:08:56 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:20.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.036 05:08:56 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:20.036 05:08:56 -- common/autotest_common.sh@1205 -- # local i=0 00:13:20.036 05:08:56 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:20.036 05:08:56 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:20.036 05:08:56 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:20.036 05:08:56 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:20.036 05:08:56 -- common/autotest_common.sh@1217 -- # return 0 00:13:20.036 05:08:56 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:20.036 05:08:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:20.036 05:08:56 -- common/autotest_common.sh@10 -- # set +x 00:13:20.036 05:08:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:20.036 05:08:56 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:20.036 05:08:56 -- common/autotest_common.sh@638 -- # local es=0 00:13:20.036 05:08:56 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 
10.0.0.2 -s 4420 00:13:20.036 05:08:56 -- common/autotest_common.sh@626 -- # local arg=nvme 00:13:20.036 05:08:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:20.036 05:08:56 -- common/autotest_common.sh@630 -- # type -t nvme 00:13:20.036 05:08:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:20.036 05:08:56 -- common/autotest_common.sh@632 -- # type -P nvme 00:13:20.036 05:08:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:20.036 05:08:56 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:13:20.036 05:08:56 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:13:20.036 05:08:56 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:20.036 [2024-04-24 05:08:56.835597] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:13:20.036 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:20.036 could not add new controller: failed to write to nvme-fabrics device 00:13:20.036 05:08:56 -- common/autotest_common.sh@641 -- # es=1 00:13:20.036 05:08:56 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:20.036 05:08:56 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:20.036 05:08:56 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:20.036 05:08:56 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:20.036 05:08:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:20.036 05:08:56 -- common/autotest_common.sh@10 -- # set +x 00:13:20.036 05:08:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:20.036 05:08:56 -- target/rpc.sh@73 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:20.295 05:08:57 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:20.295 05:08:57 -- common/autotest_common.sh@1184 -- # local i=0 00:13:20.295 05:08:57 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:20.295 05:08:57 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:13:20.295 05:08:57 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:22.854 05:08:59 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:22.854 05:08:59 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:22.854 05:08:59 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:22.854 05:08:59 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:22.854 05:08:59 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:22.854 05:08:59 -- common/autotest_common.sh@1194 -- # return 0 00:13:22.854 05:08:59 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:22.854 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.854 05:08:59 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:22.854 05:08:59 -- common/autotest_common.sh@1205 -- # local i=0 00:13:22.854 05:08:59 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:22.854 05:08:59 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:22.854 05:08:59 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:22.854 05:08:59 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:22.854 05:08:59 -- common/autotest_common.sh@1217 -- # return 0 00:13:22.854 05:08:59 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.854 05:08:59 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:13:22.854 05:08:59 -- common/autotest_common.sh@10 -- # set +x 00:13:22.854 05:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:22.854 05:08:59 -- target/rpc.sh@81 -- # seq 1 5 00:13:22.854 05:08:59 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:22.854 05:08:59 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:22.854 05:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:22.854 05:08:59 -- common/autotest_common.sh@10 -- # set +x 00:13:22.854 05:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:22.854 05:08:59 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:22.854 05:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:22.854 05:08:59 -- common/autotest_common.sh@10 -- # set +x 00:13:22.854 [2024-04-24 05:08:59.662700] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:22.854 05:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:22.854 05:08:59 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:22.854 05:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:22.854 05:08:59 -- common/autotest_common.sh@10 -- # set +x 00:13:22.854 05:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:22.854 05:08:59 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:22.854 05:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:22.854 05:08:59 -- common/autotest_common.sh@10 -- # set +x 00:13:22.854 05:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:22.854 05:08:59 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 
10.0.0.2 -s 4420 00:13:23.112 05:09:00 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:23.112 05:09:00 -- common/autotest_common.sh@1184 -- # local i=0 00:13:23.112 05:09:00 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:23.112 05:09:00 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:13:23.112 05:09:00 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:25.642 05:09:02 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:25.642 05:09:02 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:25.642 05:09:02 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:25.642 05:09:02 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:25.642 05:09:02 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:25.642 05:09:02 -- common/autotest_common.sh@1194 -- # return 0 00:13:25.642 05:09:02 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:25.642 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.642 05:09:02 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:25.642 05:09:02 -- common/autotest_common.sh@1205 -- # local i=0 00:13:25.642 05:09:02 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:25.642 05:09:02 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:25.642 05:09:02 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:25.642 05:09:02 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:25.642 05:09:02 -- common/autotest_common.sh@1217 -- # return 0 00:13:25.642 05:09:02 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:25.642 05:09:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:25.642 05:09:02 -- common/autotest_common.sh@10 -- # set +x 00:13:25.642 05:09:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:13:25.642 05:09:02 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:25.642 05:09:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:25.642 05:09:02 -- common/autotest_common.sh@10 -- # set +x 00:13:25.642 05:09:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:25.642 05:09:02 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:25.642 05:09:02 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:25.642 05:09:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:25.642 05:09:02 -- common/autotest_common.sh@10 -- # set +x 00:13:25.642 05:09:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:25.642 05:09:02 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:25.642 05:09:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:25.642 05:09:02 -- common/autotest_common.sh@10 -- # set +x 00:13:25.642 [2024-04-24 05:09:02.472515] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:25.642 05:09:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:25.642 05:09:02 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:25.642 05:09:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:25.642 05:09:02 -- common/autotest_common.sh@10 -- # set +x 00:13:25.642 05:09:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:25.643 05:09:02 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:25.643 05:09:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:25.643 05:09:02 -- common/autotest_common.sh@10 -- # set +x 00:13:25.643 05:09:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:25.643 05:09:02 -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:25.900 05:09:03 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:25.900 05:09:03 -- common/autotest_common.sh@1184 -- # local i=0 00:13:25.901 05:09:03 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:25.901 05:09:03 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:13:25.901 05:09:03 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:28.435 05:09:05 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:28.435 05:09:05 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:28.435 05:09:05 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:28.435 05:09:05 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:28.435 05:09:05 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:28.435 05:09:05 -- common/autotest_common.sh@1194 -- # return 0 00:13:28.435 05:09:05 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:28.435 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.435 05:09:05 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:28.435 05:09:05 -- common/autotest_common.sh@1205 -- # local i=0 00:13:28.435 05:09:05 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:28.435 05:09:05 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:28.435 05:09:05 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:28.435 05:09:05 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:28.435 05:09:05 -- common/autotest_common.sh@1217 -- # return 0 00:13:28.435 05:09:05 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:28.435 05:09:05 -- common/autotest_common.sh@549 
-- # xtrace_disable 00:13:28.435 05:09:05 -- common/autotest_common.sh@10 -- # set +x 00:13:28.435 05:09:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:28.435 05:09:05 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:28.435 05:09:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:28.435 05:09:05 -- common/autotest_common.sh@10 -- # set +x 00:13:28.435 05:09:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:28.435 05:09:05 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:28.435 05:09:05 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:28.435 05:09:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:28.435 05:09:05 -- common/autotest_common.sh@10 -- # set +x 00:13:28.435 05:09:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:28.435 05:09:05 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:28.435 05:09:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:28.435 05:09:05 -- common/autotest_common.sh@10 -- # set +x 00:13:28.435 [2024-04-24 05:09:05.239173] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:28.435 05:09:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:28.435 05:09:05 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:28.435 05:09:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:28.435 05:09:05 -- common/autotest_common.sh@10 -- # set +x 00:13:28.435 05:09:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:28.435 05:09:05 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:28.435 05:09:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:28.435 05:09:05 -- common/autotest_common.sh@10 -- # set +x 00:13:28.435 05:09:05 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:28.435 05:09:05 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:28.692 05:09:05 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:28.692 05:09:05 -- common/autotest_common.sh@1184 -- # local i=0 00:13:28.692 05:09:05 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:28.692 05:09:05 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:13:28.692 05:09:05 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:30.595 05:09:07 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:30.595 05:09:07 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:30.595 05:09:07 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:30.854 05:09:07 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:30.855 05:09:07 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:30.855 05:09:07 -- common/autotest_common.sh@1194 -- # return 0 00:13:30.855 05:09:07 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:30.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.855 05:09:07 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:30.855 05:09:07 -- common/autotest_common.sh@1205 -- # local i=0 00:13:30.855 05:09:07 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:30.855 05:09:07 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:30.855 05:09:07 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:30.855 05:09:07 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:30.855 05:09:07 -- common/autotest_common.sh@1217 -- # return 0 00:13:30.855 05:09:07 -- target/rpc.sh@93 -- # rpc_cmd 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:30.855 05:09:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:30.855 05:09:07 -- common/autotest_common.sh@10 -- # set +x 00:13:30.855 05:09:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:30.855 05:09:07 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:30.855 05:09:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:30.855 05:09:07 -- common/autotest_common.sh@10 -- # set +x 00:13:30.855 05:09:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:30.855 05:09:08 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:30.855 05:09:08 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:30.855 05:09:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:30.855 05:09:08 -- common/autotest_common.sh@10 -- # set +x 00:13:30.855 05:09:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:30.855 05:09:08 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:30.855 05:09:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:30.855 05:09:08 -- common/autotest_common.sh@10 -- # set +x 00:13:30.855 [2024-04-24 05:09:08.017464] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:30.855 05:09:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:30.855 05:09:08 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:30.855 05:09:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:30.855 05:09:08 -- common/autotest_common.sh@10 -- # set +x 00:13:30.855 05:09:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:30.855 05:09:08 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:30.855 05:09:08 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:13:30.855 05:09:08 -- common/autotest_common.sh@10 -- # set +x 00:13:30.855 05:09:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:30.855 05:09:08 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:31.799 05:09:08 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:31.799 05:09:08 -- common/autotest_common.sh@1184 -- # local i=0 00:13:31.799 05:09:08 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:31.799 05:09:08 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:13:31.799 05:09:08 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:33.704 05:09:10 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:33.704 05:09:10 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:33.704 05:09:10 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:33.704 05:09:10 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:33.704 05:09:10 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:33.704 05:09:10 -- common/autotest_common.sh@1194 -- # return 0 00:13:33.704 05:09:10 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:33.704 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.704 05:09:10 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:33.704 05:09:10 -- common/autotest_common.sh@1205 -- # local i=0 00:13:33.704 05:09:10 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:33.704 05:09:10 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:33.704 05:09:10 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:33.704 05:09:10 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:33.704 
05:09:10 -- common/autotest_common.sh@1217 -- # return 0 00:13:33.704 05:09:10 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:33.704 05:09:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:33.704 05:09:10 -- common/autotest_common.sh@10 -- # set +x 00:13:33.704 05:09:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:33.704 05:09:10 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:33.704 05:09:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:33.704 05:09:10 -- common/autotest_common.sh@10 -- # set +x 00:13:33.704 05:09:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:33.704 05:09:10 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:33.704 05:09:10 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:33.704 05:09:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:33.704 05:09:10 -- common/autotest_common.sh@10 -- # set +x 00:13:33.704 05:09:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:33.704 05:09:10 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:33.704 05:09:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:33.704 05:09:10 -- common/autotest_common.sh@10 -- # set +x 00:13:33.704 [2024-04-24 05:09:10.858981] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:33.704 05:09:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:33.704 05:09:10 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:33.704 05:09:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:33.704 05:09:10 -- common/autotest_common.sh@10 -- # set +x 00:13:33.704 05:09:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:33.704 05:09:10 -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:33.704 05:09:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:33.704 05:09:10 -- common/autotest_common.sh@10 -- # set +x 00:13:33.704 05:09:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:33.704 05:09:10 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:34.269 05:09:11 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:34.269 05:09:11 -- common/autotest_common.sh@1184 -- # local i=0 00:13:34.269 05:09:11 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:34.269 05:09:11 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:13:34.270 05:09:11 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:36.804 05:09:13 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:36.804 05:09:13 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:36.804 05:09:13 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:36.804 05:09:13 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:36.804 05:09:13 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:36.804 05:09:13 -- common/autotest_common.sh@1194 -- # return 0 00:13:36.804 05:09:13 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:36.804 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.804 05:09:13 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:36.804 05:09:13 -- common/autotest_common.sh@1205 -- # local i=0 00:13:36.804 05:09:13 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:36.804 05:09:13 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:36.804 05:09:13 -- common/autotest_common.sh@1213 -- # lsblk -l -o 
NAME,SERIAL 00:13:36.804 05:09:13 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:36.804 05:09:13 -- common/autotest_common.sh@1217 -- # return 0 00:13:36.804 05:09:13 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:36.804 05:09:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.804 05:09:13 -- common/autotest_common.sh@10 -- # set +x 00:13:36.804 05:09:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:36.804 05:09:13 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:36.804 05:09:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.804 05:09:13 -- common/autotest_common.sh@10 -- # set +x 00:13:36.804 05:09:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:36.804 05:09:13 -- target/rpc.sh@99 -- # seq 1 5 00:13:36.804 05:09:13 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:36.804 05:09:13 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:36.804 05:09:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.804 05:09:13 -- common/autotest_common.sh@10 -- # set +x 00:13:36.804 05:09:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:36.804 05:09:13 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:36.804 05:09:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.804 05:09:13 -- common/autotest_common.sh@10 -- # set +x 00:13:36.804 [2024-04-24 05:09:13.629203] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:36.804 05:09:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:36.804 05:09:13 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:36.804 05:09:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.804 05:09:13 -- 
common/autotest_common.sh@10 -- # set +x 00:13:36.804 05:09:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:36.804 05:09:13 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:36.804 05:09:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.804 05:09:13 -- common/autotest_common.sh@10 -- # set +x 00:13:36.804 05:09:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:36.804 05:09:13 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.804 05:09:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.804 05:09:13 -- common/autotest_common.sh@10 -- # set +x 00:13:36.804 05:09:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:36.804 05:09:13 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:36.804 05:09:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.804 05:09:13 -- common/autotest_common.sh@10 -- # set +x 00:13:36.804 05:09:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:36.804 05:09:13 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:36.804 05:09:13 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:36.804 05:09:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.804 05:09:13 -- common/autotest_common.sh@10 -- # set +x 00:13:36.804 05:09:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:36.804 05:09:13 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:36.804 05:09:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.804 05:09:13 -- common/autotest_common.sh@10 -- # set +x 00:13:36.804 [2024-04-24 05:09:13.677301] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:36.804 05:09:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:36.804 
05:09:13 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:36.804 05:09:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.804 05:09:13 -- common/autotest_common.sh@10 -- # set +x 00:13:36.804 05:09:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:36.804 05:09:13 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:36.804 05:09:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.804 05:09:13 -- common/autotest_common.sh@10 -- # set +x 00:13:36.804 05:09:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:36.804 05:09:13 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.804 05:09:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.804 05:09:13 -- common/autotest_common.sh@10 -- # set +x 00:13:36.804 05:09:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:36.804 05:09:13 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:36.804 05:09:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.804 05:09:13 -- common/autotest_common.sh@10 -- # set +x 00:13:36.804 05:09:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:36.804 05:09:13 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:36.804 05:09:13 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:36.804 05:09:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.804 05:09:13 -- common/autotest_common.sh@10 -- # set +x 00:13:36.804 05:09:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:36.804 05:09:13 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:36.804 05:09:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.804 05:09:13 -- common/autotest_common.sh@10 -- # set +x 00:13:36.804 
[2024-04-24 05:09:13.725466] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:36.804 05:09:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:36.804 05:09:13 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:36.804 05:09:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.804 05:09:13 -- common/autotest_common.sh@10 -- # set +x 00:13:36.804 05:09:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:36.804 05:09:13 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:36.804 05:09:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.804 05:09:13 -- common/autotest_common.sh@10 -- # set +x 00:13:36.804 05:09:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:36.804 05:09:13 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.804 05:09:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.804 05:09:13 -- common/autotest_common.sh@10 -- # set +x 00:13:36.804 05:09:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:36.804 05:09:13 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:36.804 05:09:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.804 05:09:13 -- common/autotest_common.sh@10 -- # set +x 00:13:36.804 05:09:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:36.804 05:09:13 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:36.804 05:09:13 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:36.804 05:09:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.804 05:09:13 -- common/autotest_common.sh@10 -- # set +x 00:13:36.804 05:09:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:36.804 05:09:13 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:36.804 05:09:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.804 05:09:13 -- common/autotest_common.sh@10 -- # set +x 00:13:36.804 [2024-04-24 05:09:13.773650] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:36.804 05:09:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:36.805 05:09:13 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:36.805 05:09:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.805 05:09:13 -- common/autotest_common.sh@10 -- # set +x 00:13:36.805 05:09:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:36.805 05:09:13 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:36.805 05:09:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.805 05:09:13 -- common/autotest_common.sh@10 -- # set +x 00:13:36.805 05:09:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:36.805 05:09:13 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.805 05:09:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.805 05:09:13 -- common/autotest_common.sh@10 -- # set +x 00:13:36.805 05:09:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:36.805 05:09:13 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:36.805 05:09:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.805 05:09:13 -- common/autotest_common.sh@10 -- # set +x 00:13:36.805 05:09:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:36.805 05:09:13 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:36.805 05:09:13 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:36.805 05:09:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.805 05:09:13 
-- common/autotest_common.sh@10 -- # set +x 00:13:36.805 05:09:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:36.805 05:09:13 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:36.805 05:09:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.805 05:09:13 -- common/autotest_common.sh@10 -- # set +x 00:13:36.805 [2024-04-24 05:09:13.821839] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:36.805 05:09:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:36.805 05:09:13 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:36.805 05:09:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.805 05:09:13 -- common/autotest_common.sh@10 -- # set +x 00:13:36.805 05:09:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:36.805 05:09:13 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:36.805 05:09:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.805 05:09:13 -- common/autotest_common.sh@10 -- # set +x 00:13:36.805 05:09:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:36.805 05:09:13 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.805 05:09:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.805 05:09:13 -- common/autotest_common.sh@10 -- # set +x 00:13:36.805 05:09:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:36.805 05:09:13 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:36.805 05:09:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.805 05:09:13 -- common/autotest_common.sh@10 -- # set +x 00:13:36.805 05:09:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:36.805 05:09:13 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:36.805 05:09:13 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.805 05:09:13 -- common/autotest_common.sh@10 -- # set +x 00:13:36.805 05:09:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:36.805 05:09:13 -- target/rpc.sh@110 -- # stats='{ 00:13:36.805 "tick_rate": 2700000000, 00:13:36.805 "poll_groups": [ 00:13:36.805 { 00:13:36.805 "name": "nvmf_tgt_poll_group_0", 00:13:36.805 "admin_qpairs": 2, 00:13:36.805 "io_qpairs": 84, 00:13:36.805 "current_admin_qpairs": 0, 00:13:36.805 "current_io_qpairs": 0, 00:13:36.805 "pending_bdev_io": 0, 00:13:36.805 "completed_nvme_io": 140, 00:13:36.805 "transports": [ 00:13:36.805 { 00:13:36.805 "trtype": "TCP" 00:13:36.805 } 00:13:36.805 ] 00:13:36.805 }, 00:13:36.805 { 00:13:36.805 "name": "nvmf_tgt_poll_group_1", 00:13:36.805 "admin_qpairs": 2, 00:13:36.805 "io_qpairs": 84, 00:13:36.805 "current_admin_qpairs": 0, 00:13:36.805 "current_io_qpairs": 0, 00:13:36.805 "pending_bdev_io": 0, 00:13:36.805 "completed_nvme_io": 182, 00:13:36.805 "transports": [ 00:13:36.805 { 00:13:36.805 "trtype": "TCP" 00:13:36.805 } 00:13:36.805 ] 00:13:36.805 }, 00:13:36.805 { 00:13:36.805 "name": "nvmf_tgt_poll_group_2", 00:13:36.805 "admin_qpairs": 1, 00:13:36.805 "io_qpairs": 84, 00:13:36.805 "current_admin_qpairs": 0, 00:13:36.805 "current_io_qpairs": 0, 00:13:36.805 "pending_bdev_io": 0, 00:13:36.805 "completed_nvme_io": 233, 00:13:36.805 "transports": [ 00:13:36.805 { 00:13:36.805 "trtype": "TCP" 00:13:36.805 } 00:13:36.805 ] 00:13:36.805 }, 00:13:36.805 { 00:13:36.805 "name": "nvmf_tgt_poll_group_3", 00:13:36.805 "admin_qpairs": 2, 00:13:36.805 "io_qpairs": 84, 00:13:36.805 "current_admin_qpairs": 0, 00:13:36.805 "current_io_qpairs": 0, 00:13:36.805 "pending_bdev_io": 0, 00:13:36.805 "completed_nvme_io": 131, 00:13:36.805 "transports": [ 00:13:36.805 { 00:13:36.805 "trtype": "TCP" 00:13:36.805 } 00:13:36.805 ] 00:13:36.805 } 00:13:36.805 ] 00:13:36.805 }' 00:13:36.805 05:09:13 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 
00:13:36.805 05:09:13 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:36.805 05:09:13 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:36.805 05:09:13 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:36.805 05:09:13 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:36.805 05:09:13 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:36.805 05:09:13 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:36.805 05:09:13 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:36.805 05:09:13 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:36.805 05:09:13 -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:13:36.805 05:09:13 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:36.805 05:09:13 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:36.805 05:09:13 -- target/rpc.sh@123 -- # nvmftestfini 00:13:36.805 05:09:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:36.805 05:09:13 -- nvmf/common.sh@117 -- # sync 00:13:36.805 05:09:13 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:36.805 05:09:13 -- nvmf/common.sh@120 -- # set +e 00:13:36.805 05:09:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:36.805 05:09:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:36.805 rmmod nvme_tcp 00:13:36.805 rmmod nvme_fabrics 00:13:36.805 rmmod nvme_keyring 00:13:36.805 05:09:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:36.805 05:09:14 -- nvmf/common.sh@124 -- # set -e 00:13:36.805 05:09:14 -- nvmf/common.sh@125 -- # return 0 00:13:36.805 05:09:14 -- nvmf/common.sh@478 -- # '[' -n 1827266 ']' 00:13:36.805 05:09:14 -- nvmf/common.sh@479 -- # killprocess 1827266 00:13:36.805 05:09:14 -- common/autotest_common.sh@936 -- # '[' -z 1827266 ']' 00:13:36.805 05:09:14 -- common/autotest_common.sh@940 -- # kill -0 1827266 00:13:36.805 05:09:14 -- common/autotest_common.sh@941 -- # uname 00:13:36.805 05:09:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:36.805 
05:09:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1827266 00:13:36.805 05:09:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:36.805 05:09:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:36.805 05:09:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1827266' 00:13:36.805 killing process with pid 1827266 00:13:36.805 05:09:14 -- common/autotest_common.sh@955 -- # kill 1827266 00:13:36.805 05:09:14 -- common/autotest_common.sh@960 -- # wait 1827266 00:13:37.064 05:09:14 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:37.064 05:09:14 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:37.064 05:09:14 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:37.064 05:09:14 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:37.064 05:09:14 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:37.064 05:09:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:37.064 05:09:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:37.064 05:09:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.602 05:09:16 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:39.602 00:13:39.602 real 0m25.189s 00:13:39.602 user 1m21.670s 00:13:39.602 sys 0m4.243s 00:13:39.602 05:09:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:39.602 05:09:16 -- common/autotest_common.sh@10 -- # set +x 00:13:39.602 ************************************ 00:13:39.602 END TEST nvmf_rpc 00:13:39.602 ************************************ 00:13:39.602 05:09:16 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:39.602 05:09:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:39.602 05:09:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:39.602 05:09:16 -- common/autotest_common.sh@10 -- # set +x 00:13:39.602 
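[editor's note] The `(( 7 > 0 ))` and `(( 336 > 0 ))` checks that close the nvmf_rpc run above come from the `jsum` helper in target/rpc.sh, which sums one counter across all poll groups in the `nvmf_get_stats` output. A standalone sketch, with the JSON abridged from the stats printed in this log (assumes `jq` and `awk` are available, as they are in this test environment):

```shell
# jsum-style aggregation: jq emits one number per poll group,
# awk accumulates the total.
stats='{"poll_groups":[
  {"admin_qpairs":2,"io_qpairs":84},
  {"admin_qpairs":2,"io_qpairs":84},
  {"admin_qpairs":1,"io_qpairs":84},
  {"admin_qpairs":2,"io_qpairs":84}]}'
echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'  # 7
echo "$stats" | jq '.poll_groups[].io_qpairs'    | awk '{s+=$1} END {print s}'  # 336
```

The test only asserts that each sum is positive, i.e. that every counter class saw traffic across the poll groups.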
************************************ 00:13:39.602 START TEST nvmf_invalid 00:13:39.602 ************************************ 00:13:39.602 05:09:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:39.602 * Looking for test storage... 00:13:39.602 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:39.602 05:09:16 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:39.602 05:09:16 -- nvmf/common.sh@7 -- # uname -s 00:13:39.602 05:09:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:39.602 05:09:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:39.602 05:09:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:39.602 05:09:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:39.602 05:09:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:39.602 05:09:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:39.602 05:09:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:39.602 05:09:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:39.602 05:09:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:39.602 05:09:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:39.602 05:09:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:39.602 05:09:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:39.602 05:09:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:39.602 05:09:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:39.602 05:09:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:39.602 05:09:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:39.602 05:09:16 -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:39.602 05:09:16 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:39.602 05:09:16 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:39.602 05:09:16 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:39.602 05:09:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.602 05:09:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.602 05:09:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.602 05:09:16 -- paths/export.sh@5 -- # export PATH 00:13:39.602 05:09:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.602 05:09:16 -- nvmf/common.sh@47 -- # : 0 00:13:39.602 05:09:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:39.602 05:09:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:39.602 05:09:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:39.602 05:09:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:39.602 05:09:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:39.602 05:09:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:39.602 05:09:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:39.602 05:09:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:39.602 05:09:16 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:39.602 05:09:16 -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:39.602 05:09:16 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:39.602 05:09:16 -- target/invalid.sh@14 -- # target=foobar 00:13:39.602 05:09:16 -- target/invalid.sh@16 -- # RANDOM=0 00:13:39.602 05:09:16 -- target/invalid.sh@34 -- # nvmftestinit 00:13:39.602 05:09:16 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:39.602 05:09:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:39.602 05:09:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:39.602 05:09:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:39.602 05:09:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:39.602 05:09:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.602 05:09:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:39.602 05:09:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.602 05:09:16 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:39.602 05:09:16 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:39.602 05:09:16 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:39.602 05:09:16 -- common/autotest_common.sh@10 -- # set +x 00:13:41.554 05:09:18 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:41.554 05:09:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:41.554 05:09:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:41.554 05:09:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:41.554 05:09:18 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:41.554 05:09:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:41.554 05:09:18 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:41.554 05:09:18 -- nvmf/common.sh@295 -- # net_devs=() 00:13:41.554 05:09:18 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:41.554 05:09:18 -- nvmf/common.sh@296 -- # e810=() 00:13:41.554 05:09:18 -- nvmf/common.sh@296 -- # local -ga e810 00:13:41.554 
05:09:18 -- nvmf/common.sh@297 -- # x722=() 00:13:41.554 05:09:18 -- nvmf/common.sh@297 -- # local -ga x722 00:13:41.554 05:09:18 -- nvmf/common.sh@298 -- # mlx=() 00:13:41.554 05:09:18 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:41.554 05:09:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:41.554 05:09:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:41.554 05:09:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:41.554 05:09:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:41.554 05:09:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:41.554 05:09:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:41.554 05:09:18 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:41.554 05:09:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:41.554 05:09:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:41.554 05:09:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:41.554 05:09:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:41.554 05:09:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:41.554 05:09:18 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:41.554 05:09:18 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:41.554 05:09:18 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:41.554 05:09:18 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:41.554 05:09:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:41.554 05:09:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:41.554 05:09:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:41.554 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:41.554 05:09:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:41.554 05:09:18 -- nvmf/common.sh@346 -- # [[ 
ice == unbound ]] 00:13:41.554 05:09:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:41.554 05:09:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:41.554 05:09:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:41.554 05:09:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:41.554 05:09:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:41.554 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:41.554 05:09:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:41.554 05:09:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:41.554 05:09:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:41.554 05:09:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:41.554 05:09:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:41.554 05:09:18 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:41.554 05:09:18 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:41.554 05:09:18 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:41.554 05:09:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:41.554 05:09:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:41.554 05:09:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:41.554 05:09:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:41.554 05:09:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:41.554 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:41.554 05:09:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:41.554 05:09:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:41.554 05:09:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:41.554 05:09:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:41.554 05:09:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:41.554 05:09:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 
0000:0a:00.1: cvl_0_1' 00:13:41.554 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:41.554 05:09:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:41.554 05:09:18 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:41.554 05:09:18 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:41.554 05:09:18 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:41.554 05:09:18 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:41.554 05:09:18 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:41.554 05:09:18 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:41.554 05:09:18 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:41.554 05:09:18 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:41.554 05:09:18 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:41.554 05:09:18 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:41.554 05:09:18 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:41.554 05:09:18 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:41.554 05:09:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:41.554 05:09:18 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:41.554 05:09:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:41.554 05:09:18 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:41.554 05:09:18 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:41.554 05:09:18 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:41.554 05:09:18 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:41.554 05:09:18 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:41.554 05:09:18 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:41.554 05:09:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:41.554 05:09:18 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
00:13:41.554 05:09:18 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:41.554 05:09:18 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:41.554 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:41.554 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:13:41.555 00:13:41.555 --- 10.0.0.2 ping statistics --- 00:13:41.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.555 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:13:41.555 05:09:18 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:41.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:41.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:13:41.555 00:13:41.555 --- 10.0.0.1 ping statistics --- 00:13:41.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.555 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:13:41.555 05:09:18 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:41.555 05:09:18 -- nvmf/common.sh@411 -- # return 0 00:13:41.555 05:09:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:41.555 05:09:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:41.555 05:09:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:41.555 05:09:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:41.555 05:09:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:41.555 05:09:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:41.555 05:09:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:41.555 05:09:18 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:41.555 05:09:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:41.555 05:09:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:41.555 05:09:18 -- common/autotest_common.sh@10 -- # set +x 00:13:41.555 05:09:18 -- nvmf/common.sh@470 -- # nvmfpid=1832490 00:13:41.555 05:09:18 -- nvmf/common.sh@469 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:41.555 05:09:18 -- nvmf/common.sh@471 -- # waitforlisten 1832490 00:13:41.555 05:09:18 -- common/autotest_common.sh@817 -- # '[' -z 1832490 ']' 00:13:41.555 05:09:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:41.555 05:09:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:41.555 05:09:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:41.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:41.555 05:09:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:41.555 05:09:18 -- common/autotest_common.sh@10 -- # set +x 00:13:41.555 [2024-04-24 05:09:18.658553] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:13:41.555 [2024-04-24 05:09:18.658646] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:41.555 EAL: No free 2048 kB hugepages reported on node 1 00:13:41.555 [2024-04-24 05:09:18.698773] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:41.555 [2024-04-24 05:09:18.730823] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:41.555 [2024-04-24 05:09:18.823136] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:41.555 [2024-04-24 05:09:18.823199] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:41.555 [2024-04-24 05:09:18.823216] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:41.555 [2024-04-24 05:09:18.823230] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:41.555 [2024-04-24 05:09:18.823242] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:41.555 [2024-04-24 05:09:18.823332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:41.555 [2024-04-24 05:09:18.823386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:41.555 [2024-04-24 05:09:18.823437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:41.555 [2024-04-24 05:09:18.823440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.813 05:09:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:41.813 05:09:18 -- common/autotest_common.sh@850 -- # return 0 00:13:41.813 05:09:18 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:41.813 05:09:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:41.813 05:09:18 -- common/autotest_common.sh@10 -- # set +x 00:13:41.813 05:09:18 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:41.813 05:09:18 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:41.813 05:09:18 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode7508 00:13:42.072 [2024-04-24 05:09:19.253376] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:42.072 05:09:19 -- target/invalid.sh@40 -- # out='request: 00:13:42.072 { 00:13:42.072 "nqn": "nqn.2016-06.io.spdk:cnode7508", 00:13:42.072 "tgt_name": "foobar", 00:13:42.072 "method": "nvmf_create_subsystem", 00:13:42.072 "req_id": 1 
00:13:42.072 } 00:13:42.072 Got JSON-RPC error response 00:13:42.072 response: 00:13:42.072 { 00:13:42.072 "code": -32603, 00:13:42.072 "message": "Unable to find target foobar" 00:13:42.072 }' 00:13:42.072 05:09:19 -- target/invalid.sh@41 -- # [[ request: 00:13:42.072 { 00:13:42.072 "nqn": "nqn.2016-06.io.spdk:cnode7508", 00:13:42.072 "tgt_name": "foobar", 00:13:42.072 "method": "nvmf_create_subsystem", 00:13:42.072 "req_id": 1 00:13:42.072 } 00:13:42.072 Got JSON-RPC error response 00:13:42.072 response: 00:13:42.072 { 00:13:42.072 "code": -32603, 00:13:42.072 "message": "Unable to find target foobar" 00:13:42.072 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:42.072 05:09:19 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:42.072 05:09:19 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode23463 00:13:42.331 [2024-04-24 05:09:19.542317] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23463: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:42.331 05:09:19 -- target/invalid.sh@45 -- # out='request: 00:13:42.331 { 00:13:42.331 "nqn": "nqn.2016-06.io.spdk:cnode23463", 00:13:42.331 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:42.331 "method": "nvmf_create_subsystem", 00:13:42.331 "req_id": 1 00:13:42.331 } 00:13:42.331 Got JSON-RPC error response 00:13:42.331 response: 00:13:42.331 { 00:13:42.331 "code": -32602, 00:13:42.331 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:42.331 }' 00:13:42.331 05:09:19 -- target/invalid.sh@46 -- # [[ request: 00:13:42.331 { 00:13:42.331 "nqn": "nqn.2016-06.io.spdk:cnode23463", 00:13:42.331 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:42.331 "method": "nvmf_create_subsystem", 00:13:42.331 "req_id": 1 00:13:42.331 } 00:13:42.331 Got JSON-RPC error response 00:13:42.331 response: 00:13:42.331 { 00:13:42.331 "code": -32602, 00:13:42.331 
"message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:42.331 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:42.331 05:09:19 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:42.331 05:09:19 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode28035 00:13:42.590 [2024-04-24 05:09:19.803168] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28035: invalid model number 'SPDK_Controller' 00:13:42.590 05:09:19 -- target/invalid.sh@50 -- # out='request: 00:13:42.590 { 00:13:42.590 "nqn": "nqn.2016-06.io.spdk:cnode28035", 00:13:42.590 "model_number": "SPDK_Controller\u001f", 00:13:42.590 "method": "nvmf_create_subsystem", 00:13:42.590 "req_id": 1 00:13:42.590 } 00:13:42.590 Got JSON-RPC error response 00:13:42.590 response: 00:13:42.590 { 00:13:42.590 "code": -32602, 00:13:42.590 "message": "Invalid MN SPDK_Controller\u001f" 00:13:42.590 }' 00:13:42.590 05:09:19 -- target/invalid.sh@51 -- # [[ request: 00:13:42.590 { 00:13:42.590 "nqn": "nqn.2016-06.io.spdk:cnode28035", 00:13:42.590 "model_number": "SPDK_Controller\u001f", 00:13:42.590 "method": "nvmf_create_subsystem", 00:13:42.590 "req_id": 1 00:13:42.590 } 00:13:42.590 Got JSON-RPC error response 00:13:42.590 response: 00:13:42.590 { 00:13:42.590 "code": -32602, 00:13:42.590 "message": "Invalid MN SPDK_Controller\u001f" 00:13:42.590 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:42.590 05:09:19 -- target/invalid.sh@54 -- # gen_random_s 21 00:13:42.590 05:09:19 -- target/invalid.sh@19 -- # local length=21 ll 00:13:42.590 05:09:19 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' 
'98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:42.590 05:09:19 -- target/invalid.sh@21 -- # local chars 00:13:42.590 05:09:19 -- target/invalid.sh@22 -- # local string 00:13:42.590 05:09:19 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:42.590 05:09:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.590 05:09:19 -- target/invalid.sh@25 -- # printf %x 60 00:13:42.590 05:09:19 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:42.590 05:09:19 -- target/invalid.sh@25 -- # string+='<' 00:13:42.590 05:09:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.590 05:09:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.590 05:09:19 -- target/invalid.sh@25 -- # printf %x 111 00:13:42.590 05:09:19 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:42.590 05:09:19 -- target/invalid.sh@25 -- # string+=o 00:13:42.590 05:09:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.591 05:09:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.591 05:09:19 -- target/invalid.sh@25 -- # printf %x 75 00:13:42.591 05:09:19 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:42.591 05:09:19 -- target/invalid.sh@25 -- # string+=K 00:13:42.591 05:09:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.591 05:09:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.591 05:09:19 -- target/invalid.sh@25 -- # printf %x 92 00:13:42.591 05:09:19 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:42.591 05:09:19 -- target/invalid.sh@25 -- # string+='\' 00:13:42.591 05:09:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.591 05:09:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.591 05:09:19 -- target/invalid.sh@25 -- # printf %x 75 00:13:42.591 05:09:19 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:42.591 05:09:19 -- target/invalid.sh@25 -- # string+=K 00:13:42.591 05:09:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.591 
05:09:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.591 05:09:19 -- target/invalid.sh@25 -- # printf %x 62 00:13:42.591 05:09:19 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:42.591 05:09:19 -- target/invalid.sh@25 -- # string+='>' 00:13:42.591 05:09:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.591 05:09:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.591 05:09:19 -- target/invalid.sh@25 -- # printf %x 112 00:13:42.591 05:09:19 -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:42.591 05:09:19 -- target/invalid.sh@25 -- # string+=p 00:13:42.591 05:09:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.591 05:09:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.591 05:09:19 -- target/invalid.sh@25 -- # printf %x 47 00:13:42.591 05:09:19 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:42.591 05:09:19 -- target/invalid.sh@25 -- # string+=/ 00:13:42.591 05:09:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.591 05:09:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.591 05:09:19 -- target/invalid.sh@25 -- # printf %x 72 00:13:42.591 05:09:19 -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:42.591 05:09:19 -- target/invalid.sh@25 -- # string+=H 00:13:42.591 05:09:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.591 05:09:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.591 05:09:19 -- target/invalid.sh@25 -- # printf %x 116 00:13:42.591 05:09:19 -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:42.591 05:09:19 -- target/invalid.sh@25 -- # string+=t 00:13:42.591 05:09:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.591 05:09:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.591 05:09:19 -- target/invalid.sh@25 -- # printf %x 65 00:13:42.591 05:09:19 -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:42.591 05:09:19 -- target/invalid.sh@25 -- # string+=A 00:13:42.591 05:09:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.591 05:09:19 -- target/invalid.sh@24 -- # (( ll < length )) 
00:13:42.591 05:09:19 -- target/invalid.sh@25 -- # printf %x 34 00:13:42.591 05:09:19 -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:42.591 05:09:19 -- target/invalid.sh@25 -- # string+='"' 00:13:42.591 05:09:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.591 05:09:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.591 05:09:19 -- target/invalid.sh@25 -- # printf %x 100 00:13:42.591 05:09:19 -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:42.851 05:09:19 -- target/invalid.sh@25 -- # string+=d 00:13:42.851 05:09:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.851 05:09:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.851 05:09:19 -- target/invalid.sh@25 -- # printf %x 57 00:13:42.851 05:09:19 -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:42.851 05:09:19 -- target/invalid.sh@25 -- # string+=9 00:13:42.851 05:09:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.851 05:09:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.851 05:09:19 -- target/invalid.sh@25 -- # printf %x 46 00:13:42.851 05:09:19 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:42.851 05:09:19 -- target/invalid.sh@25 -- # string+=. 
00:13:42.851 05:09:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.851 05:09:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.851 05:09:19 -- target/invalid.sh@25 -- # printf %x 81 00:13:42.851 05:09:19 -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:42.851 05:09:19 -- target/invalid.sh@25 -- # string+=Q 00:13:42.851 05:09:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.851 05:09:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.851 05:09:19 -- target/invalid.sh@25 -- # printf %x 51 00:13:42.851 05:09:19 -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:42.851 05:09:19 -- target/invalid.sh@25 -- # string+=3 00:13:42.851 05:09:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.851 05:09:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.851 05:09:19 -- target/invalid.sh@25 -- # printf %x 106 00:13:42.851 05:09:19 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:42.851 05:09:19 -- target/invalid.sh@25 -- # string+=j 00:13:42.851 05:09:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.851 05:09:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.851 05:09:19 -- target/invalid.sh@25 -- # printf %x 87 00:13:42.851 05:09:19 -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:42.851 05:09:19 -- target/invalid.sh@25 -- # string+=W 00:13:42.851 05:09:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.851 05:09:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.851 05:09:19 -- target/invalid.sh@25 -- # printf %x 56 00:13:42.851 05:09:19 -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:42.851 05:09:19 -- target/invalid.sh@25 -- # string+=8 00:13:42.851 05:09:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.851 05:09:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.851 05:09:19 -- target/invalid.sh@25 -- # printf %x 46 00:13:42.851 05:09:19 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:42.851 05:09:19 -- target/invalid.sh@25 -- # string+=. 
00:13:42.851 05:09:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.851 05:09:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.851 05:09:19 -- target/invalid.sh@28 -- # [[ < == \- ]] 00:13:42.851 05:09:19 -- target/invalid.sh@31 -- # echo 'p/HtA"d9.Q3jW8.' 00:13:42.851 05:09:19 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'p/HtA"d9.Q3jW8.' nqn.2016-06.io.spdk:cnode8243 00:13:42.851 [2024-04-24 05:09:20.100279] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8243: invalid serial number 'p/HtA"d9.Q3jW8.' 00:13:43.111 05:09:20 -- target/invalid.sh@54 -- # out='request: 00:13:43.111 { 00:13:43.111 "nqn": "nqn.2016-06.io.spdk:cnode8243", 00:13:43.111 "serial_number": "p/HtA\"d9.Q3jW8.", 00:13:43.111 "method": "nvmf_create_subsystem", 00:13:43.111 "req_id": 1 00:13:43.111 } 00:13:43.111 Got JSON-RPC error response 00:13:43.111 response: 00:13:43.111 { 00:13:43.111 "code": -32602, 00:13:43.111 "message": "Invalid SN p/HtA\"d9.Q3jW8." 00:13:43.111 }' 00:13:43.111 05:09:20 -- target/invalid.sh@55 -- # [[ request: 00:13:43.111 { 00:13:43.111 "nqn": "nqn.2016-06.io.spdk:cnode8243", 00:13:43.111 "serial_number": "p/HtA\"d9.Q3jW8.", 00:13:43.111 "method": "nvmf_create_subsystem", 00:13:43.111 "req_id": 1 00:13:43.111 } 00:13:43.111 Got JSON-RPC error response 00:13:43.111 response: 00:13:43.111 { 00:13:43.111 "code": -32602, 00:13:43.111 "message": "Invalid SN p/HtA\"d9.Q3jW8." 
00:13:43.111 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:43.111 05:09:20 -- target/invalid.sh@58 -- # gen_random_s 41 00:13:43.111 05:09:20 -- target/invalid.sh@19 -- # local length=41 ll 00:13:43.111 05:09:20 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:43.111 05:09:20 -- target/invalid.sh@21 -- # local chars 00:13:43.111 05:09:20 -- target/invalid.sh@22 -- # local string 00:13:43.111 05:09:20 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:43.111 05:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.111 05:09:20 -- target/invalid.sh@25 -- # printf %x 82 00:13:43.111 05:09:20 -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:43.111 05:09:20 -- target/invalid.sh@25 -- # string+=R 00:13:43.111 05:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.111 05:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.111 05:09:20 -- target/invalid.sh@25 -- # printf %x 117 00:13:43.111 05:09:20 -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:43.111 05:09:20 -- target/invalid.sh@25 -- # string+=u 00:13:43.111 05:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.111 05:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.111 05:09:20 -- target/invalid.sh@25 -- # printf %x 75 00:13:43.111 05:09:20 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:43.111 05:09:20 -- target/invalid.sh@25 -- # string+=K 00:13:43.111 05:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.111 05:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.111 05:09:20 -- 
target/invalid.sh@25 -- # printf %x 92 00:13:43.111 05:09:20 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:43.111 05:09:20 -- target/invalid.sh@25 -- # string+='\' 00:13:43.111 05:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.111 05:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.111 05:09:20 -- target/invalid.sh@25 -- # printf %x 90 00:13:43.111 05:09:20 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # string+=Z 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # printf %x 49 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # string+=1 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # printf %x 46 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # string+=. 
00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # printf %x 106 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # string+=j 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # printf %x 66 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # string+=B 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # printf %x 79 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # string+=O 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # printf %x 96 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # string+='`' 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # printf %x 73 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # string+=I 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # printf %x 34 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # string+='"' 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 
00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # printf %x 76 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # string+=L 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # printf %x 125 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # string+='}' 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # printf %x 80 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # string+=P 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # printf %x 47 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # string+=/ 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # printf %x 35 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # string+='#' 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # printf %x 99 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # string+=c 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll < 
length )) 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # printf %x 52 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # string+=4 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # printf %x 34 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # string+='"' 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # printf %x 97 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # string+=a 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # printf %x 109 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # string+=m 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # printf %x 43 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # string+=+ 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # printf %x 102 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # string+=f 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # 
printf %x 33 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # string+='!' 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # printf %x 72 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # string+=H 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # printf %x 63 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # string+='?' 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # printf %x 58 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # string+=: 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # printf %x 53 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # string+=5 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # printf %x 42 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # string+='*' 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # printf %x 84 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- 
# echo -e '\x54' 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # string+=T 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # printf %x 62 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # string+='>' 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # printf %x 109 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # string+=m 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # printf %x 123 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # string+='{' 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # printf %x 69 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # string+=E 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # printf %x 116 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # string+=t 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # printf %x 41 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:43.112 05:09:20 -- 
target/invalid.sh@25 -- # string+=')' 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.112 05:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # printf %x 53 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:43.112 05:09:20 -- target/invalid.sh@25 -- # string+=5 00:13:43.113 05:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.113 05:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.113 05:09:20 -- target/invalid.sh@25 -- # printf %x 98 00:13:43.113 05:09:20 -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:43.113 05:09:20 -- target/invalid.sh@25 -- # string+=b 00:13:43.113 05:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.113 05:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.113 05:09:20 -- target/invalid.sh@25 -- # printf %x 100 00:13:43.113 05:09:20 -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:43.113 05:09:20 -- target/invalid.sh@25 -- # string+=d 00:13:43.113 05:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.113 05:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.113 05:09:20 -- target/invalid.sh@28 -- # [[ R == \- ]] 00:13:43.113 05:09:20 -- target/invalid.sh@31 -- # echo 'RuK\Z1.jBO`I"L}P/#c4"am+f!H?:5*T>m{Et)5bd' 00:13:43.113 05:09:20 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'RuK\Z1.jBO`I"L}P/#c4"am+f!H?:5*T>m{Et)5bd' nqn.2016-06.io.spdk:cnode26122 00:13:43.371 [2024-04-24 05:09:20.493533] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26122: invalid model number 'RuK\Z1.jBO`I"L}P/#c4"am+f!H?:5*T>m{Et)5bd' 00:13:43.371 05:09:20 -- target/invalid.sh@58 -- # out='request: 00:13:43.371 { 00:13:43.371 "nqn": "nqn.2016-06.io.spdk:cnode26122", 00:13:43.371 "model_number": "RuK\\Z1.jBO`I\"L}P/#c4\"am+f!H?:5*T>m{Et)5bd", 00:13:43.371 "method": "nvmf_create_subsystem", 00:13:43.371 "req_id": 
1 00:13:43.371 } 00:13:43.371 Got JSON-RPC error response 00:13:43.371 response: 00:13:43.371 { 00:13:43.371 "code": -32602, 00:13:43.371 "message": "Invalid MN RuK\\Z1.jBO`I\"L}P/#c4\"am+f!H?:5*T>m{Et)5bd" 00:13:43.371 }' 00:13:43.371 05:09:20 -- target/invalid.sh@59 -- # [[ request: 00:13:43.371 { 00:13:43.371 "nqn": "nqn.2016-06.io.spdk:cnode26122", 00:13:43.371 "model_number": "RuK\\Z1.jBO`I\"L}P/#c4\"am+f!H?:5*T>m{Et)5bd", 00:13:43.371 "method": "nvmf_create_subsystem", 00:13:43.371 "req_id": 1 00:13:43.371 } 00:13:43.371 Got JSON-RPC error response 00:13:43.371 response: 00:13:43.371 { 00:13:43.371 "code": -32602, 00:13:43.371 "message": "Invalid MN RuK\\Z1.jBO`I\"L}P/#c4\"am+f!H?:5*T>m{Et)5bd" 00:13:43.371 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:43.371 05:09:20 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:43.629 [2024-04-24 05:09:20.742454] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:43.629 05:09:20 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:43.887 05:09:21 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:43.887 05:09:21 -- target/invalid.sh@67 -- # echo '' 00:13:43.887 05:09:21 -- target/invalid.sh@67 -- # head -n 1 00:13:43.887 05:09:21 -- target/invalid.sh@67 -- # IP= 00:13:43.887 05:09:21 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:44.146 [2024-04-24 05:09:21.236075] nvmf_rpc.c: 792:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:44.146 05:09:21 -- target/invalid.sh@69 -- # out='request: 00:13:44.146 { 00:13:44.146 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:44.146 "listen_address": { 00:13:44.146 "trtype": "tcp", 00:13:44.146 "traddr": "", 00:13:44.146 "trsvcid": 
"4421" 00:13:44.146 }, 00:13:44.146 "method": "nvmf_subsystem_remove_listener", 00:13:44.146 "req_id": 1 00:13:44.146 } 00:13:44.146 Got JSON-RPC error response 00:13:44.146 response: 00:13:44.146 { 00:13:44.146 "code": -32602, 00:13:44.146 "message": "Invalid parameters" 00:13:44.146 }' 00:13:44.146 05:09:21 -- target/invalid.sh@70 -- # [[ request: 00:13:44.146 { 00:13:44.146 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:44.146 "listen_address": { 00:13:44.146 "trtype": "tcp", 00:13:44.146 "traddr": "", 00:13:44.146 "trsvcid": "4421" 00:13:44.146 }, 00:13:44.146 "method": "nvmf_subsystem_remove_listener", 00:13:44.146 "req_id": 1 00:13:44.146 } 00:13:44.146 Got JSON-RPC error response 00:13:44.146 response: 00:13:44.146 { 00:13:44.146 "code": -32602, 00:13:44.146 "message": "Invalid parameters" 00:13:44.146 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:44.146 05:09:21 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28533 -i 0 00:13:44.404 [2024-04-24 05:09:21.480850] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28533: invalid cntlid range [0-65519] 00:13:44.404 05:09:21 -- target/invalid.sh@73 -- # out='request: 00:13:44.404 { 00:13:44.404 "nqn": "nqn.2016-06.io.spdk:cnode28533", 00:13:44.404 "min_cntlid": 0, 00:13:44.404 "method": "nvmf_create_subsystem", 00:13:44.404 "req_id": 1 00:13:44.404 } 00:13:44.404 Got JSON-RPC error response 00:13:44.404 response: 00:13:44.404 { 00:13:44.404 "code": -32602, 00:13:44.404 "message": "Invalid cntlid range [0-65519]" 00:13:44.404 }' 00:13:44.404 05:09:21 -- target/invalid.sh@74 -- # [[ request: 00:13:44.404 { 00:13:44.404 "nqn": "nqn.2016-06.io.spdk:cnode28533", 00:13:44.404 "min_cntlid": 0, 00:13:44.404 "method": "nvmf_create_subsystem", 00:13:44.404 "req_id": 1 00:13:44.404 } 00:13:44.404 Got JSON-RPC error response 00:13:44.404 response: 00:13:44.404 { 00:13:44.404 "code": 
-32602, 00:13:44.404 "message": "Invalid cntlid range [0-65519]" 00:13:44.404 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:44.404 05:09:21 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17398 -i 65520 00:13:44.662 [2024-04-24 05:09:21.725665] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17398: invalid cntlid range [65520-65519] 00:13:44.662 05:09:21 -- target/invalid.sh@75 -- # out='request: 00:13:44.662 { 00:13:44.662 "nqn": "nqn.2016-06.io.spdk:cnode17398", 00:13:44.662 "min_cntlid": 65520, 00:13:44.662 "method": "nvmf_create_subsystem", 00:13:44.662 "req_id": 1 00:13:44.662 } 00:13:44.662 Got JSON-RPC error response 00:13:44.662 response: 00:13:44.662 { 00:13:44.662 "code": -32602, 00:13:44.662 "message": "Invalid cntlid range [65520-65519]" 00:13:44.662 }' 00:13:44.662 05:09:21 -- target/invalid.sh@76 -- # [[ request: 00:13:44.662 { 00:13:44.662 "nqn": "nqn.2016-06.io.spdk:cnode17398", 00:13:44.662 "min_cntlid": 65520, 00:13:44.662 "method": "nvmf_create_subsystem", 00:13:44.662 "req_id": 1 00:13:44.662 } 00:13:44.662 Got JSON-RPC error response 00:13:44.662 response: 00:13:44.662 { 00:13:44.662 "code": -32602, 00:13:44.662 "message": "Invalid cntlid range [65520-65519]" 00:13:44.662 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:44.662 05:09:21 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23860 -I 0 00:13:44.920 [2024-04-24 05:09:21.982532] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23860: invalid cntlid range [1-0] 00:13:44.920 05:09:22 -- target/invalid.sh@77 -- # out='request: 00:13:44.920 { 00:13:44.920 "nqn": "nqn.2016-06.io.spdk:cnode23860", 00:13:44.920 "max_cntlid": 0, 00:13:44.920 "method": "nvmf_create_subsystem", 00:13:44.920 "req_id": 1 00:13:44.920 } 
00:13:44.920 Got JSON-RPC error response 00:13:44.920 response: 00:13:44.920 { 00:13:44.920 "code": -32602, 00:13:44.920 "message": "Invalid cntlid range [1-0]" 00:13:44.920 }' 00:13:44.920 05:09:22 -- target/invalid.sh@78 -- # [[ request: 00:13:44.920 { 00:13:44.920 "nqn": "nqn.2016-06.io.spdk:cnode23860", 00:13:44.920 "max_cntlid": 0, 00:13:44.920 "method": "nvmf_create_subsystem", 00:13:44.920 "req_id": 1 00:13:44.920 } 00:13:44.920 Got JSON-RPC error response 00:13:44.920 response: 00:13:44.920 { 00:13:44.920 "code": -32602, 00:13:44.920 "message": "Invalid cntlid range [1-0]" 00:13:44.920 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:44.920 05:09:22 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17352 -I 65520 00:13:45.178 [2024-04-24 05:09:22.219359] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17352: invalid cntlid range [1-65520] 00:13:45.178 05:09:22 -- target/invalid.sh@79 -- # out='request: 00:13:45.178 { 00:13:45.178 "nqn": "nqn.2016-06.io.spdk:cnode17352", 00:13:45.178 "max_cntlid": 65520, 00:13:45.178 "method": "nvmf_create_subsystem", 00:13:45.178 "req_id": 1 00:13:45.178 } 00:13:45.178 Got JSON-RPC error response 00:13:45.178 response: 00:13:45.178 { 00:13:45.178 "code": -32602, 00:13:45.178 "message": "Invalid cntlid range [1-65520]" 00:13:45.178 }' 00:13:45.178 05:09:22 -- target/invalid.sh@80 -- # [[ request: 00:13:45.178 { 00:13:45.178 "nqn": "nqn.2016-06.io.spdk:cnode17352", 00:13:45.178 "max_cntlid": 65520, 00:13:45.178 "method": "nvmf_create_subsystem", 00:13:45.178 "req_id": 1 00:13:45.178 } 00:13:45.178 Got JSON-RPC error response 00:13:45.178 response: 00:13:45.178 { 00:13:45.178 "code": -32602, 00:13:45.178 "message": "Invalid cntlid range [1-65520]" 00:13:45.178 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:45.178 05:09:22 -- target/invalid.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19346 -i 6 -I 5 00:13:45.436 [2024-04-24 05:09:22.464181] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19346: invalid cntlid range [6-5] 00:13:45.436 05:09:22 -- target/invalid.sh@83 -- # out='request: 00:13:45.436 { 00:13:45.436 "nqn": "nqn.2016-06.io.spdk:cnode19346", 00:13:45.436 "min_cntlid": 6, 00:13:45.436 "max_cntlid": 5, 00:13:45.436 "method": "nvmf_create_subsystem", 00:13:45.436 "req_id": 1 00:13:45.436 } 00:13:45.436 Got JSON-RPC error response 00:13:45.436 response: 00:13:45.436 { 00:13:45.436 "code": -32602, 00:13:45.436 "message": "Invalid cntlid range [6-5]" 00:13:45.436 }' 00:13:45.436 05:09:22 -- target/invalid.sh@84 -- # [[ request: 00:13:45.436 { 00:13:45.436 "nqn": "nqn.2016-06.io.spdk:cnode19346", 00:13:45.436 "min_cntlid": 6, 00:13:45.437 "max_cntlid": 5, 00:13:45.437 "method": "nvmf_create_subsystem", 00:13:45.437 "req_id": 1 00:13:45.437 } 00:13:45.437 Got JSON-RPC error response 00:13:45.437 response: 00:13:45.437 { 00:13:45.437 "code": -32602, 00:13:45.437 "message": "Invalid cntlid range [6-5]" 00:13:45.437 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:45.437 05:09:22 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:45.437 05:09:22 -- target/invalid.sh@87 -- # out='request: 00:13:45.437 { 00:13:45.437 "name": "foobar", 00:13:45.437 "method": "nvmf_delete_target", 00:13:45.437 "req_id": 1 00:13:45.437 } 00:13:45.437 Got JSON-RPC error response 00:13:45.437 response: 00:13:45.437 { 00:13:45.437 "code": -32602, 00:13:45.437 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:13:45.437 }' 00:13:45.437 05:09:22 -- target/invalid.sh@88 -- # [[ request: 00:13:45.437 { 00:13:45.437 "name": "foobar", 00:13:45.437 "method": "nvmf_delete_target", 00:13:45.437 "req_id": 1 00:13:45.437 } 00:13:45.437 Got JSON-RPC error response 00:13:45.437 response: 00:13:45.437 { 00:13:45.437 "code": -32602, 00:13:45.437 "message": "The specified target doesn't exist, cannot delete it." 00:13:45.437 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:45.437 05:09:22 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:45.437 05:09:22 -- target/invalid.sh@91 -- # nvmftestfini 00:13:45.437 05:09:22 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:45.437 05:09:22 -- nvmf/common.sh@117 -- # sync 00:13:45.437 05:09:22 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:45.437 05:09:22 -- nvmf/common.sh@120 -- # set +e 00:13:45.437 05:09:22 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:45.437 05:09:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:45.437 rmmod nvme_tcp 00:13:45.437 rmmod nvme_fabrics 00:13:45.437 rmmod nvme_keyring 00:13:45.437 05:09:22 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:45.437 05:09:22 -- nvmf/common.sh@124 -- # set -e 00:13:45.437 05:09:22 -- nvmf/common.sh@125 -- # return 0 00:13:45.437 05:09:22 -- nvmf/common.sh@478 -- # '[' -n 1832490 ']' 00:13:45.437 05:09:22 -- nvmf/common.sh@479 -- # killprocess 1832490 00:13:45.437 05:09:22 -- common/autotest_common.sh@936 -- # '[' -z 1832490 ']' 00:13:45.437 05:09:22 -- common/autotest_common.sh@940 -- # kill -0 1832490 00:13:45.437 05:09:22 -- common/autotest_common.sh@941 -- # uname 00:13:45.437 05:09:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:45.437 05:09:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1832490 00:13:45.437 05:09:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:45.437 05:09:22 -- 
common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:45.437 05:09:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1832490' 00:13:45.437 killing process with pid 1832490 00:13:45.437 05:09:22 -- common/autotest_common.sh@955 -- # kill 1832490 00:13:45.437 05:09:22 -- common/autotest_common.sh@960 -- # wait 1832490 00:13:45.695 05:09:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:45.695 05:09:22 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:45.695 05:09:22 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:45.695 05:09:22 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:45.695 05:09:22 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:45.695 05:09:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.695 05:09:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:45.695 05:09:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.240 05:09:24 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:48.240 00:13:48.240 real 0m8.509s 00:13:48.240 user 0m19.853s 00:13:48.240 sys 0m2.377s 00:13:48.240 05:09:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:48.240 05:09:24 -- common/autotest_common.sh@10 -- # set +x 00:13:48.240 ************************************ 00:13:48.240 END TEST nvmf_invalid 00:13:48.240 ************************************ 00:13:48.240 05:09:24 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:48.240 05:09:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:48.240 05:09:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:48.240 05:09:24 -- common/autotest_common.sh@10 -- # set +x 00:13:48.240 ************************************ 00:13:48.240 START TEST nvmf_abort 00:13:48.240 ************************************ 00:13:48.240 05:09:25 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:48.240 * Looking for test storage... 00:13:48.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:48.240 05:09:25 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:48.240 05:09:25 -- nvmf/common.sh@7 -- # uname -s 00:13:48.240 05:09:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:48.240 05:09:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:48.240 05:09:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:48.240 05:09:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:48.240 05:09:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:48.240 05:09:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:48.240 05:09:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:48.240 05:09:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:48.240 05:09:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:48.240 05:09:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:48.240 05:09:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:48.240 05:09:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:48.240 05:09:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:48.240 05:09:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:48.240 05:09:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:48.240 05:09:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:48.240 05:09:25 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:48.240 05:09:25 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:48.240 05:09:25 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
00:13:48.240 05:09:25 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:48.240 05:09:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.240 05:09:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.240 05:09:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.240 05:09:25 -- paths/export.sh@5 -- # export PATH 00:13:48.240 05:09:25 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.240 05:09:25 -- nvmf/common.sh@47 -- # : 0 00:13:48.240 05:09:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:48.240 05:09:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:48.240 05:09:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:48.240 05:09:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:48.240 05:09:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:48.240 05:09:25 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:48.240 05:09:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:48.240 05:09:25 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:48.240 05:09:25 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:48.240 05:09:25 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:48.241 05:09:25 -- target/abort.sh@14 -- # nvmftestinit 00:13:48.241 05:09:25 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:48.241 05:09:25 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:48.241 05:09:25 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:48.241 05:09:25 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:48.241 05:09:25 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:48.241 05:09:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.241 05:09:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:48.241 05:09:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.241 05:09:25 -- 
nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:48.241 05:09:25 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:48.241 05:09:25 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:48.241 05:09:25 -- common/autotest_common.sh@10 -- # set +x 00:13:50.147 05:09:27 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:50.147 05:09:27 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:50.147 05:09:27 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:50.147 05:09:27 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:50.147 05:09:27 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:50.147 05:09:27 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:50.147 05:09:27 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:50.147 05:09:27 -- nvmf/common.sh@295 -- # net_devs=() 00:13:50.147 05:09:27 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:50.147 05:09:27 -- nvmf/common.sh@296 -- # e810=() 00:13:50.147 05:09:27 -- nvmf/common.sh@296 -- # local -ga e810 00:13:50.147 05:09:27 -- nvmf/common.sh@297 -- # x722=() 00:13:50.147 05:09:27 -- nvmf/common.sh@297 -- # local -ga x722 00:13:50.147 05:09:27 -- nvmf/common.sh@298 -- # mlx=() 00:13:50.147 05:09:27 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:50.147 05:09:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:50.147 05:09:27 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:50.147 05:09:27 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:50.147 05:09:27 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:50.147 05:09:27 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:50.147 05:09:27 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:50.147 05:09:27 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:50.147 05:09:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:50.147 05:09:27 -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:50.147 05:09:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:50.147 05:09:27 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:50.147 05:09:27 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:50.147 05:09:27 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:50.147 05:09:27 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:50.147 05:09:27 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:50.147 05:09:27 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:50.147 05:09:27 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:50.147 05:09:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:50.147 05:09:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:50.147 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:50.147 05:09:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:50.147 05:09:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:50.147 05:09:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:50.147 05:09:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:50.147 05:09:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:50.147 05:09:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:50.147 05:09:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:50.147 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:50.147 05:09:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:50.147 05:09:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:50.147 05:09:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:50.147 05:09:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:50.147 05:09:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:50.147 05:09:27 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:50.147 05:09:27 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:50.147 05:09:27 -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:50.147 05:09:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:50.147 05:09:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:50.147 05:09:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:50.147 05:09:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:50.147 05:09:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:50.147 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:50.147 05:09:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:50.147 05:09:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:50.147 05:09:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:50.147 05:09:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:50.147 05:09:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:50.147 05:09:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:50.147 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:50.147 05:09:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:50.147 05:09:27 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:50.147 05:09:27 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:50.147 05:09:27 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:50.147 05:09:27 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:50.147 05:09:27 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:50.147 05:09:27 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:50.147 05:09:27 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:50.147 05:09:27 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:50.147 05:09:27 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:50.147 05:09:27 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:50.147 05:09:27 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:50.147 05:09:27 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:50.147 05:09:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:50.147 05:09:27 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:50.147 05:09:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:50.147 05:09:27 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:50.147 05:09:27 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:50.147 05:09:27 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:50.147 05:09:27 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:50.147 05:09:27 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:50.147 05:09:27 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:50.147 05:09:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:50.147 05:09:27 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:50.147 05:09:27 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:50.147 05:09:27 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:50.147 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:50.147 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:13:50.147 00:13:50.147 --- 10.0.0.2 ping statistics --- 00:13:50.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.147 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:13:50.147 05:09:27 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:50.147 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:50.147 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:13:50.147 00:13:50.147 --- 10.0.0.1 ping statistics --- 00:13:50.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.147 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:13:50.147 05:09:27 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:50.147 05:09:27 -- nvmf/common.sh@411 -- # return 0 00:13:50.147 05:09:27 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:50.147 05:09:27 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:50.147 05:09:27 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:50.147 05:09:27 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:50.147 05:09:27 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:50.147 05:09:27 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:50.147 05:09:27 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:50.147 05:09:27 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:50.147 05:09:27 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:50.147 05:09:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:50.147 05:09:27 -- common/autotest_common.sh@10 -- # set +x 00:13:50.147 05:09:27 -- nvmf/common.sh@470 -- # nvmfpid=1835012 00:13:50.147 05:09:27 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:50.147 05:09:27 -- nvmf/common.sh@471 -- # waitforlisten 1835012 00:13:50.147 05:09:27 -- common/autotest_common.sh@817 -- # '[' -z 1835012 ']' 00:13:50.147 05:09:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.147 05:09:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:50.147 05:09:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:50.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.147 05:09:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:50.147 05:09:27 -- common/autotest_common.sh@10 -- # set +x 00:13:50.147 [2024-04-24 05:09:27.249998] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:13:50.147 [2024-04-24 05:09:27.250079] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:50.147 EAL: No free 2048 kB hugepages reported on node 1 00:13:50.147 [2024-04-24 05:09:27.288417] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:50.147 [2024-04-24 05:09:27.320347] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:50.147 [2024-04-24 05:09:27.412051] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:50.147 [2024-04-24 05:09:27.412119] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:50.148 [2024-04-24 05:09:27.412135] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:50.148 [2024-04-24 05:09:27.412149] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:50.148 [2024-04-24 05:09:27.412162] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:50.148 [2024-04-24 05:09:27.412261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:50.148 [2024-04-24 05:09:27.412316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:50.148 [2024-04-24 05:09:27.412319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:50.407 05:09:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:50.407 05:09:27 -- common/autotest_common.sh@850 -- # return 0 00:13:50.407 05:09:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:50.407 05:09:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:50.407 05:09:27 -- common/autotest_common.sh@10 -- # set +x 00:13:50.407 05:09:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:50.407 05:09:27 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:50.407 05:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.407 05:09:27 -- common/autotest_common.sh@10 -- # set +x 00:13:50.407 [2024-04-24 05:09:27.556375] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:50.407 05:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.407 05:09:27 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:50.407 05:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.407 05:09:27 -- common/autotest_common.sh@10 -- # set +x 00:13:50.407 Malloc0 00:13:50.407 05:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.407 05:09:27 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:50.407 05:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.407 05:09:27 -- common/autotest_common.sh@10 -- # set +x 00:13:50.407 Delay0 00:13:50.407 05:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.407 05:09:27 -- target/abort.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:50.407 05:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.407 05:09:27 -- common/autotest_common.sh@10 -- # set +x 00:13:50.407 05:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.407 05:09:27 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:50.407 05:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.407 05:09:27 -- common/autotest_common.sh@10 -- # set +x 00:13:50.407 05:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.407 05:09:27 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:50.407 05:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.407 05:09:27 -- common/autotest_common.sh@10 -- # set +x 00:13:50.407 [2024-04-24 05:09:27.622894] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:50.407 05:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.408 05:09:27 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:50.408 05:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.408 05:09:27 -- common/autotest_common.sh@10 -- # set +x 00:13:50.408 05:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.408 05:09:27 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:50.408 EAL: No free 2048 kB hugepages reported on node 1 00:13:50.666 [2024-04-24 05:09:27.728080] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:52.568 Initializing NVMe Controllers 00:13:52.568 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode0 00:13:52.568 controller IO queue size 128 less than required 00:13:52.568 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:52.568 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:52.568 Initialization complete. Launching workers. 00:13:52.568 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33791 00:13:52.568 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33852, failed to submit 62 00:13:52.568 success 33795, unsuccess 57, failed 0 00:13:52.568 05:09:29 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:52.568 05:09:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:52.568 05:09:29 -- common/autotest_common.sh@10 -- # set +x 00:13:52.568 05:09:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:52.568 05:09:29 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:52.568 05:09:29 -- target/abort.sh@38 -- # nvmftestfini 00:13:52.568 05:09:29 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:52.568 05:09:29 -- nvmf/common.sh@117 -- # sync 00:13:52.568 05:09:29 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:52.568 05:09:29 -- nvmf/common.sh@120 -- # set +e 00:13:52.568 05:09:29 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:52.568 05:09:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:52.568 rmmod nvme_tcp 00:13:52.568 rmmod nvme_fabrics 00:13:52.828 rmmod nvme_keyring 00:13:52.828 05:09:29 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:52.828 05:09:29 -- nvmf/common.sh@124 -- # set -e 00:13:52.828 05:09:29 -- nvmf/common.sh@125 -- # return 0 00:13:52.828 05:09:29 -- nvmf/common.sh@478 -- # '[' -n 1835012 ']' 00:13:52.828 05:09:29 -- nvmf/common.sh@479 -- # killprocess 1835012 00:13:52.828 05:09:29 -- common/autotest_common.sh@936 -- # '[' -z 1835012 ']' 00:13:52.828 05:09:29 
-- common/autotest_common.sh@940 -- # kill -0 1835012 00:13:52.828 05:09:29 -- common/autotest_common.sh@941 -- # uname 00:13:52.828 05:09:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:52.828 05:09:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1835012 00:13:52.828 05:09:29 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:52.828 05:09:29 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:52.828 05:09:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1835012' 00:13:52.828 killing process with pid 1835012 00:13:52.828 05:09:29 -- common/autotest_common.sh@955 -- # kill 1835012 00:13:52.828 05:09:29 -- common/autotest_common.sh@960 -- # wait 1835012 00:13:53.089 05:09:30 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:53.089 05:09:30 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:53.089 05:09:30 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:53.089 05:09:30 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:53.089 05:09:30 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:53.089 05:09:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:53.089 05:09:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:53.089 05:09:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.997 05:09:32 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:54.997 00:13:54.997 real 0m7.106s 00:13:54.997 user 0m10.327s 00:13:54.997 sys 0m2.467s 00:13:54.997 05:09:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:54.997 05:09:32 -- common/autotest_common.sh@10 -- # set +x 00:13:54.997 ************************************ 00:13:54.997 END TEST nvmf_abort 00:13:54.997 ************************************ 00:13:54.997 05:09:32 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 
00:13:54.997 05:09:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:54.997 05:09:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:54.997 05:09:32 -- common/autotest_common.sh@10 -- # set +x 00:13:55.266 ************************************ 00:13:55.267 START TEST nvmf_ns_hotplug_stress 00:13:55.267 ************************************ 00:13:55.267 05:09:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:55.267 * Looking for test storage... 00:13:55.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:55.267 05:09:32 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:55.267 05:09:32 -- nvmf/common.sh@7 -- # uname -s 00:13:55.267 05:09:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:55.267 05:09:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:55.267 05:09:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:55.267 05:09:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:55.267 05:09:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:55.267 05:09:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:55.267 05:09:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:55.267 05:09:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:55.267 05:09:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:55.267 05:09:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:55.267 05:09:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:55.267 05:09:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:55.267 05:09:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:55.267 05:09:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:13:55.267 05:09:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:55.267 05:09:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:55.267 05:09:32 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:55.267 05:09:32 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:55.267 05:09:32 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:55.267 05:09:32 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:55.267 05:09:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.267 05:09:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.267 05:09:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.267 05:09:32 -- paths/export.sh@5 -- # export PATH 00:13:55.267 05:09:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.267 05:09:32 -- nvmf/common.sh@47 -- # : 0 00:13:55.267 05:09:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:55.267 05:09:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:55.267 05:09:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:55.267 05:09:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:55.267 05:09:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:55.267 05:09:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:55.267 05:09:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:55.267 05:09:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:55.267 05:09:32 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:55.267 05:09:32 -- target/ns_hotplug_stress.sh@13 -- # 
nvmftestinit 00:13:55.267 05:09:32 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:55.267 05:09:32 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:55.267 05:09:32 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:55.267 05:09:32 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:55.267 05:09:32 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:55.267 05:09:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:55.267 05:09:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:55.267 05:09:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:55.267 05:09:32 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:55.267 05:09:32 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:55.267 05:09:32 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:55.267 05:09:32 -- common/autotest_common.sh@10 -- # set +x 00:13:57.213 05:09:34 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:57.213 05:09:34 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:57.213 05:09:34 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:57.213 05:09:34 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:57.213 05:09:34 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:57.213 05:09:34 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:57.213 05:09:34 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:57.213 05:09:34 -- nvmf/common.sh@295 -- # net_devs=() 00:13:57.213 05:09:34 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:57.213 05:09:34 -- nvmf/common.sh@296 -- # e810=() 00:13:57.213 05:09:34 -- nvmf/common.sh@296 -- # local -ga e810 00:13:57.213 05:09:34 -- nvmf/common.sh@297 -- # x722=() 00:13:57.213 05:09:34 -- nvmf/common.sh@297 -- # local -ga x722 00:13:57.213 05:09:34 -- nvmf/common.sh@298 -- # mlx=() 00:13:57.213 05:09:34 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:57.213 05:09:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:57.213 05:09:34 -- 
nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:57.213 05:09:34 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:57.213 05:09:34 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:57.213 05:09:34 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:57.213 05:09:34 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:57.213 05:09:34 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:57.213 05:09:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:57.213 05:09:34 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:57.213 05:09:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:57.213 05:09:34 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:57.213 05:09:34 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:57.213 05:09:34 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:57.213 05:09:34 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:57.213 05:09:34 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:57.213 05:09:34 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:57.213 05:09:34 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:57.213 05:09:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:57.213 05:09:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:57.213 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:57.213 05:09:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:57.213 05:09:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:57.213 05:09:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:57.213 05:09:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:57.213 05:09:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:57.213 05:09:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:57.213 05:09:34 -- 
nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:57.213 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:57.213 05:09:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:57.213 05:09:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:57.214 05:09:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:57.214 05:09:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:57.214 05:09:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:57.214 05:09:34 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:57.214 05:09:34 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:57.214 05:09:34 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:57.214 05:09:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:57.214 05:09:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:57.214 05:09:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:57.214 05:09:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:57.214 05:09:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:57.214 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:57.214 05:09:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:57.214 05:09:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:57.214 05:09:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:57.214 05:09:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:57.214 05:09:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:57.214 05:09:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:57.214 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:57.214 05:09:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:57.214 05:09:34 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:57.214 05:09:34 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:57.214 05:09:34 -- nvmf/common.sh@405 -- # [[ yes == yes 
]] 00:13:57.214 05:09:34 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:57.214 05:09:34 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:57.214 05:09:34 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:57.214 05:09:34 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:57.214 05:09:34 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:57.214 05:09:34 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:57.214 05:09:34 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:57.214 05:09:34 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:57.214 05:09:34 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:57.214 05:09:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:57.214 05:09:34 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:57.214 05:09:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:57.214 05:09:34 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:57.214 05:09:34 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:57.214 05:09:34 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:57.214 05:09:34 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:57.214 05:09:34 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:57.214 05:09:34 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:57.214 05:09:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:57.214 05:09:34 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:57.214 05:09:34 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:57.214 05:09:34 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:57.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:57.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:13:57.214 00:13:57.214 --- 10.0.0.2 ping statistics --- 00:13:57.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.214 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:13:57.214 05:09:34 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:57.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:57.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:13:57.214 00:13:57.214 --- 10.0.0.1 ping statistics --- 00:13:57.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.214 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:13:57.214 05:09:34 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:57.214 05:09:34 -- nvmf/common.sh@411 -- # return 0 00:13:57.214 05:09:34 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:57.214 05:09:34 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:57.214 05:09:34 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:57.214 05:09:34 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:57.214 05:09:34 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:57.214 05:09:34 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:57.214 05:09:34 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:57.214 05:09:34 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:13:57.214 05:09:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:57.214 05:09:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:57.214 05:09:34 -- common/autotest_common.sh@10 -- # set +x 00:13:57.214 05:09:34 -- nvmf/common.sh@470 -- # nvmfpid=1837351 00:13:57.214 05:09:34 -- nvmf/common.sh@471 -- # waitforlisten 1837351 00:13:57.214 05:09:34 -- common/autotest_common.sh@817 -- # '[' -z 1837351 ']' 00:13:57.214 05:09:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.214 05:09:34 -- 
nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:57.214 05:09:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:57.214 05:09:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:57.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.214 05:09:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:57.214 05:09:34 -- common/autotest_common.sh@10 -- # set +x 00:13:57.474 [2024-04-24 05:09:34.513720] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:13:57.474 [2024-04-24 05:09:34.513813] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:57.474 EAL: No free 2048 kB hugepages reported on node 1 00:13:57.474 [2024-04-24 05:09:34.551934] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:57.474 [2024-04-24 05:09:34.585474] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:57.474 [2024-04-24 05:09:34.677226] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:57.474 [2024-04-24 05:09:34.677284] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:57.474 [2024-04-24 05:09:34.677301] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:57.474 [2024-04-24 05:09:34.677315] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:13:57.474 [2024-04-24 05:09:34.677328] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:57.474 [2024-04-24 05:09:34.677402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:57.474 [2024-04-24 05:09:34.677456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:57.474 [2024-04-24 05:09:34.677460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:57.732 05:09:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:57.732 05:09:34 -- common/autotest_common.sh@850 -- # return 0 00:13:57.732 05:09:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:57.732 05:09:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:57.732 05:09:34 -- common/autotest_common.sh@10 -- # set +x 00:13:57.732 05:09:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:57.732 05:09:34 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:13:57.732 05:09:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:57.989 [2024-04-24 05:09:35.058894] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:57.989 05:09:35 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:58.247 05:09:35 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:58.505 [2024-04-24 05:09:35.549537] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:58.505 05:09:35 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
discovery -t tcp -a 10.0.0.2 -s 4420 00:13:58.762 05:09:35 -- target/ns_hotplug_stress.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:59.021 Malloc0 00:13:59.021 05:09:36 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:59.021 Delay0 00:13:59.279 05:09:36 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:59.279 05:09:36 -- target/ns_hotplug_stress.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:59.537 NULL1 00:13:59.537 05:09:36 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:59.794 05:09:37 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=1837657 00:13:59.794 05:09:37 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:59.794 05:09:37 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1837657 00:13:59.794 05:09:37 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.053 EAL: No free 2048 kB hugepages reported on node 1 00:14:00.988 Read completed with error (sct=0, sc=11) 00:14:00.988 05:09:38 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:00.988 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:00.988 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:14:01.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:01.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:01.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:01.245 05:09:38 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:14:01.245 05:09:38 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:14:01.503 true 00:14:01.503 05:09:38 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1837657 00:14:01.503 05:09:38 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.436 05:09:39 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:02.436 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:02.694 05:09:39 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:14:02.694 05:09:39 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:14:02.952 true 00:14:02.952 05:09:40 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1837657 00:14:02.952 05:09:40 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.210 05:09:40 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:03.468 05:09:40 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:14:03.468 05:09:40 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:14:03.726 true 
00:14:03.726 05:09:40 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1837657 00:14:03.726 05:09:40 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.984 05:09:41 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:04.243 05:09:41 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:14:04.243 05:09:41 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:14:04.501 true 00:14:04.501 05:09:41 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1837657 00:14:04.501 05:09:41 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.436 05:09:42 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:05.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:05.694 05:09:42 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:14:05.694 05:09:42 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:14:05.952 true 00:14:05.952 05:09:43 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1837657 00:14:05.952 05:09:43 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.210 05:09:43 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:06.467 05:09:43 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:14:06.467 05:09:43 -- 
target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:14:06.725 true 00:14:06.725 05:09:43 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1837657 00:14:06.725 05:09:43 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.659 05:09:44 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:07.659 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:07.659 05:09:44 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:14:07.659 05:09:44 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:14:07.916 true 00:14:07.916 05:09:45 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1837657 00:14:07.916 05:09:45 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.173 05:09:45 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:08.430 05:09:45 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:14:08.430 05:09:45 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:14:08.687 true 00:14:08.687 05:09:45 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1837657 00:14:08.687 05:09:45 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:09.625 05:09:46 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:09.625 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:09.884 05:09:47 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:14:09.884 05:09:47 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:14:10.186 true 00:14:10.186 05:09:47 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1837657 00:14:10.186 05:09:47 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:10.445 05:09:47 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:10.703 05:09:47 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:14:10.703 05:09:47 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:14:10.703 true 00:14:10.963 05:09:47 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1837657 00:14:10.963 05:09:47 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.899 05:09:48 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:11.899 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:11.899 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:11.899 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:11.899 05:09:49 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:14:11.899 05:09:49 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 
00:14:12.157 true 00:14:12.157 05:09:49 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1837657 00:14:12.157 05:09:49 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:12.414 05:09:49 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:12.672 05:09:49 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:14:12.672 05:09:49 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:14:12.930 true 00:14:12.930 05:09:50 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1837657 00:14:12.930 05:09:50 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:13.866 05:09:51 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:14.124 05:09:51 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:14:14.124 05:09:51 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:14:14.382 true 00:14:14.382 05:09:51 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1837657 00:14:14.382 05:09:51 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:14.639 05:09:51 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:14.896 05:09:52 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:14:14.896 05:09:52 -- target/ns_hotplug_stress.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:14:15.154 true 00:14:15.154 05:09:52 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1837657 00:14:15.154 05:09:52 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:15.412 05:09:52 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:15.670 05:09:52 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:14:15.670 05:09:52 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:14:15.928 true 00:14:15.928 05:09:53 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1837657 00:14:15.928 05:09:53 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:16.865 05:09:54 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:16.865 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:17.123 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:17.123 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:17.123 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:17.123 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:17.123 05:09:54 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:14:17.123 05:09:54 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:14:17.381 true 00:14:17.381 05:09:54 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1837657 00:14:17.381 
05:09:54 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:18.318 05:09:55 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:18.576 05:09:55 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:14:18.576 05:09:55 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:14:18.835 true 00:14:18.835 05:09:55 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1837657 00:14:18.835 05:09:55 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:19.095 05:09:56 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:19.095 05:09:56 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:14:19.095 05:09:56 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:14:19.352 true 00:14:19.353 05:09:56 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1837657 00:14:19.353 05:09:56 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:20.286 05:09:57 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:20.544 05:09:57 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:14:20.544 05:09:57 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:14:20.802 true 00:14:20.802 05:09:57 -- 
target/ns_hotplug_stress.sh@35 -- # kill -0 1837657 00:14:20.802 05:09:57 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.060 05:09:58 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:21.318 05:09:58 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:14:21.318 05:09:58 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:14:21.576 true 00:14:21.576 05:09:58 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1837657 00:14:21.576 05:09:58 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.834 05:09:58 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:22.091 05:09:59 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:14:22.091 05:09:59 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:14:22.349 true 00:14:22.349 05:09:59 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1837657 00:14:22.349 05:09:59 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.282 05:10:00 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:23.282 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:23.282 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:23.583 Message suppressed 999 times: 
Read completed with error (sct=0, sc=11) 00:14:23.583 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:23.583 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:23.583 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:23.583 05:10:00 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:14:23.583 05:10:00 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:14:23.843 true 00:14:23.843 05:10:01 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1837657 00:14:23.843 05:10:01 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:24.781 05:10:01 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:24.781 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:25.044 05:10:02 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:14:25.044 05:10:02 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:14:25.044 true 00:14:25.044 05:10:02 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1837657 00:14:25.044 05:10:02 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.302 05:10:02 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:25.559 05:10:02 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:14:25.559 05:10:02 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:14:25.817 true 
00:14:25.817 05:10:03 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1837657 00:14:25.817 05:10:03 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.754 05:10:03 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:26.754 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:27.011 05:10:04 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:14:27.011 05:10:04 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:14:27.269 true 00:14:27.269 05:10:04 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1837657 00:14:27.269 05:10:04 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:27.526 05:10:04 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:27.784 05:10:04 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:14:27.784 05:10:04 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:14:28.042 true 00:14:28.042 05:10:05 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1837657 00:14:28.042 05:10:05 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:28.980 05:10:06 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:28.980 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:28.980 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:29.238 05:10:06 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:14:29.238 05:10:06 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:14:29.238 true 00:14:29.497 05:10:06 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1837657 00:14:29.497 05:10:06 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.497 05:10:06 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:29.755 05:10:07 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:14:29.755 05:10:07 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:14:30.013 true 00:14:30.013 05:10:07 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1837657 00:14:30.013 05:10:07 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:31.390 Initializing NVMe Controllers 00:14:31.390 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:31.390 Controller IO queue size 128, less than required. 00:14:31.390 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:31.390 Controller IO queue size 128, less than required. 00:14:31.390 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:14:31.390 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:31.390 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:31.390 Initialization complete. Launching workers. 00:14:31.390 ======================================================== 00:14:31.390 Latency(us) 00:14:31.390 Device Information : IOPS MiB/s Average min max 00:14:31.390 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 986.71 0.48 74097.66 3246.66 1081871.72 00:14:31.390 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11036.12 5.39 11599.40 3726.17 445984.11 00:14:31.390 ======================================================== 00:14:31.390 Total : 12022.83 5.87 16728.60 3246.66 1081871.72 00:14:31.390 00:14:31.390 05:10:08 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:31.390 05:10:08 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 00:14:31.390 05:10:08 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:14:31.651 true 00:14:31.651 05:10:08 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1837657 00:14:31.651 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (1837657) - No such process 00:14:31.651 05:10:08 -- target/ns_hotplug_stress.sh@44 -- # wait 1837657 00:14:31.651 05:10:08 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:31.651 05:10:08 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:14:31.651 05:10:08 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:31.651 05:10:08 -- nvmf/common.sh@117 -- # sync 00:14:31.651 05:10:08 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:31.651 05:10:08 -- nvmf/common.sh@120 -- # set +e 00:14:31.651 05:10:08 -- nvmf/common.sh@121 -- # for i in 
{1..20} 00:14:31.651 05:10:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:31.651 rmmod nvme_tcp 00:14:31.651 rmmod nvme_fabrics 00:14:31.651 rmmod nvme_keyring 00:14:31.651 05:10:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:31.651 05:10:08 -- nvmf/common.sh@124 -- # set -e 00:14:31.651 05:10:08 -- nvmf/common.sh@125 -- # return 0 00:14:31.651 05:10:08 -- nvmf/common.sh@478 -- # '[' -n 1837351 ']' 00:14:31.651 05:10:08 -- nvmf/common.sh@479 -- # killprocess 1837351 00:14:31.651 05:10:08 -- common/autotest_common.sh@936 -- # '[' -z 1837351 ']' 00:14:31.651 05:10:08 -- common/autotest_common.sh@940 -- # kill -0 1837351 00:14:31.651 05:10:08 -- common/autotest_common.sh@941 -- # uname 00:14:31.651 05:10:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:31.651 05:10:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1837351 00:14:31.651 05:10:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:31.651 05:10:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:31.651 05:10:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1837351' 00:14:31.651 killing process with pid 1837351 00:14:31.651 05:10:08 -- common/autotest_common.sh@955 -- # kill 1837351 00:14:31.651 05:10:08 -- common/autotest_common.sh@960 -- # wait 1837351 00:14:31.911 05:10:09 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:31.911 05:10:09 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:31.911 05:10:09 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:31.911 05:10:09 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:31.911 05:10:09 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:31.911 05:10:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.911 05:10:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:31.911 05:10:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.818 05:10:11 
-- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:33.818 00:14:33.818 real 0m38.779s 00:14:33.818 user 2m25.722s 00:14:33.818 sys 0m12.305s 00:14:33.818 05:10:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:33.818 05:10:11 -- common/autotest_common.sh@10 -- # set +x 00:14:33.818 ************************************ 00:14:33.818 END TEST nvmf_ns_hotplug_stress 00:14:33.818 ************************************ 00:14:34.076 05:10:11 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:34.076 05:10:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:34.076 05:10:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:34.076 05:10:11 -- common/autotest_common.sh@10 -- # set +x 00:14:34.076 ************************************ 00:14:34.076 START TEST nvmf_connect_stress 00:14:34.076 ************************************ 00:14:34.076 05:10:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:34.076 * Looking for test storage... 
00:14:34.076 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:34.076 05:10:11 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:34.076 05:10:11 -- nvmf/common.sh@7 -- # uname -s 00:14:34.076 05:10:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:34.076 05:10:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:34.076 05:10:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:34.076 05:10:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:34.076 05:10:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:34.076 05:10:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:34.076 05:10:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:34.076 05:10:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:34.076 05:10:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:34.076 05:10:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:34.076 05:10:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:34.076 05:10:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:34.076 05:10:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:34.076 05:10:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:34.076 05:10:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:34.076 05:10:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:34.076 05:10:11 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:34.076 05:10:11 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:34.076 05:10:11 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:34.076 05:10:11 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:34.076 05:10:11 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.076 05:10:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.076 05:10:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.076 05:10:11 -- paths/export.sh@5 -- # export PATH 00:14:34.076 05:10:11 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.076 05:10:11 -- nvmf/common.sh@47 -- # : 0 00:14:34.076 05:10:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:34.076 05:10:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:34.076 05:10:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:34.076 05:10:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:34.076 05:10:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:34.076 05:10:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:34.076 05:10:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:34.076 05:10:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:34.076 05:10:11 -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:34.076 05:10:11 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:34.076 05:10:11 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:34.076 05:10:11 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:34.076 05:10:11 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:34.076 05:10:11 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:34.076 05:10:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.076 05:10:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:34.076 05:10:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.076 05:10:11 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:34.076 05:10:11 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:34.076 05:10:11 -- 
nvmf/common.sh@285 -- # xtrace_disable 00:14:34.076 05:10:11 -- common/autotest_common.sh@10 -- # set +x 00:14:35.982 05:10:13 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:35.982 05:10:13 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:35.983 05:10:13 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:35.983 05:10:13 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:35.983 05:10:13 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:35.983 05:10:13 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:35.983 05:10:13 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:35.983 05:10:13 -- nvmf/common.sh@295 -- # net_devs=() 00:14:35.983 05:10:13 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:35.983 05:10:13 -- nvmf/common.sh@296 -- # e810=() 00:14:35.983 05:10:13 -- nvmf/common.sh@296 -- # local -ga e810 00:14:35.983 05:10:13 -- nvmf/common.sh@297 -- # x722=() 00:14:35.983 05:10:13 -- nvmf/common.sh@297 -- # local -ga x722 00:14:35.983 05:10:13 -- nvmf/common.sh@298 -- # mlx=() 00:14:35.983 05:10:13 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:35.983 05:10:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:35.983 05:10:13 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:35.983 05:10:13 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:35.983 05:10:13 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:35.983 05:10:13 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:35.983 05:10:13 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:35.983 05:10:13 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:35.983 05:10:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:35.983 05:10:13 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:35.983 05:10:13 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:35.983 05:10:13 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:35.983 05:10:13 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:35.983 05:10:13 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:35.983 05:10:13 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:35.983 05:10:13 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:35.983 05:10:13 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:35.983 05:10:13 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:35.983 05:10:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:35.983 05:10:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:35.983 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:35.983 05:10:13 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:35.983 05:10:13 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:35.983 05:10:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:35.983 05:10:13 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:35.983 05:10:13 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:35.983 05:10:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:35.983 05:10:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:35.983 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:35.983 05:10:13 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:35.983 05:10:13 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:35.983 05:10:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:35.983 05:10:13 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:35.983 05:10:13 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:35.983 05:10:13 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:35.983 05:10:13 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:35.983 05:10:13 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:35.983 05:10:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:14:35.983 05:10:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:35.983 05:10:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:35.983 05:10:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:35.983 05:10:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:35.983 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:35.983 05:10:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:35.983 05:10:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:35.983 05:10:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:35.983 05:10:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:35.983 05:10:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:35.983 05:10:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:35.983 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:35.983 05:10:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:35.983 05:10:13 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:35.983 05:10:13 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:35.983 05:10:13 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:35.983 05:10:13 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:35.983 05:10:13 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:35.983 05:10:13 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:35.983 05:10:13 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:35.983 05:10:13 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:35.983 05:10:13 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:35.983 05:10:13 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:35.983 05:10:13 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:35.983 05:10:13 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:35.983 05:10:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:14:35.983 05:10:13 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:35.983 05:10:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:35.983 05:10:13 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:35.983 05:10:13 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:35.983 05:10:13 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:36.242 05:10:13 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:36.242 05:10:13 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:36.242 05:10:13 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:36.242 05:10:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:36.242 05:10:13 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:36.242 05:10:13 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:36.242 05:10:13 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:36.242 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:36.242 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:14:36.242 00:14:36.242 --- 10.0.0.2 ping statistics --- 00:14:36.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.242 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:14:36.242 05:10:13 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:36.242 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:36.242 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:14:36.242 00:14:36.242 --- 10.0.0.1 ping statistics --- 00:14:36.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.242 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:14:36.242 05:10:13 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:36.242 05:10:13 -- nvmf/common.sh@411 -- # return 0 00:14:36.242 05:10:13 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:36.242 05:10:13 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:36.242 05:10:13 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:36.242 05:10:13 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:36.242 05:10:13 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:36.242 05:10:13 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:36.242 05:10:13 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:36.242 05:10:13 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:36.242 05:10:13 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:36.242 05:10:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:36.242 05:10:13 -- common/autotest_common.sh@10 -- # set +x 00:14:36.242 05:10:13 -- nvmf/common.sh@470 -- # nvmfpid=1843379 00:14:36.242 05:10:13 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:36.242 05:10:13 -- nvmf/common.sh@471 -- # waitforlisten 1843379 00:14:36.242 05:10:13 -- common/autotest_common.sh@817 -- # '[' -z 1843379 ']' 00:14:36.242 05:10:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.242 05:10:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:36.242 05:10:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:36.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:36.242 05:10:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:36.242 05:10:13 -- common/autotest_common.sh@10 -- # set +x 00:14:36.242 [2024-04-24 05:10:13.406557] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:14:36.242 [2024-04-24 05:10:13.406645] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:36.242 EAL: No free 2048 kB hugepages reported on node 1 00:14:36.242 [2024-04-24 05:10:13.444149] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:36.242 [2024-04-24 05:10:13.476127] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:36.501 [2024-04-24 05:10:13.565284] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:36.501 [2024-04-24 05:10:13.565373] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:36.501 [2024-04-24 05:10:13.565390] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:36.501 [2024-04-24 05:10:13.565404] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:36.501 [2024-04-24 05:10:13.565416] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:36.501 [2024-04-24 05:10:13.565527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:36.501 [2024-04-24 05:10:13.565655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:36.501 [2024-04-24 05:10:13.565660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:36.501 05:10:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:36.501 05:10:13 -- common/autotest_common.sh@850 -- # return 0 00:14:36.501 05:10:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:36.501 05:10:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:36.501 05:10:13 -- common/autotest_common.sh@10 -- # set +x 00:14:36.501 05:10:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:36.502 05:10:13 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:36.502 05:10:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:36.502 05:10:13 -- common/autotest_common.sh@10 -- # set +x 00:14:36.502 [2024-04-24 05:10:13.700957] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:36.502 05:10:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:36.502 05:10:13 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:36.502 05:10:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:36.502 05:10:13 -- common/autotest_common.sh@10 -- # set +x 00:14:36.502 05:10:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:36.502 05:10:13 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:36.502 05:10:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:36.502 05:10:13 -- common/autotest_common.sh@10 -- # set +x 00:14:36.502 [2024-04-24 05:10:13.736769] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:14:36.502 05:10:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:36.502 05:10:13 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:36.502 05:10:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:36.502 05:10:13 -- common/autotest_common.sh@10 -- # set +x 00:14:36.502 NULL1 00:14:36.502 05:10:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:36.502 05:10:13 -- target/connect_stress.sh@21 -- # PERF_PID=1843521 00:14:36.502 05:10:13 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:36.502 05:10:13 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:36.502 05:10:13 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:36.502 05:10:13 -- target/connect_stress.sh@27 -- # seq 1 20 00:14:36.502 05:10:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:36.502 05:10:13 -- target/connect_stress.sh@28 -- # cat 00:14:36.502 05:10:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:36.502 05:10:13 -- target/connect_stress.sh@28 -- # cat 00:14:36.502 05:10:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:36.502 05:10:13 -- target/connect_stress.sh@28 -- # cat 00:14:36.502 05:10:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:36.502 05:10:13 -- target/connect_stress.sh@28 -- # cat 00:14:36.502 05:10:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:36.502 05:10:13 -- target/connect_stress.sh@28 -- # cat 00:14:36.502 05:10:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:36.502 05:10:13 -- target/connect_stress.sh@28 -- # cat 00:14:36.502 05:10:13 -- target/connect_stress.sh@27 -- 
# for i in $(seq 1 20) 00:14:36.502 05:10:13 -- target/connect_stress.sh@28 -- # cat 00:14:36.502 05:10:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:36.502 05:10:13 -- target/connect_stress.sh@28 -- # cat 00:14:36.502 05:10:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:36.502 05:10:13 -- target/connect_stress.sh@28 -- # cat 00:14:36.502 05:10:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:36.502 05:10:13 -- target/connect_stress.sh@28 -- # cat 00:14:36.770 05:10:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:36.770 05:10:13 -- target/connect_stress.sh@28 -- # cat 00:14:36.770 05:10:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:36.770 EAL: No free 2048 kB hugepages reported on node 1 00:14:36.770 05:10:13 -- target/connect_stress.sh@28 -- # cat 00:14:36.770 05:10:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:36.770 05:10:13 -- target/connect_stress.sh@28 -- # cat 00:14:36.770 05:10:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:36.770 05:10:13 -- target/connect_stress.sh@28 -- # cat 00:14:36.770 05:10:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:36.770 05:10:13 -- target/connect_stress.sh@28 -- # cat 00:14:36.770 05:10:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:36.770 05:10:13 -- target/connect_stress.sh@28 -- # cat 00:14:36.770 05:10:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:36.770 05:10:13 -- target/connect_stress.sh@28 -- # cat 00:14:36.770 05:10:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:36.770 05:10:13 -- target/connect_stress.sh@28 -- # cat 00:14:36.770 05:10:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:36.770 05:10:13 -- target/connect_stress.sh@28 -- # cat 00:14:36.770 05:10:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:36.770 05:10:13 -- target/connect_stress.sh@28 -- # cat 00:14:36.770 
05:10:13 -- target/connect_stress.sh@34 -- # kill -0 1843521 00:14:36.770 05:10:13 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.770 05:10:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:36.770 05:10:13 -- common/autotest_common.sh@10 -- # set +x 00:14:37.029 05:10:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:37.029 05:10:14 -- target/connect_stress.sh@34 -- # kill -0 1843521 00:14:37.029 05:10:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.029 05:10:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:37.029 05:10:14 -- common/autotest_common.sh@10 -- # set +x 00:14:37.289 05:10:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:37.289 05:10:14 -- target/connect_stress.sh@34 -- # kill -0 1843521 00:14:37.289 05:10:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.289 05:10:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:37.289 05:10:14 -- common/autotest_common.sh@10 -- # set +x 00:14:37.548 05:10:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:37.548 05:10:14 -- target/connect_stress.sh@34 -- # kill -0 1843521 00:14:37.548 05:10:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.548 05:10:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:37.548 05:10:14 -- common/autotest_common.sh@10 -- # set +x 00:14:37.807 05:10:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:37.807 05:10:15 -- target/connect_stress.sh@34 -- # kill -0 1843521 00:14:37.807 05:10:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.065 05:10:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:38.065 05:10:15 -- common/autotest_common.sh@10 -- # set +x 00:14:38.323 05:10:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:38.323 05:10:15 -- target/connect_stress.sh@34 -- # kill -0 1843521 00:14:38.323 05:10:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.323 05:10:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:38.323 05:10:15 -- 
common/autotest_common.sh@10 -- # set +x 00:14:38.581 05:10:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:38.581 05:10:15 -- target/connect_stress.sh@34 -- # kill -0 1843521 00:14:38.581 05:10:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.581 05:10:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:38.581 05:10:15 -- common/autotest_common.sh@10 -- # set +x 00:14:38.840 05:10:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:38.840 05:10:16 -- target/connect_stress.sh@34 -- # kill -0 1843521 00:14:38.840 05:10:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.840 05:10:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:38.840 05:10:16 -- common/autotest_common.sh@10 -- # set +x 00:14:39.100 05:10:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:39.100 05:10:16 -- target/connect_stress.sh@34 -- # kill -0 1843521 00:14:39.100 05:10:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.100 05:10:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:39.100 05:10:16 -- common/autotest_common.sh@10 -- # set +x 00:14:39.669 05:10:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:39.669 05:10:16 -- target/connect_stress.sh@34 -- # kill -0 1843521 00:14:39.669 05:10:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.669 05:10:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:39.669 05:10:16 -- common/autotest_common.sh@10 -- # set +x 00:14:39.931 05:10:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:39.931 05:10:17 -- target/connect_stress.sh@34 -- # kill -0 1843521 00:14:39.931 05:10:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.931 05:10:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:39.931 05:10:17 -- common/autotest_common.sh@10 -- # set +x 00:14:40.191 05:10:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:40.191 05:10:17 -- target/connect_stress.sh@34 -- # kill -0 1843521 00:14:40.191 05:10:17 -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.191 05:10:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:40.191 05:10:17 -- common/autotest_common.sh@10 -- # set +x 00:14:40.452 05:10:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:40.452 05:10:17 -- target/connect_stress.sh@34 -- # kill -0 1843521 00:14:40.452 05:10:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.452 05:10:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:40.452 05:10:17 -- common/autotest_common.sh@10 -- # set +x 00:14:40.711 05:10:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:40.711 05:10:17 -- target/connect_stress.sh@34 -- # kill -0 1843521 00:14:40.711 05:10:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.711 05:10:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:40.711 05:10:17 -- common/autotest_common.sh@10 -- # set +x 00:14:41.279 05:10:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:41.279 05:10:18 -- target/connect_stress.sh@34 -- # kill -0 1843521 00:14:41.279 05:10:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.279 05:10:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:41.279 05:10:18 -- common/autotest_common.sh@10 -- # set +x 00:14:41.540 05:10:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:41.540 05:10:18 -- target/connect_stress.sh@34 -- # kill -0 1843521 00:14:41.540 05:10:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.540 05:10:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:41.540 05:10:18 -- common/autotest_common.sh@10 -- # set +x 00:14:41.799 05:10:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:41.799 05:10:18 -- target/connect_stress.sh@34 -- # kill -0 1843521 00:14:41.799 05:10:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.799 05:10:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:41.799 05:10:18 -- common/autotest_common.sh@10 -- # set +x 00:14:42.057 05:10:19 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:42.057 05:10:19 -- target/connect_stress.sh@34 -- # kill -0 1843521 00:14:42.057 05:10:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:42.057 05:10:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:42.057 05:10:19 -- common/autotest_common.sh@10 -- # set +x 00:14:42.316 05:10:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:42.316 05:10:19 -- target/connect_stress.sh@34 -- # kill -0 1843521 00:14:42.316 05:10:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:42.316 05:10:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:42.316 05:10:19 -- common/autotest_common.sh@10 -- # set +x 00:14:42.886 05:10:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:42.886 05:10:19 -- target/connect_stress.sh@34 -- # kill -0 1843521 00:14:42.886 05:10:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:42.886 05:10:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:42.886 05:10:19 -- common/autotest_common.sh@10 -- # set +x 00:14:43.143 05:10:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:43.143 05:10:20 -- target/connect_stress.sh@34 -- # kill -0 1843521 00:14:43.143 05:10:20 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.143 05:10:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:43.143 05:10:20 -- common/autotest_common.sh@10 -- # set +x 00:14:43.402 05:10:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:43.402 05:10:20 -- target/connect_stress.sh@34 -- # kill -0 1843521 00:14:43.402 05:10:20 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.402 05:10:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:43.402 05:10:20 -- common/autotest_common.sh@10 -- # set +x 00:14:43.660 05:10:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:43.660 05:10:20 -- target/connect_stress.sh@34 -- # kill -0 1843521 00:14:43.660 05:10:20 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.660 05:10:20 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:14:43.660 05:10:20 -- common/autotest_common.sh@10 -- # set +x 00:14:43.918 05:10:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:43.918 05:10:21 -- target/connect_stress.sh@34 -- # kill -0 1843521 00:14:43.918 05:10:21 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.918 05:10:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:43.918 05:10:21 -- common/autotest_common.sh@10 -- # set +x 00:14:44.486 05:10:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:44.486 05:10:21 -- target/connect_stress.sh@34 -- # kill -0 1843521 00:14:44.486 05:10:21 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:44.486 05:10:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:44.486 05:10:21 -- common/autotest_common.sh@10 -- # set +x 00:14:44.746 05:10:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:44.746 05:10:21 -- target/connect_stress.sh@34 -- # kill -0 1843521 00:14:44.746 05:10:21 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:44.746 05:10:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:44.746 05:10:21 -- common/autotest_common.sh@10 -- # set +x 00:14:45.004 05:10:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:45.004 05:10:22 -- target/connect_stress.sh@34 -- # kill -0 1843521 00:14:45.004 05:10:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:45.004 05:10:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:45.004 05:10:22 -- common/autotest_common.sh@10 -- # set +x 00:14:45.262 05:10:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:45.262 05:10:22 -- target/connect_stress.sh@34 -- # kill -0 1843521 00:14:45.262 05:10:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:45.262 05:10:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:45.262 05:10:22 -- common/autotest_common.sh@10 -- # set +x 00:14:45.521 05:10:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:45.521 05:10:22 -- 
target/connect_stress.sh@34 -- # kill -0 1843521 00:14:45.521 05:10:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:45.521 05:10:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:45.521 05:10:22 -- common/autotest_common.sh@10 -- # set +x 00:14:46.087 05:10:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:46.087 05:10:23 -- target/connect_stress.sh@34 -- # kill -0 1843521 00:14:46.087 05:10:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.087 05:10:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:46.087 05:10:23 -- common/autotest_common.sh@10 -- # set +x 00:14:46.347 05:10:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:46.347 05:10:23 -- target/connect_stress.sh@34 -- # kill -0 1843521 00:14:46.347 05:10:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.347 05:10:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:46.347 05:10:23 -- common/autotest_common.sh@10 -- # set +x 00:14:46.605 05:10:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:46.605 05:10:23 -- target/connect_stress.sh@34 -- # kill -0 1843521 00:14:46.605 05:10:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.605 05:10:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:46.605 05:10:23 -- common/autotest_common.sh@10 -- # set +x 00:14:46.862 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:46.862 05:10:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:46.862 05:10:24 -- target/connect_stress.sh@34 -- # kill -0 1843521 00:14:46.862 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1843521) - No such process 00:14:46.862 05:10:24 -- target/connect_stress.sh@38 -- # wait 1843521 00:14:46.862 05:10:24 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:46.862 05:10:24 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 
00:14:46.862 05:10:24 -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:46.862 05:10:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:46.862 05:10:24 -- nvmf/common.sh@117 -- # sync 00:14:46.862 05:10:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:46.862 05:10:24 -- nvmf/common.sh@120 -- # set +e 00:14:46.862 05:10:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:46.862 05:10:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:46.862 rmmod nvme_tcp 00:14:46.862 rmmod nvme_fabrics 00:14:46.862 rmmod nvme_keyring 00:14:47.121 05:10:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:47.121 05:10:24 -- nvmf/common.sh@124 -- # set -e 00:14:47.121 05:10:24 -- nvmf/common.sh@125 -- # return 0 00:14:47.121 05:10:24 -- nvmf/common.sh@478 -- # '[' -n 1843379 ']' 00:14:47.121 05:10:24 -- nvmf/common.sh@479 -- # killprocess 1843379 00:14:47.121 05:10:24 -- common/autotest_common.sh@936 -- # '[' -z 1843379 ']' 00:14:47.121 05:10:24 -- common/autotest_common.sh@940 -- # kill -0 1843379 00:14:47.121 05:10:24 -- common/autotest_common.sh@941 -- # uname 00:14:47.121 05:10:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:47.121 05:10:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1843379 00:14:47.121 05:10:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:47.121 05:10:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:47.121 05:10:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1843379' 00:14:47.121 killing process with pid 1843379 00:14:47.121 05:10:24 -- common/autotest_common.sh@955 -- # kill 1843379 00:14:47.121 05:10:24 -- common/autotest_common.sh@960 -- # wait 1843379 00:14:47.381 05:10:24 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:47.381 05:10:24 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:47.381 05:10:24 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:47.381 05:10:24 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:14:47.381 05:10:24 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:47.381 05:10:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.381 05:10:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:47.381 05:10:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:49.295 05:10:26 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:49.295 00:14:49.295 real 0m15.249s 00:14:49.295 user 0m38.251s 00:14:49.295 sys 0m5.942s 00:14:49.295 05:10:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:49.295 05:10:26 -- common/autotest_common.sh@10 -- # set +x 00:14:49.295 ************************************ 00:14:49.295 END TEST nvmf_connect_stress 00:14:49.295 ************************************ 00:14:49.295 05:10:26 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:49.295 05:10:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:49.295 05:10:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:49.295 05:10:26 -- common/autotest_common.sh@10 -- # set +x 00:14:49.553 ************************************ 00:14:49.553 START TEST nvmf_fused_ordering 00:14:49.553 ************************************ 00:14:49.553 05:10:26 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:49.553 * Looking for test storage... 
00:14:49.553 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:49.553 05:10:26 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:49.553 05:10:26 -- nvmf/common.sh@7 -- # uname -s 00:14:49.553 05:10:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:49.553 05:10:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:49.553 05:10:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:49.553 05:10:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:49.553 05:10:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:49.553 05:10:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:49.553 05:10:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:49.553 05:10:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:49.553 05:10:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:49.553 05:10:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:49.553 05:10:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:49.553 05:10:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:49.553 05:10:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:49.553 05:10:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:49.553 05:10:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:49.553 05:10:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:49.553 05:10:26 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:49.553 05:10:26 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:49.554 05:10:26 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:49.554 05:10:26 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:49.554 05:10:26 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.554 05:10:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.554 05:10:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.554 05:10:26 -- paths/export.sh@5 -- # export PATH 00:14:49.554 05:10:26 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.554 05:10:26 -- nvmf/common.sh@47 -- # : 0 00:14:49.554 05:10:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:49.554 05:10:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:49.554 05:10:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:49.554 05:10:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:49.554 05:10:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:49.554 05:10:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:49.554 05:10:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:49.554 05:10:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:49.554 05:10:26 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:49.554 05:10:26 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:49.554 05:10:26 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:49.554 05:10:26 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:49.554 05:10:26 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:49.554 05:10:26 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:49.554 05:10:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:49.554 05:10:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:49.554 05:10:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:49.554 05:10:26 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:49.554 05:10:26 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:49.554 05:10:26 -- 
nvmf/common.sh@285 -- # xtrace_disable 00:14:49.554 05:10:26 -- common/autotest_common.sh@10 -- # set +x 00:14:51.457 05:10:28 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:51.457 05:10:28 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:51.457 05:10:28 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:51.458 05:10:28 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:51.458 05:10:28 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:51.458 05:10:28 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:51.458 05:10:28 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:51.458 05:10:28 -- nvmf/common.sh@295 -- # net_devs=() 00:14:51.458 05:10:28 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:51.458 05:10:28 -- nvmf/common.sh@296 -- # e810=() 00:14:51.458 05:10:28 -- nvmf/common.sh@296 -- # local -ga e810 00:14:51.458 05:10:28 -- nvmf/common.sh@297 -- # x722=() 00:14:51.458 05:10:28 -- nvmf/common.sh@297 -- # local -ga x722 00:14:51.458 05:10:28 -- nvmf/common.sh@298 -- # mlx=() 00:14:51.458 05:10:28 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:51.458 05:10:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:51.458 05:10:28 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:51.458 05:10:28 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:51.458 05:10:28 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:51.458 05:10:28 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:51.458 05:10:28 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:51.458 05:10:28 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:51.458 05:10:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:51.458 05:10:28 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:51.458 05:10:28 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:51.458 05:10:28 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:51.458 05:10:28 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:51.458 05:10:28 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:51.458 05:10:28 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:51.458 05:10:28 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:51.458 05:10:28 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:51.458 05:10:28 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:51.458 05:10:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:51.458 05:10:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:51.458 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:51.458 05:10:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:51.458 05:10:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:51.458 05:10:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:51.458 05:10:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:51.458 05:10:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:51.458 05:10:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:51.458 05:10:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:51.458 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:51.458 05:10:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:51.458 05:10:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:51.458 05:10:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:51.458 05:10:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:51.458 05:10:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:51.458 05:10:28 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:51.458 05:10:28 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:51.458 05:10:28 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:51.458 05:10:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:14:51.458 05:10:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:51.458 05:10:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:51.458 05:10:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:51.458 05:10:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:51.458 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:51.458 05:10:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:51.458 05:10:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:51.458 05:10:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:51.458 05:10:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:51.458 05:10:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:51.458 05:10:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:51.458 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:51.458 05:10:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:51.458 05:10:28 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:51.458 05:10:28 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:51.458 05:10:28 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:51.458 05:10:28 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:51.458 05:10:28 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:51.458 05:10:28 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:51.458 05:10:28 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:51.458 05:10:28 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:51.458 05:10:28 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:51.458 05:10:28 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:51.458 05:10:28 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:51.458 05:10:28 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:51.458 05:10:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:14:51.458 05:10:28 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:51.458 05:10:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:51.458 05:10:28 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:51.458 05:10:28 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:51.458 05:10:28 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:51.458 05:10:28 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:51.458 05:10:28 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:51.458 05:10:28 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:51.458 05:10:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:51.458 05:10:28 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:51.458 05:10:28 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:51.458 05:10:28 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:51.458 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:51.458 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:14:51.458 00:14:51.458 --- 10.0.0.2 ping statistics --- 00:14:51.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.458 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:14:51.458 05:10:28 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:51.458 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:51.458 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:14:51.458 00:14:51.458 --- 10.0.0.1 ping statistics --- 00:14:51.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.458 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:14:51.458 05:10:28 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:51.458 05:10:28 -- nvmf/common.sh@411 -- # return 0 00:14:51.458 05:10:28 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:51.458 05:10:28 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:51.458 05:10:28 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:51.458 05:10:28 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:51.458 05:10:28 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:51.458 05:10:28 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:51.458 05:10:28 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:51.717 05:10:28 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:51.717 05:10:28 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:51.717 05:10:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:51.717 05:10:28 -- common/autotest_common.sh@10 -- # set +x 00:14:51.717 05:10:28 -- nvmf/common.sh@470 -- # nvmfpid=1846675 00:14:51.717 05:10:28 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:51.717 05:10:28 -- nvmf/common.sh@471 -- # waitforlisten 1846675 00:14:51.717 05:10:28 -- common/autotest_common.sh@817 -- # '[' -z 1846675 ']' 00:14:51.717 05:10:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.717 05:10:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:51.717 05:10:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:51.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:51.717 05:10:28 -- common/autotest_common.sh@826 -- # xtrace_disable
00:14:51.717 05:10:28 -- common/autotest_common.sh@10 -- # set +x
00:14:51.717 [2024-04-24 05:10:28.783794] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization...
00:14:51.717 [2024-04-24 05:10:28.783869] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:51.717 EAL: No free 2048 kB hugepages reported on node 1
00:14:51.717 [2024-04-24 05:10:28.820618] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:14:51.717 [2024-04-24 05:10:28.852654] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:51.717 [2024-04-24 05:10:28.940330] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:14:51.717 [2024-04-24 05:10:28.940395] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:14:51.717 [2024-04-24 05:10:28.940418] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:14:51.717 [2024-04-24 05:10:28.940433] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:14:51.717 [2024-04-24 05:10:28.940445] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:14:51.717 [2024-04-24 05:10:28.940479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:14:51.976 05:10:29 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:14:51.976 05:10:29 -- common/autotest_common.sh@850 -- # return 0
00:14:51.976 05:10:29 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:14:51.976 05:10:29 -- common/autotest_common.sh@716 -- # xtrace_disable
00:14:51.976 05:10:29 -- common/autotest_common.sh@10 -- # set +x
00:14:51.976 05:10:29 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:14:51.976 05:10:29 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:14:51.976 05:10:29 -- common/autotest_common.sh@549 -- # xtrace_disable
00:14:51.976 05:10:29 -- common/autotest_common.sh@10 -- # set +x
00:14:51.976 [2024-04-24 05:10:29.094333] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:14:51.976 05:10:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:14:51.976 05:10:29 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:14:51.976 05:10:29 -- common/autotest_common.sh@549 -- # xtrace_disable
00:14:51.976 05:10:29 -- common/autotest_common.sh@10 -- # set +x
00:14:51.976 05:10:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:14:51.976 05:10:29 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:14:51.977 05:10:29 -- common/autotest_common.sh@549 -- # xtrace_disable
00:14:51.977 05:10:29 -- common/autotest_common.sh@10 -- # set +x
00:14:51.977 [2024-04-24 05:10:29.110525] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:51.977 05:10:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:14:51.977 05:10:29 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:14:51.977 05:10:29 -- common/autotest_common.sh@549 -- # xtrace_disable
00:14:51.977 05:10:29 -- common/autotest_common.sh@10 -- # set +x
00:14:51.977 NULL1
00:14:51.977 05:10:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:14:51.977 05:10:29 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine
00:14:51.977 05:10:29 -- common/autotest_common.sh@549 -- # xtrace_disable
00:14:51.977 05:10:29 -- common/autotest_common.sh@10 -- # set +x
00:14:51.977 05:10:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:14:51.977 05:10:29 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:14:51.977 05:10:29 -- common/autotest_common.sh@549 -- # xtrace_disable
00:14:51.977 05:10:29 -- common/autotest_common.sh@10 -- # set +x
00:14:51.977 05:10:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:14:51.977 05:10:29 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:14:51.977 [2024-04-24 05:10:29.155871] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization...
00:14:51.977 [2024-04-24 05:10:29.155916] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1846696 ]
00:14:51.977 EAL: No free 2048 kB hugepages reported on node 1
00:14:51.977 [2024-04-24 05:10:29.192492] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
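The rpc_cmd calls traced above map to SPDK's scripts/rpc.py against /var/tmp/spdk.sock. The sketch below is a dry-run rendering of that configuration sequence: the `rpc()` stand-in is a hypothetical helper added here that only prints the equivalent rpc.py invocation (swap the echo for the real rpc.py to configure a live target); RPC names and arguments are copied verbatim from the trace.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target setup logged by fused_ordering.sh@15-20.
set -euo pipefail

# Hypothetical stand-in: prints the rpc.py call instead of issuing it.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192     # options copied from the trace
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512             # 1000 MB null bdev, 512 B blocks
rpc bdev_wait_for_examine
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
```

Once the null bdev is exposed as a namespace, the fused_ordering binary connects with the transport ID string shown in the trace (`trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1`).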
00:14:52.544 Attached to nqn.2016-06.io.spdk:cnode1
00:14:52.544 Namespace ID: 1 size: 1GB
00:14:52.544 fused_ordering(0)
[fused_ordering(1) through fused_ordering(967) logged sequentially from 00:14:52.544 to 00:14:54.814; repetitive counter lines elided]
00:14:54.814 fused_ordering(968) 00:14:54.814 fused_ordering(969) 00:14:54.814 fused_ordering(970) 00:14:54.814 fused_ordering(971) 00:14:54.814 fused_ordering(972) 00:14:54.814 fused_ordering(973) 00:14:54.814 fused_ordering(974) 00:14:54.814 fused_ordering(975) 00:14:54.814 fused_ordering(976) 00:14:54.814 fused_ordering(977) 00:14:54.814 fused_ordering(978) 00:14:54.814 fused_ordering(979) 00:14:54.814 fused_ordering(980) 00:14:54.814 fused_ordering(981) 00:14:54.814 fused_ordering(982) 00:14:54.814 fused_ordering(983) 00:14:54.814 fused_ordering(984) 00:14:54.814 fused_ordering(985) 00:14:54.814 fused_ordering(986) 00:14:54.814 fused_ordering(987) 00:14:54.814 fused_ordering(988) 00:14:54.814 fused_ordering(989) 00:14:54.814 fused_ordering(990) 00:14:54.814 fused_ordering(991) 00:14:54.814 fused_ordering(992) 00:14:54.814 fused_ordering(993) 00:14:54.814 fused_ordering(994) 00:14:54.814 fused_ordering(995) 00:14:54.814 fused_ordering(996) 00:14:54.814 fused_ordering(997) 00:14:54.814 fused_ordering(998) 00:14:54.814 fused_ordering(999) 00:14:54.814 fused_ordering(1000) 00:14:54.814 fused_ordering(1001) 00:14:54.814 fused_ordering(1002) 00:14:54.814 fused_ordering(1003) 00:14:54.814 fused_ordering(1004) 00:14:54.814 fused_ordering(1005) 00:14:54.814 fused_ordering(1006) 00:14:54.814 fused_ordering(1007) 00:14:54.814 fused_ordering(1008) 00:14:54.814 fused_ordering(1009) 00:14:54.814 fused_ordering(1010) 00:14:54.814 fused_ordering(1011) 00:14:54.814 fused_ordering(1012) 00:14:54.814 fused_ordering(1013) 00:14:54.814 fused_ordering(1014) 00:14:54.814 fused_ordering(1015) 00:14:54.814 fused_ordering(1016) 00:14:54.814 fused_ordering(1017) 00:14:54.814 fused_ordering(1018) 00:14:54.814 fused_ordering(1019) 00:14:54.814 fused_ordering(1020) 00:14:54.814 fused_ordering(1021) 00:14:54.814 fused_ordering(1022) 00:14:54.814 fused_ordering(1023) 00:14:55.073 05:10:32 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:55.073 05:10:32 -- 
target/fused_ordering.sh@25 -- # nvmftestfini 00:14:55.073 05:10:32 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:55.073 05:10:32 -- nvmf/common.sh@117 -- # sync 00:14:55.073 05:10:32 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:55.073 05:10:32 -- nvmf/common.sh@120 -- # set +e 00:14:55.073 05:10:32 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:55.073 05:10:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:55.073 rmmod nvme_tcp 00:14:55.073 rmmod nvme_fabrics 00:14:55.073 rmmod nvme_keyring 00:14:55.073 05:10:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:55.073 05:10:32 -- nvmf/common.sh@124 -- # set -e 00:14:55.073 05:10:32 -- nvmf/common.sh@125 -- # return 0 00:14:55.073 05:10:32 -- nvmf/common.sh@478 -- # '[' -n 1846675 ']' 00:14:55.073 05:10:32 -- nvmf/common.sh@479 -- # killprocess 1846675 00:14:55.073 05:10:32 -- common/autotest_common.sh@936 -- # '[' -z 1846675 ']' 00:14:55.073 05:10:32 -- common/autotest_common.sh@940 -- # kill -0 1846675 00:14:55.073 05:10:32 -- common/autotest_common.sh@941 -- # uname 00:14:55.073 05:10:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:55.073 05:10:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1846675 00:14:55.073 05:10:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:55.073 05:10:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:55.073 05:10:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1846675' 00:14:55.073 killing process with pid 1846675 00:14:55.073 05:10:32 -- common/autotest_common.sh@955 -- # kill 1846675 00:14:55.073 05:10:32 -- common/autotest_common.sh@960 -- # wait 1846675 00:14:55.331 05:10:32 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:55.331 05:10:32 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:55.331 05:10:32 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:55.331 05:10:32 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:14:55.331 05:10:32 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:55.331 05:10:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.331 05:10:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:55.331 05:10:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:57.232 05:10:34 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:57.232 00:14:57.232 real 0m7.891s 00:14:57.232 user 0m5.591s 00:14:57.232 sys 0m3.498s 00:14:57.232 05:10:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:57.232 05:10:34 -- common/autotest_common.sh@10 -- # set +x 00:14:57.232 ************************************ 00:14:57.232 END TEST nvmf_fused_ordering 00:14:57.232 ************************************ 00:14:57.232 05:10:34 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:57.232 05:10:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:57.232 05:10:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:57.232 05:10:34 -- common/autotest_common.sh@10 -- # set +x 00:14:57.491 ************************************ 00:14:57.491 START TEST nvmf_delete_subsystem 00:14:57.491 ************************************ 00:14:57.491 05:10:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:57.491 * Looking for test storage... 
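The nvmftestfini teardown traced just above unloads the NVMe/TCP kernel modules, kills the nvmf target process, removes the SPDK network namespace, and flushes the test interface. A minimal sketch of that sequence, reconstructed from this log; the pid (1846675) and interface names are the values from this particular run and will differ elsewhere, and the `ip netns del` step is an assumption about what `_remove_spdk_ns` does:

```shell
# Condensed reconstruction of the nvmftestfini steps traced in this log.
# Pid 1846675 and interfaces cvl_0_1 / cvl_0_0_ns_spdk are this run's values.
NVMF_PID=1846675
INITIATOR_IF=cvl_0_1
NS=cvl_0_0_ns_spdk

teardown_cmds() {
  # Print the teardown steps instead of running them, so the sequence can be
  # inspected without root; pipe the output to "sh -x" to execute for real.
  cat <<EOF
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
modprobe -v -r nvme-keyring
kill $NVMF_PID
ip netns del $NS
ip -4 addr flush $INITIATOR_IF
EOF
}

teardown_cmds
```

Emitting the commands rather than executing them mirrors how the harness wraps each step in xtrace: the log above is exactly this kind of command stream, interleaved with timestamps.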
00:14:57.491 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:57.491 05:10:34 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:57.491 05:10:34 -- nvmf/common.sh@7 -- # uname -s 00:14:57.491 05:10:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:57.491 05:10:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:57.491 05:10:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:57.491 05:10:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:57.491 05:10:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:57.491 05:10:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:57.491 05:10:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:57.491 05:10:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:57.491 05:10:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:57.491 05:10:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:57.491 05:10:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:57.491 05:10:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:57.491 05:10:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:57.491 05:10:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:57.491 05:10:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:57.492 05:10:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:57.492 05:10:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:57.492 05:10:34 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:57.492 05:10:34 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:57.492 05:10:34 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:57.492 05:10:34 -- 
paths/export.sh@2 through paths/export.sh@6: PATH exported as /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin [duplicate entries produced by repeated prepends condensed] 00:14:57.492 05:10:34 -- nvmf/common.sh@47 -- # : 0 00:14:57.492 05:10:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:57.492 05:10:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:57.492 05:10:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:57.492 05:10:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:57.492 05:10:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:57.492 05:10:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:57.492 05:10:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:57.492 05:10:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:57.492 05:10:34 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:57.492 05:10:34 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:57.492 05:10:34 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:57.492 05:10:34 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:57.492 05:10:34 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:57.492 05:10:34 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:57.492 05:10:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:57.492 05:10:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:57.492 05:10:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:57.492 05:10:34 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:57.492 05:10:34 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:57.492 05:10:34
-- nvmf/common.sh@285 -- # xtrace_disable 00:14:57.492 05:10:34 -- common/autotest_common.sh@10 -- # set +x 00:15:00.025 05:10:36 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:00.025 05:10:36 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:00.025 05:10:36 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:00.025 05:10:36 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:00.025 05:10:36 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:00.025 05:10:36 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:00.025 05:10:36 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:00.025 05:10:36 -- nvmf/common.sh@295 -- # net_devs=() 00:15:00.025 05:10:36 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:00.025 05:10:36 -- nvmf/common.sh@296 -- # e810=() 00:15:00.025 05:10:36 -- nvmf/common.sh@296 -- # local -ga e810 00:15:00.025 05:10:36 -- nvmf/common.sh@297 -- # x722=() 00:15:00.025 05:10:36 -- nvmf/common.sh@297 -- # local -ga x722 00:15:00.025 05:10:36 -- nvmf/common.sh@298 -- # mlx=() 00:15:00.025 05:10:36 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:00.025 05:10:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:00.025 05:10:36 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:00.025 05:10:36 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:00.025 05:10:36 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:00.025 05:10:36 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:00.025 05:10:36 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:00.025 05:10:36 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:00.025 05:10:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:00.025 05:10:36 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:00.025 05:10:36 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:00.025 05:10:36 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:00.025 05:10:36 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:00.025 05:10:36 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:00.025 05:10:36 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:00.025 05:10:36 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:00.025 05:10:36 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:00.025 05:10:36 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:00.025 05:10:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:00.025 05:10:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:00.025 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:00.025 05:10:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:00.025 05:10:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:00.025 05:10:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:00.025 05:10:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:00.025 05:10:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:00.025 05:10:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:00.025 05:10:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:00.025 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:00.025 05:10:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:00.025 05:10:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:00.025 05:10:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:00.025 05:10:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:00.025 05:10:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:00.025 05:10:36 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:00.025 05:10:36 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:00.025 05:10:36 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:00.025 05:10:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:15:00.025 05:10:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:00.025 05:10:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:00.025 05:10:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:00.025 05:10:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:00.025 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:00.025 05:10:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:00.025 05:10:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:00.025 05:10:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:00.025 05:10:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:00.025 05:10:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:00.025 05:10:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:00.025 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:00.025 05:10:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:00.025 05:10:36 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:00.025 05:10:36 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:00.025 05:10:36 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:00.025 05:10:36 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:00.025 05:10:36 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:00.025 05:10:36 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:00.025 05:10:36 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:00.025 05:10:36 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:00.025 05:10:36 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:00.025 05:10:36 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:00.025 05:10:36 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:00.025 05:10:36 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:00.025 05:10:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:15:00.025 05:10:36 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:00.025 05:10:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:00.025 05:10:36 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:00.025 05:10:36 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:00.025 05:10:36 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:00.025 05:10:36 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:00.025 05:10:36 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:00.025 05:10:36 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:00.025 05:10:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:00.025 05:10:36 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:00.025 05:10:36 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:00.025 05:10:36 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:00.025 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:00.025 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:15:00.025 00:15:00.025 --- 10.0.0.2 ping statistics --- 00:15:00.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.025 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:15:00.025 05:10:36 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:00.025 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:00.025 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:15:00.025 00:15:00.025 --- 10.0.0.1 ping statistics --- 00:15:00.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.025 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:15:00.025 05:10:36 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:00.025 05:10:36 -- nvmf/common.sh@411 -- # return 0 00:15:00.025 05:10:36 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:00.026 05:10:36 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:00.026 05:10:36 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:00.026 05:10:36 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:00.026 05:10:36 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:00.026 05:10:36 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:00.026 05:10:36 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:00.026 05:10:36 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:15:00.026 05:10:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:00.026 05:10:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:00.026 05:10:36 -- common/autotest_common.sh@10 -- # set +x 00:15:00.026 05:10:36 -- nvmf/common.sh@470 -- # nvmfpid=1849027 00:15:00.026 05:10:36 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:00.026 05:10:36 -- nvmf/common.sh@471 -- # waitforlisten 1849027 00:15:00.026 05:10:36 -- common/autotest_common.sh@817 -- # '[' -z 1849027 ']' 00:15:00.026 05:10:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.026 05:10:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:00.026 05:10:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
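The nvmf_tcp_init trace above moves one port of the detected NIC pair into a private network namespace so the target (10.0.0.2) and the initiator (10.0.0.1) can exchange real TCP traffic on a single host, then verifies reachability in both directions with ping. A sketch of that wiring; the interface names cvl_0_0/cvl_0_1 and the namespace name are specific to this machine:

```shell
# Reconstruction of the namespace bring-up traced in this log.
TARGET_IF=cvl_0_0       # port handed to the target, moved into the namespace
INITIATOR_IF=cvl_0_1    # port left in the root namespace for the initiator
NS=cvl_0_0_ns_spdk

wire_cmds() {
  # Emit the bring-up steps rather than executing them (running needs root).
  cat <<EOF
ip netns add $NS
ip link set $TARGET_IF netns $NS
ip addr add 10.0.0.1/24 dev $INITIATOR_IF
ip netns exec $NS ip addr add 10.0.0.2/24 dev $TARGET_IF
ip link set $INITIATOR_IF up
ip netns exec $NS ip link set $TARGET_IF up
ip netns exec $NS ip link set lo up
iptables -I INPUT 1 -i $INITIATOR_IF -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec $NS ping -c 1 10.0.0.1
EOF
}

wire_cmds
```

The two ping checks correspond directly to the sub-millisecond round trips logged above; only after both succeed does the harness prefix `NVMF_APP` with `ip netns exec` so the target runs inside the namespace.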
00:15:00.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.026 05:10:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:00.026 05:10:36 -- common/autotest_common.sh@10 -- # set +x 00:15:00.026 [2024-04-24 05:10:36.915663] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:15:00.026 [2024-04-24 05:10:36.915726] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:00.026 EAL: No free 2048 kB hugepages reported on node 1 00:15:00.026 [2024-04-24 05:10:36.950960] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:00.026 [2024-04-24 05:10:36.983391] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:00.026 [2024-04-24 05:10:37.081456] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:00.026 [2024-04-24 05:10:37.081522] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:00.026 [2024-04-24 05:10:37.081539] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:00.026 [2024-04-24 05:10:37.081553] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:00.026 [2024-04-24 05:10:37.081565] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:00.026 [2024-04-24 05:10:37.084652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:00.026 [2024-04-24 05:10:37.084665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.026 05:10:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:00.026 05:10:37 -- common/autotest_common.sh@850 -- # return 0 00:15:00.026 05:10:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:00.026 05:10:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:00.026 05:10:37 -- common/autotest_common.sh@10 -- # set +x 00:15:00.026 05:10:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:00.026 05:10:37 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:00.026 05:10:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:00.026 05:10:37 -- common/autotest_common.sh@10 -- # set +x 00:15:00.026 [2024-04-24 05:10:37.232154] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:00.026 05:10:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:00.026 05:10:37 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:00.026 05:10:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:00.026 05:10:37 -- common/autotest_common.sh@10 -- # set +x 00:15:00.026 05:10:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:00.026 05:10:37 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:00.026 05:10:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:00.026 05:10:37 -- common/autotest_common.sh@10 -- # set +x 00:15:00.026 [2024-04-24 05:10:37.248421] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:00.026 05:10:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:15:00.026 05:10:37 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:00.026 05:10:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:00.026 05:10:37 -- common/autotest_common.sh@10 -- # set +x 00:15:00.026 NULL1 00:15:00.026 05:10:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:00.026 05:10:37 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:00.026 05:10:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:00.026 05:10:37 -- common/autotest_common.sh@10 -- # set +x 00:15:00.026 Delay0 00:15:00.026 05:10:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:00.026 05:10:37 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:00.026 05:10:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:00.026 05:10:37 -- common/autotest_common.sh@10 -- # set +x 00:15:00.026 05:10:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:00.026 05:10:37 -- target/delete_subsystem.sh@28 -- # perf_pid=1849051 00:15:00.026 05:10:37 -- target/delete_subsystem.sh@30 -- # sleep 2 00:15:00.026 05:10:37 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:00.284 EAL: No free 2048 kB hugepages reported on node 1 00:15:00.284 [2024-04-24 05:10:37.323123] subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
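The rpc_cmd calls traced above build the target that the delete test then tears down mid-I/O: a TCP transport, subsystem cnode1 with a listener on 10.0.0.2:4420, and a null bdev wrapped in a delay bdev so requests stay queued long enough to be in flight when the subsystem is deleted. Condensed as the equivalent rpc.py invocations; the `scripts/rpc.py` path is an assumption, while the parameters are the ones in this log:

```shell
# Condensed setup sequence for the delete_subsystem test, per this log.
RPC=scripts/rpc.py                     # assumed location of SPDK's rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

setup_cmds() {
  # Print the RPC sequence; pipe to "sh -x" against a running nvmf_tgt
  # to execute it for real.
  cat <<EOF
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512
$RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_subsystem_add_ns $NQN Delay0
EOF
}

setup_cmds
```

With the delay bdev holding every operation for 1000000 microseconds, the spdk_nvme_perf run started afterwards is guaranteed to have commands outstanding when `nvmf_delete_subsystem` fires, which is exactly the race the test exercises.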
00:15:02.183 05:10:39 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:02.183 05:10:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:02.183 05:10:39 -- common/autotest_common.sh@10 -- # set +x 00:15:02.183 [repeated in-flight I/O completions condensed: many 'Read completed with error (sct=0, sc=8)' and 'Write completed with error (sct=0, sc=8)' lines interleaved with 'starting I/O failed: -6', all at 00:15:02.183, as queued I/O is failed back while the subsystem is deleted] 00:15:02.183 starting
I/O failed: -6 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Write completed with error (sct=0, sc=8) 00:15:02.183 starting I/O failed: -6 00:15:02.183 [2024-04-24 05:10:39.376852] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x555e40 is same with the state(5) to be set 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Write completed with error (sct=0, sc=8) 00:15:02.183 starting I/O failed: -6 00:15:02.183 Write completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Write completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 starting I/O failed: -6 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Write completed with error (sct=0, sc=8) 00:15:02.183 Write completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Write completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Write completed with error (sct=0, sc=8) 
00:15:02.183 starting I/O failed: -6 00:15:02.183 Write completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Write completed with error (sct=0, sc=8) 00:15:02.183 Write completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Write completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 starting I/O failed: -6 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Write completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Write completed with error (sct=0, sc=8) 00:15:02.183 Write completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 starting I/O failed: -6 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Write completed with error (sct=0, sc=8) 00:15:02.183 starting I/O failed: -6 00:15:02.183 Write completed with error (sct=0, sc=8) 00:15:02.183 Write completed with error 
(sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Write completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Write completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Write completed with error (sct=0, sc=8) 00:15:02.183 Write completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Write completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Write completed with error (sct=0, sc=8) 00:15:02.183 starting I/O failed: -6 00:15:02.183 Write completed with error (sct=0, sc=8) 00:15:02.183 Write completed with error (sct=0, sc=8) 00:15:02.183 starting I/O failed: -6 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 starting I/O failed: -6 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 starting I/O failed: -6 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Write completed with error (sct=0, sc=8) 00:15:02.183 starting I/O failed: -6 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Write completed with error (sct=0, sc=8) 00:15:02.183 starting I/O failed: -6 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 starting I/O failed: -6 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 starting I/O failed: -6 00:15:02.183 Write completed with 
error (sct=0, sc=8) 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.183 starting I/O failed: -6 00:15:02.183 Read completed with error (sct=0, sc=8) 00:15:02.184 Read completed with error (sct=0, sc=8) 00:15:02.184 starting I/O failed: -6 00:15:02.184 Read completed with error (sct=0, sc=8) 00:15:02.184 Read completed with error (sct=0, sc=8) 00:15:02.184 starting I/O failed: -6 00:15:02.184 Write completed with error (sct=0, sc=8) 00:15:02.184 Read completed with error (sct=0, sc=8) 00:15:02.184 starting I/O failed: -6 00:15:02.184 Write completed with error (sct=0, sc=8) 00:15:02.184 Write completed with error (sct=0, sc=8) 00:15:02.184 starting I/O failed: -6 00:15:02.184 Write completed with error (sct=0, sc=8) 00:15:02.184 Read completed with error (sct=0, sc=8) 00:15:02.184 starting I/O failed: -6 00:15:02.184 Write completed with error (sct=0, sc=8) 00:15:02.184 Read completed with error (sct=0, sc=8) 00:15:02.184 starting I/O failed: -6 00:15:02.184 Read completed with error (sct=0, sc=8) 00:15:02.184 Write completed with error (sct=0, sc=8) 00:15:02.184 starting I/O failed: -6 00:15:02.184 Read completed with error (sct=0, sc=8) 00:15:02.184 Read completed with error (sct=0, sc=8) 00:15:02.184 starting I/O failed: -6 00:15:02.184 Read completed with error (sct=0, sc=8) 00:15:02.184 Read completed with error (sct=0, sc=8) 00:15:02.184 starting I/O failed: -6 00:15:02.184 Write completed with error (sct=0, sc=8) 00:15:02.184 Read completed with error (sct=0, sc=8) 00:15:02.184 starting I/O failed: -6 00:15:02.184 Read completed with error (sct=0, sc=8) 00:15:02.184 Read completed with error (sct=0, sc=8) 00:15:02.184 starting I/O failed: -6 00:15:02.184 Read completed with error (sct=0, sc=8) 00:15:02.184 Read completed with error (sct=0, sc=8) 00:15:02.184 starting I/O failed: -6 00:15:02.184 Read completed with error (sct=0, sc=8) 00:15:02.184 Read completed with error (sct=0, sc=8) 00:15:02.184 starting I/O failed: -6 00:15:02.184 Write 
completed with error (sct=0, sc=8) 00:15:02.184 Write completed with error (sct=0, sc=8) 00:15:02.184 starting I/O failed: -6 00:15:02.184 Read completed with error (sct=0, sc=8) 00:15:02.184 Write completed with error (sct=0, sc=8) 00:15:02.184 starting I/O failed: -6 00:15:02.184 Write completed with error (sct=0, sc=8) 00:15:02.184 Write completed with error (sct=0, sc=8) 00:15:02.184 starting I/O failed: -6 00:15:02.184 Write completed with error (sct=0, sc=8) 00:15:02.184 Read completed with error (sct=0, sc=8) 00:15:02.184 starting I/O failed: -6 00:15:02.184 Read completed with error (sct=0, sc=8) 00:15:02.184 Write completed with error (sct=0, sc=8) 00:15:02.184 starting I/O failed: -6 00:15:02.184 Read completed with error (sct=0, sc=8) 00:15:02.184 Read completed with error (sct=0, sc=8) 00:15:02.184 starting I/O failed: -6 00:15:02.184 Read completed with error (sct=0, sc=8) 00:15:02.184 Write completed with error (sct=0, sc=8) 00:15:02.184 starting I/O failed: -6 00:15:02.184 [2024-04-24 05:10:39.378070] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7feda000bf90 is same with the state(5) to be set 00:15:03.120 [2024-04-24 05:10:40.338698] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5551c0 is same with the state(5) to be set 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Write completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Write completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed 
with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Write completed with error (sct=0, sc=8) 00:15:03.120 Write completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Write completed with error (sct=0, sc=8) 00:15:03.120 Write completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Write completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Write completed with error (sct=0, sc=8) 00:15:03.120 Write completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Write completed with error (sct=0, sc=8) 00:15:03.120 [2024-04-24 05:10:40.378871] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7feda000c250 is same with the state(5) to be set 00:15:03.120 Write completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error 
(sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Write completed with error (sct=0, sc=8) 00:15:03.120 Write completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Write completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Write completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Write completed with error (sct=0, sc=8) 00:15:03.120 Write completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Write completed with error (sct=0, sc=8) 00:15:03.120 Write completed with error (sct=0, sc=8) 00:15:03.120 [2024-04-24 05:10:40.379080] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x54f9f0 is same with the state(5) to be set 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Write completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Write completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Write completed with error (sct=0, sc=8) 00:15:03.120 Write completed with error (sct=0, sc=8) 00:15:03.120 Write completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 
00:15:03.120 Write completed with error (sct=0, sc=8) 00:15:03.120 Write completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Write completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Write completed with error (sct=0, sc=8) 00:15:03.120 Write completed with error (sct=0, sc=8) 00:15:03.120 Write completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Write completed with error (sct=0, sc=8) 00:15:03.120 Write completed with error (sct=0, sc=8) 00:15:03.120 Write completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Write completed with error (sct=0, sc=8) 00:15:03.120 Write completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Write completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Write completed with error (sct=0, sc=8) 00:15:03.120 [2024-04-24 05:10:40.379394] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7feda0000c00 is same with the state(5) to be set 00:15:03.120 Write completed with error (sct=0, sc=8) 00:15:03.120 Write completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 
00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Write completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Write completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Write completed with error (sct=0, sc=8) 00:15:03.120 Write completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Read completed with error (sct=0, sc=8) 00:15:03.120 Write completed with error (sct=0, sc=8) 00:15:03.120 Write completed with error (sct=0, sc=8) 00:15:03.120 [2024-04-24 05:10:40.379584] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56dbf0 is same with the state(5) to be set 00:15:03.120 [2024-04-24 05:10:40.380857] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5551c0 (9): Bad file descriptor 00:15:03.120 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:15:03.120 05:10:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:03.120 05:10:40 -- target/delete_subsystem.sh@34 -- # delay=0 00:15:03.120 05:10:40 -- target/delete_subsystem.sh@35 -- # kill -0 1849051 00:15:03.120 05:10:40 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:15:03.120 Initializing NVMe Controllers 00:15:03.120 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:03.120 Controller IO queue size 128, less than required. 00:15:03.120 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:15:03.120 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:03.120 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:03.120 Initialization complete. Launching workers. 00:15:03.120 ======================================================== 00:15:03.120 Latency(us) 00:15:03.120 Device Information : IOPS MiB/s Average min max 00:15:03.120 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 164.61 0.08 907194.21 581.63 1014264.65 00:15:03.120 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 185.93 0.09 920416.63 694.39 2002424.30 00:15:03.120 ======================================================== 00:15:03.120 Total : 350.55 0.17 914207.52 581.63 2002424.30 00:15:03.120 00:15:03.689 05:10:40 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:15:03.689 05:10:40 -- target/delete_subsystem.sh@35 -- # kill -0 1849051 00:15:03.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1849051) - No such process 00:15:03.689 05:10:40 -- target/delete_subsystem.sh@45 -- # NOT wait 1849051 00:15:03.689 05:10:40 -- common/autotest_common.sh@638 -- # local es=0 00:15:03.689 05:10:40 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 1849051 00:15:03.689 05:10:40 -- common/autotest_common.sh@626 -- # local arg=wait 00:15:03.689 05:10:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:03.689 05:10:40 -- common/autotest_common.sh@630 -- # type -t wait 00:15:03.689 05:10:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:03.689 05:10:40 -- common/autotest_common.sh@641 -- # wait 1849051 00:15:03.689 05:10:40 -- common/autotest_common.sh@641 -- # es=1 00:15:03.689 05:10:40 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:03.689 05:10:40 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:03.689 05:10:40 -- 
common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:03.689 05:10:40 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:03.689 05:10:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:03.689 05:10:40 -- common/autotest_common.sh@10 -- # set +x 00:15:03.689 05:10:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:03.689 05:10:40 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:03.689 05:10:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:03.689 05:10:40 -- common/autotest_common.sh@10 -- # set +x 00:15:03.689 [2024-04-24 05:10:40.904176] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:03.689 05:10:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:03.689 05:10:40 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:03.689 05:10:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:03.689 05:10:40 -- common/autotest_common.sh@10 -- # set +x 00:15:03.689 05:10:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:03.689 05:10:40 -- target/delete_subsystem.sh@54 -- # perf_pid=1849572 00:15:03.689 05:10:40 -- target/delete_subsystem.sh@56 -- # delay=0 00:15:03.689 05:10:40 -- target/delete_subsystem.sh@57 -- # kill -0 1849572 00:15:03.689 05:10:40 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:03.689 05:10:40 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:03.689 EAL: No free 2048 kB hugepages reported on node 1 00:15:03.947 [2024-04-24 05:10:40.966339] subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on 
TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:04.205 05:10:41 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:04.205 05:10:41 -- target/delete_subsystem.sh@57 -- # kill -0 1849572 00:15:04.205 05:10:41 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:04.770 05:10:41 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:04.770 05:10:41 -- target/delete_subsystem.sh@57 -- # kill -0 1849572 00:15:04.770 05:10:41 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:05.335 05:10:42 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:05.335 05:10:42 -- target/delete_subsystem.sh@57 -- # kill -0 1849572 00:15:05.335 05:10:42 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:05.899 05:10:42 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:05.899 05:10:42 -- target/delete_subsystem.sh@57 -- # kill -0 1849572 00:15:05.899 05:10:42 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:06.467 05:10:43 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:06.467 05:10:43 -- target/delete_subsystem.sh@57 -- # kill -0 1849572 00:15:06.467 05:10:43 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:06.727 05:10:43 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:06.727 05:10:43 -- target/delete_subsystem.sh@57 -- # kill -0 1849572 00:15:06.727 05:10:43 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:06.984 Initializing NVMe Controllers 00:15:06.984 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:06.984 Controller IO queue size 128, less than required. 00:15:06.984 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:15:06.984 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:06.984 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:06.984 Initialization complete. Launching workers. 00:15:06.984 ======================================================== 00:15:06.984 Latency(us) 00:15:06.984 Device Information : IOPS MiB/s Average min max 00:15:06.984 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004665.51 1000247.79 1013321.24 00:15:06.984 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005867.48 1000205.55 1041805.00 00:15:06.984 ======================================================== 00:15:06.984 Total : 256.00 0.12 1005266.50 1000205.55 1041805.00 00:15:06.984 00:15:07.240 05:10:44 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:07.240 05:10:44 -- target/delete_subsystem.sh@57 -- # kill -0 1849572 00:15:07.240 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1849572) - No such process 00:15:07.240 05:10:44 -- target/delete_subsystem.sh@67 -- # wait 1849572 00:15:07.240 05:10:44 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:07.240 05:10:44 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:15:07.240 05:10:44 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:07.240 05:10:44 -- nvmf/common.sh@117 -- # sync 00:15:07.240 05:10:44 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:07.240 05:10:44 -- nvmf/common.sh@120 -- # set +e 00:15:07.240 05:10:44 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:07.240 05:10:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:07.240 rmmod nvme_tcp 00:15:07.240 rmmod nvme_fabrics 00:15:07.240 rmmod nvme_keyring 00:15:07.240 05:10:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:07.240 05:10:44 -- nvmf/common.sh@124 -- # set -e 00:15:07.240 05:10:44 -- 
nvmf/common.sh@125 -- # return 0 00:15:07.240 05:10:44 -- nvmf/common.sh@478 -- # '[' -n 1849027 ']' 00:15:07.240 05:10:44 -- nvmf/common.sh@479 -- # killprocess 1849027 00:15:07.241 05:10:44 -- common/autotest_common.sh@936 -- # '[' -z 1849027 ']' 00:15:07.241 05:10:44 -- common/autotest_common.sh@940 -- # kill -0 1849027 00:15:07.241 05:10:44 -- common/autotest_common.sh@941 -- # uname 00:15:07.241 05:10:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:07.241 05:10:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1849027 00:15:07.499 05:10:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:07.499 05:10:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:07.499 05:10:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1849027' 00:15:07.499 killing process with pid 1849027 00:15:07.499 05:10:44 -- common/autotest_common.sh@955 -- # kill 1849027 00:15:07.499 05:10:44 -- common/autotest_common.sh@960 -- # wait 1849027 00:15:07.499 05:10:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:07.499 05:10:44 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:07.499 05:10:44 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:07.499 05:10:44 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:07.499 05:10:44 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:07.499 05:10:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.499 05:10:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:07.499 05:10:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.033 05:10:46 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:10.033 00:15:10.033 real 0m12.213s 00:15:10.033 user 0m27.619s 00:15:10.033 sys 0m2.914s 00:15:10.033 05:10:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:10.033 05:10:46 -- common/autotest_common.sh@10 -- # set +x 00:15:10.033 
************************************ 00:15:10.033 END TEST nvmf_delete_subsystem 00:15:10.033 ************************************ 00:15:10.033 05:10:46 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:15:10.033 05:10:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:10.033 05:10:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:10.033 05:10:46 -- common/autotest_common.sh@10 -- # set +x 00:15:10.033 ************************************ 00:15:10.033 START TEST nvmf_ns_masking 00:15:10.033 ************************************ 00:15:10.033 05:10:46 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:15:10.033 * Looking for test storage... 00:15:10.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:10.033 05:10:46 -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:10.033 05:10:46 -- nvmf/common.sh@7 -- # uname -s 00:15:10.033 05:10:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:10.033 05:10:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:10.034 05:10:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:10.034 05:10:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:10.034 05:10:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:10.034 05:10:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:10.034 05:10:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:10.034 05:10:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:10.034 05:10:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:10.034 05:10:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:10.034 05:10:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:10.034 05:10:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:10.034 
05:10:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:10.034 05:10:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:10.034 05:10:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:10.034 05:10:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:10.034 05:10:47 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:10.034 05:10:47 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:10.034 05:10:47 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:10.034 05:10:47 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:10.034 05:10:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.034 05:10:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.034 05:10:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.034 05:10:47 -- paths/export.sh@5 -- # export PATH 00:15:10.034 05:10:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.034 05:10:47 -- nvmf/common.sh@47 -- # : 0 00:15:10.034 05:10:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:10.034 05:10:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:10.034 05:10:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:10.034 05:10:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:10.034 05:10:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:10.034 05:10:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:10.034 05:10:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:10.034 05:10:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:10.034 05:10:47 -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:10.034 05:10:47 -- target/ns_masking.sh@11 -- # loops=5 
00:15:10.034 05:10:47 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:10.034 05:10:47 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:15:10.034 05:10:47 -- target/ns_masking.sh@15 -- # uuidgen 00:15:10.034 05:10:47 -- target/ns_masking.sh@15 -- # HOSTID=abf7e9fd-c8e9-4eec-bea8-0d63bb50fd80 00:15:10.034 05:10:47 -- target/ns_masking.sh@44 -- # nvmftestinit 00:15:10.034 05:10:47 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:10.034 05:10:47 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:10.034 05:10:47 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:10.034 05:10:47 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:10.034 05:10:47 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:10.034 05:10:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:10.034 05:10:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:10.034 05:10:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.034 05:10:47 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:10.034 05:10:47 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:10.034 05:10:47 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:10.034 05:10:47 -- common/autotest_common.sh@10 -- # set +x 00:15:11.934 05:10:48 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:11.934 05:10:48 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:11.934 05:10:48 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:11.934 05:10:48 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:11.934 05:10:48 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:11.934 05:10:48 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:11.934 05:10:48 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:11.934 05:10:48 -- nvmf/common.sh@295 -- # net_devs=() 00:15:11.934 05:10:48 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:11.934 05:10:48 -- nvmf/common.sh@296 -- # e810=() 00:15:11.934 05:10:48 -- 
nvmf/common.sh@296 -- # local -ga e810 00:15:11.934 05:10:48 -- nvmf/common.sh@297 -- # x722=() 00:15:11.934 05:10:48 -- nvmf/common.sh@297 -- # local -ga x722 00:15:11.934 05:10:48 -- nvmf/common.sh@298 -- # mlx=() 00:15:11.934 05:10:48 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:11.934 05:10:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:11.934 05:10:48 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:11.934 05:10:48 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:11.934 05:10:48 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:11.934 05:10:48 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:11.934 05:10:48 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:11.934 05:10:48 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:11.934 05:10:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:11.934 05:10:48 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:11.934 05:10:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:11.934 05:10:48 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:11.934 05:10:48 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:11.934 05:10:48 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:11.934 05:10:48 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:11.934 05:10:48 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:11.934 05:10:48 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:11.934 05:10:48 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:11.934 05:10:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:11.934 05:10:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:11.934 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:11.934 05:10:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:15:11.934 05:10:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:11.934 05:10:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:11.934 05:10:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:11.934 05:10:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:11.934 05:10:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:11.934 05:10:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:11.934 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:11.934 05:10:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:11.934 05:10:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:11.934 05:10:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:11.935 05:10:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:11.935 05:10:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:11.935 05:10:48 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:11.935 05:10:48 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:11.935 05:10:48 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:11.935 05:10:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:11.935 05:10:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:11.935 05:10:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:11.935 05:10:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:11.935 05:10:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:11.935 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:11.935 05:10:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:11.935 05:10:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:11.935 05:10:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:11.935 05:10:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:11.935 05:10:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:11.935 05:10:48 -- 
nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:11.935 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:11.935 05:10:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:11.935 05:10:48 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:11.935 05:10:48 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:11.935 05:10:48 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:11.935 05:10:48 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:11.935 05:10:48 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:11.935 05:10:48 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:11.935 05:10:48 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:11.935 05:10:48 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:11.935 05:10:48 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:11.935 05:10:48 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:11.935 05:10:48 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:11.935 05:10:48 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:11.935 05:10:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:11.935 05:10:48 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:11.935 05:10:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:11.935 05:10:48 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:11.935 05:10:48 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:11.935 05:10:48 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:11.935 05:10:48 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:11.935 05:10:49 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:11.935 05:10:49 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:11.935 05:10:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:11.935 05:10:49 -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:11.935 05:10:49 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:11.935 05:10:49 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:11.935 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:11.935 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:15:11.935 00:15:11.935 --- 10.0.0.2 ping statistics --- 00:15:11.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.935 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:15:11.935 05:10:49 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:11.935 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:11.935 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:15:11.935 00:15:11.935 --- 10.0.0.1 ping statistics --- 00:15:11.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.935 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:15:11.935 05:10:49 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:11.935 05:10:49 -- nvmf/common.sh@411 -- # return 0 00:15:11.935 05:10:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:11.935 05:10:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:11.935 05:10:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:11.935 05:10:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:11.935 05:10:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:11.935 05:10:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:11.935 05:10:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:11.935 05:10:49 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:15:11.935 05:10:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:11.935 05:10:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:11.935 05:10:49 -- common/autotest_common.sh@10 -- # set +x 00:15:11.935 05:10:49 -- nvmf/common.sh@470 -- # 
nvmfpid=1851923 00:15:11.935 05:10:49 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:11.935 05:10:49 -- nvmf/common.sh@471 -- # waitforlisten 1851923 00:15:11.935 05:10:49 -- common/autotest_common.sh@817 -- # '[' -z 1851923 ']' 00:15:11.935 05:10:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.935 05:10:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:11.935 05:10:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.935 05:10:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:11.935 05:10:49 -- common/autotest_common.sh@10 -- # set +x 00:15:11.935 [2024-04-24 05:10:49.134779] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:15:11.935 [2024-04-24 05:10:49.134858] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:11.935 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.935 [2024-04-24 05:10:49.171236] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:11.935 [2024-04-24 05:10:49.203172] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:12.194 [2024-04-24 05:10:49.299970] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:12.194 [2024-04-24 05:10:49.300044] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:12.194 [2024-04-24 05:10:49.300060] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:12.194 [2024-04-24 05:10:49.300078] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:12.194 [2024-04-24 05:10:49.300091] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:12.194 [2024-04-24 05:10:49.300177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:12.194 [2024-04-24 05:10:49.300213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:12.194 [2024-04-24 05:10:49.300267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:12.194 [2024-04-24 05:10:49.300269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.194 05:10:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:12.194 05:10:49 -- common/autotest_common.sh@850 -- # return 0 00:15:12.194 05:10:49 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:12.194 05:10:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:12.194 05:10:49 -- common/autotest_common.sh@10 -- # set +x 00:15:12.194 05:10:49 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:12.194 05:10:49 -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:12.452 [2024-04-24 05:10:49.721366] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:12.710 05:10:49 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:15:12.710 05:10:49 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:15:12.710 05:10:49 -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:12.968 Malloc1 00:15:12.968 05:10:50 -- target/ns_masking.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:13.227 Malloc2 00:15:13.227 05:10:50 -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:13.485 05:10:50 -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:13.743 05:10:50 -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:14.002 [2024-04-24 05:10:51.042119] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:14.002 05:10:51 -- target/ns_masking.sh@61 -- # connect 00:15:14.002 05:10:51 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I abf7e9fd-c8e9-4eec-bea8-0d63bb50fd80 -a 10.0.0.2 -s 4420 -i 4 00:15:14.002 05:10:51 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:15:14.002 05:10:51 -- common/autotest_common.sh@1184 -- # local i=0 00:15:14.002 05:10:51 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:15:14.002 05:10:51 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:15:14.002 05:10:51 -- common/autotest_common.sh@1191 -- # sleep 2 00:15:16.537 05:10:53 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:15:16.537 05:10:53 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:15:16.537 05:10:53 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:15:16.537 05:10:53 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:15:16.537 05:10:53 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:15:16.537 05:10:53 -- common/autotest_common.sh@1194 -- # 
return 0 00:15:16.537 05:10:53 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:16.537 05:10:53 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:16.538 05:10:53 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:16.538 05:10:53 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:16.538 05:10:53 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:15:16.538 05:10:53 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:16.538 05:10:53 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:16.538 [ 0]:0x1 00:15:16.538 05:10:53 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:16.538 05:10:53 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:16.538 05:10:53 -- target/ns_masking.sh@40 -- # nguid=86b62789816e40beb22bc0e60719ba9f 00:15:16.538 05:10:53 -- target/ns_masking.sh@41 -- # [[ 86b62789816e40beb22bc0e60719ba9f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:16.538 05:10:53 -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:16.538 05:10:53 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:15:16.538 05:10:53 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:16.538 05:10:53 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:16.538 [ 0]:0x1 00:15:16.538 05:10:53 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:16.538 05:10:53 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:16.538 05:10:53 -- target/ns_masking.sh@40 -- # nguid=86b62789816e40beb22bc0e60719ba9f 00:15:16.538 05:10:53 -- target/ns_masking.sh@41 -- # [[ 86b62789816e40beb22bc0e60719ba9f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:16.538 05:10:53 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:15:16.538 05:10:53 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 
00:15:16.538 05:10:53 -- target/ns_masking.sh@39 -- # grep 0x2 00:15:16.538 [ 1]:0x2 00:15:16.538 05:10:53 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:16.538 05:10:53 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:16.538 05:10:53 -- target/ns_masking.sh@40 -- # nguid=54e42c0c12f7437ebf451c8297b474b2 00:15:16.538 05:10:53 -- target/ns_masking.sh@41 -- # [[ 54e42c0c12f7437ebf451c8297b474b2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:16.538 05:10:53 -- target/ns_masking.sh@69 -- # disconnect 00:15:16.538 05:10:53 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:16.796 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.796 05:10:53 -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:17.054 05:10:54 -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:17.312 05:10:54 -- target/ns_masking.sh@77 -- # connect 1 00:15:17.312 05:10:54 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I abf7e9fd-c8e9-4eec-bea8-0d63bb50fd80 -a 10.0.0.2 -s 4420 -i 4 00:15:17.312 05:10:54 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:17.312 05:10:54 -- common/autotest_common.sh@1184 -- # local i=0 00:15:17.312 05:10:54 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:15:17.312 05:10:54 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]] 00:15:17.312 05:10:54 -- common/autotest_common.sh@1187 -- # nvme_device_counter=1 00:15:17.312 05:10:54 -- common/autotest_common.sh@1191 -- # sleep 2 00:15:19.219 05:10:56 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:15:19.219 05:10:56 -- common/autotest_common.sh@1193 -- # 
lsblk -l -o NAME,SERIAL 00:15:19.219 05:10:56 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:15:19.219 05:10:56 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:15:19.219 05:10:56 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:15:19.219 05:10:56 -- common/autotest_common.sh@1194 -- # return 0 00:15:19.219 05:10:56 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:19.219 05:10:56 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:19.478 05:10:56 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:19.478 05:10:56 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:19.478 05:10:56 -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:15:19.478 05:10:56 -- common/autotest_common.sh@638 -- # local es=0 00:15:19.478 05:10:56 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:15:19.478 05:10:56 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:15:19.478 05:10:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:19.478 05:10:56 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:15:19.478 05:10:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:19.478 05:10:56 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:15:19.478 05:10:56 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:19.478 05:10:56 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:19.478 05:10:56 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:19.478 05:10:56 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:19.478 05:10:56 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:19.478 05:10:56 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:19.478 05:10:56 -- common/autotest_common.sh@641 -- # es=1 
00:15:19.478 05:10:56 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:19.478 05:10:56 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:19.478 05:10:56 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:19.478 05:10:56 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:15:19.478 05:10:56 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:19.478 05:10:56 -- target/ns_masking.sh@39 -- # grep 0x2 00:15:19.478 [ 0]:0x2 00:15:19.478 05:10:56 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:19.478 05:10:56 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:19.478 05:10:56 -- target/ns_masking.sh@40 -- # nguid=54e42c0c12f7437ebf451c8297b474b2 00:15:19.478 05:10:56 -- target/ns_masking.sh@41 -- # [[ 54e42c0c12f7437ebf451c8297b474b2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:19.478 05:10:56 -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:19.736 05:10:56 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:15:19.736 05:10:56 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:19.736 05:10:56 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:19.736 [ 0]:0x1 00:15:19.736 05:10:56 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:19.736 05:10:56 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:19.736 05:10:56 -- target/ns_masking.sh@40 -- # nguid=86b62789816e40beb22bc0e60719ba9f 00:15:19.736 05:10:56 -- target/ns_masking.sh@41 -- # [[ 86b62789816e40beb22bc0e60719ba9f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:19.736 05:10:56 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:15:19.736 05:10:56 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:19.736 05:10:56 -- target/ns_masking.sh@39 -- # grep 0x2 00:15:19.736 [ 1]:0x2 00:15:19.736 05:10:56 -- target/ns_masking.sh@40 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:19.736 05:10:56 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:19.737 05:10:56 -- target/ns_masking.sh@40 -- # nguid=54e42c0c12f7437ebf451c8297b474b2 00:15:19.737 05:10:56 -- target/ns_masking.sh@41 -- # [[ 54e42c0c12f7437ebf451c8297b474b2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:19.737 05:10:56 -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:19.995 05:10:57 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:15:19.995 05:10:57 -- common/autotest_common.sh@638 -- # local es=0 00:15:19.995 05:10:57 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:15:19.995 05:10:57 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:15:19.995 05:10:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:19.995 05:10:57 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:15:19.995 05:10:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:19.995 05:10:57 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:15:19.995 05:10:57 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:19.995 05:10:57 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:19.995 05:10:57 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:19.995 05:10:57 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:19.995 05:10:57 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:19.995 05:10:57 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:19.995 05:10:57 -- common/autotest_common.sh@641 -- # es=1 00:15:19.995 05:10:57 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:19.995 05:10:57 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:19.995 05:10:57 
-- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:19.995 05:10:57 -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:15:19.995 05:10:57 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:19.995 05:10:57 -- target/ns_masking.sh@39 -- # grep 0x2 00:15:19.995 [ 0]:0x2 00:15:19.995 05:10:57 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:19.995 05:10:57 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:20.254 05:10:57 -- target/ns_masking.sh@40 -- # nguid=54e42c0c12f7437ebf451c8297b474b2 00:15:20.254 05:10:57 -- target/ns_masking.sh@41 -- # [[ 54e42c0c12f7437ebf451c8297b474b2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:20.254 05:10:57 -- target/ns_masking.sh@91 -- # disconnect 00:15:20.254 05:10:57 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:20.254 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:20.254 05:10:57 -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:20.513 05:10:57 -- target/ns_masking.sh@95 -- # connect 2 00:15:20.513 05:10:57 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I abf7e9fd-c8e9-4eec-bea8-0d63bb50fd80 -a 10.0.0.2 -s 4420 -i 4 00:15:20.513 05:10:57 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:20.513 05:10:57 -- common/autotest_common.sh@1184 -- # local i=0 00:15:20.513 05:10:57 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:15:20.513 05:10:57 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:15:20.513 05:10:57 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:15:20.513 05:10:57 -- common/autotest_common.sh@1191 -- # sleep 2 00:15:22.420 05:10:59 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:15:22.420 05:10:59 -- 
common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:15:22.420 05:10:59 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:15:22.678 05:10:59 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:15:22.678 05:10:59 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:15:22.678 05:10:59 -- common/autotest_common.sh@1194 -- # return 0 00:15:22.678 05:10:59 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:22.678 05:10:59 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:22.678 05:10:59 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:22.678 05:10:59 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:22.678 05:10:59 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:15:22.678 05:10:59 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:22.678 05:10:59 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:22.678 [ 0]:0x1 00:15:22.678 05:10:59 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:22.678 05:10:59 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:22.678 05:10:59 -- target/ns_masking.sh@40 -- # nguid=86b62789816e40beb22bc0e60719ba9f 00:15:22.678 05:10:59 -- target/ns_masking.sh@41 -- # [[ 86b62789816e40beb22bc0e60719ba9f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:22.678 05:10:59 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:15:22.678 05:10:59 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:22.678 05:10:59 -- target/ns_masking.sh@39 -- # grep 0x2 00:15:22.678 [ 1]:0x2 00:15:22.678 05:10:59 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:22.678 05:10:59 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:22.678 05:10:59 -- target/ns_masking.sh@40 -- # nguid=54e42c0c12f7437ebf451c8297b474b2 00:15:22.678 05:10:59 -- target/ns_masking.sh@41 -- # [[ 54e42c0c12f7437ebf451c8297b474b2 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:22.678 05:10:59 -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:22.937 05:11:00 -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:15:22.937 05:11:00 -- common/autotest_common.sh@638 -- # local es=0 00:15:22.937 05:11:00 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:15:22.937 05:11:00 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:15:22.937 05:11:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:22.937 05:11:00 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:15:22.937 05:11:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:22.937 05:11:00 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:15:22.937 05:11:00 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:22.937 05:11:00 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:22.937 05:11:00 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:22.937 05:11:00 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:22.937 05:11:00 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:22.937 05:11:00 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:22.937 05:11:00 -- common/autotest_common.sh@641 -- # es=1 00:15:22.937 05:11:00 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:22.937 05:11:00 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:22.937 05:11:00 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:22.937 05:11:00 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:15:22.937 05:11:00 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:22.937 05:11:00 -- target/ns_masking.sh@39 -- # grep 0x2 00:15:22.937 [ 0]:0x2 
00:15:22.937 05:11:00 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:22.937 05:11:00 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:23.195 05:11:00 -- target/ns_masking.sh@40 -- # nguid=54e42c0c12f7437ebf451c8297b474b2 00:15:23.195 05:11:00 -- target/ns_masking.sh@41 -- # [[ 54e42c0c12f7437ebf451c8297b474b2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:23.195 05:11:00 -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:23.195 05:11:00 -- common/autotest_common.sh@638 -- # local es=0 00:15:23.195 05:11:00 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:23.195 05:11:00 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:23.195 05:11:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:23.195 05:11:00 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:23.195 05:11:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:23.195 05:11:00 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:23.195 05:11:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:23.195 05:11:00 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:23.195 05:11:00 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:23.195 05:11:00 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 
nqn.2016-06.io.spdk:host1 00:15:23.195 [2024-04-24 05:11:00.451642] nvmf_rpc.c:1779:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:23.195 request: 00:15:23.195 { 00:15:23.195 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:23.195 "nsid": 2, 00:15:23.195 "host": "nqn.2016-06.io.spdk:host1", 00:15:23.195 "method": "nvmf_ns_remove_host", 00:15:23.195 "req_id": 1 00:15:23.195 } 00:15:23.195 Got JSON-RPC error response 00:15:23.195 response: 00:15:23.195 { 00:15:23.195 "code": -32602, 00:15:23.195 "message": "Invalid parameters" 00:15:23.195 } 00:15:23.454 05:11:00 -- common/autotest_common.sh@641 -- # es=1 00:15:23.454 05:11:00 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:23.454 05:11:00 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:23.454 05:11:00 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:23.454 05:11:00 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:15:23.454 05:11:00 -- common/autotest_common.sh@638 -- # local es=0 00:15:23.454 05:11:00 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:15:23.454 05:11:00 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:15:23.454 05:11:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:23.454 05:11:00 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:15:23.454 05:11:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:23.454 05:11:00 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:15:23.454 05:11:00 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:23.454 05:11:00 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:23.454 05:11:00 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:23.454 05:11:00 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:23.454 05:11:00 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:23.454 05:11:00 -- target/ns_masking.sh@41 -- # [[ 
00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:23.454 05:11:00 -- common/autotest_common.sh@641 -- # es=1 00:15:23.454 05:11:00 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:23.454 05:11:00 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:23.454 05:11:00 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:23.454 05:11:00 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:15:23.454 05:11:00 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:23.454 05:11:00 -- target/ns_masking.sh@39 -- # grep 0x2 00:15:23.454 [ 0]:0x2 00:15:23.454 05:11:00 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:23.454 05:11:00 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:23.454 05:11:00 -- target/ns_masking.sh@40 -- # nguid=54e42c0c12f7437ebf451c8297b474b2 00:15:23.454 05:11:00 -- target/ns_masking.sh@41 -- # [[ 54e42c0c12f7437ebf451c8297b474b2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:23.454 05:11:00 -- target/ns_masking.sh@108 -- # disconnect 00:15:23.454 05:11:00 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:23.454 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:23.454 05:11:00 -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:23.714 05:11:00 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:23.714 05:11:00 -- target/ns_masking.sh@114 -- # nvmftestfini 00:15:23.714 05:11:00 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:23.714 05:11:00 -- nvmf/common.sh@117 -- # sync 00:15:23.714 05:11:00 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:23.714 05:11:00 -- nvmf/common.sh@120 -- # set +e 00:15:23.714 05:11:00 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:23.714 05:11:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:23.714 rmmod 
nvme_tcp 00:15:23.714 rmmod nvme_fabrics 00:15:23.714 rmmod nvme_keyring 00:15:23.714 05:11:00 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:23.714 05:11:00 -- nvmf/common.sh@124 -- # set -e 00:15:23.714 05:11:00 -- nvmf/common.sh@125 -- # return 0 00:15:23.714 05:11:00 -- nvmf/common.sh@478 -- # '[' -n 1851923 ']' 00:15:23.714 05:11:00 -- nvmf/common.sh@479 -- # killprocess 1851923 00:15:23.714 05:11:00 -- common/autotest_common.sh@936 -- # '[' -z 1851923 ']' 00:15:23.714 05:11:00 -- common/autotest_common.sh@940 -- # kill -0 1851923 00:15:23.714 05:11:00 -- common/autotest_common.sh@941 -- # uname 00:15:23.714 05:11:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:23.714 05:11:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1851923 00:15:23.714 05:11:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:23.714 05:11:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:23.714 05:11:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1851923' 00:15:23.714 killing process with pid 1851923 00:15:23.714 05:11:00 -- common/autotest_common.sh@955 -- # kill 1851923 00:15:23.714 05:11:00 -- common/autotest_common.sh@960 -- # wait 1851923 00:15:23.974 05:11:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:23.974 05:11:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:23.974 05:11:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:23.974 05:11:01 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:23.974 05:11:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:23.974 05:11:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:23.974 05:11:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:23.974 05:11:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.515 05:11:03 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:26.515 00:15:26.515 real 0m16.300s 
00:15:26.515 user 0m50.795s 00:15:26.515 sys 0m3.796s 00:15:26.515 05:11:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:26.515 05:11:03 -- common/autotest_common.sh@10 -- # set +x 00:15:26.515 ************************************ 00:15:26.515 END TEST nvmf_ns_masking 00:15:26.515 ************************************ 00:15:26.515 05:11:03 -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:15:26.515 05:11:03 -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:26.515 05:11:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:26.515 05:11:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:26.515 05:11:03 -- common/autotest_common.sh@10 -- # set +x 00:15:26.515 ************************************ 00:15:26.515 START TEST nvmf_nvme_cli 00:15:26.515 ************************************ 00:15:26.515 05:11:03 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:26.515 * Looking for test storage... 
00:15:26.515 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:26.515 05:11:03 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:26.516 05:11:03 -- nvmf/common.sh@7 -- # uname -s 00:15:26.516 05:11:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:26.516 05:11:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:26.516 05:11:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:26.516 05:11:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:26.516 05:11:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:26.516 05:11:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:26.516 05:11:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:26.516 05:11:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:26.516 05:11:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:26.516 05:11:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:26.516 05:11:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:26.516 05:11:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:26.516 05:11:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:26.516 05:11:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:26.516 05:11:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:26.516 05:11:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:26.516 05:11:03 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:26.516 05:11:03 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:26.516 05:11:03 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:26.516 05:11:03 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:26.516 05:11:03 -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.516 05:11:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.516 05:11:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.516 05:11:03 -- paths/export.sh@5 -- # export PATH 00:15:26.516 05:11:03 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.516 05:11:03 -- nvmf/common.sh@47 -- # : 0 00:15:26.516 05:11:03 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:26.516 05:11:03 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:26.516 05:11:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:26.516 05:11:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:26.516 05:11:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:26.516 05:11:03 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:26.516 05:11:03 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:26.516 05:11:03 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:26.516 05:11:03 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:26.516 05:11:03 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:26.516 05:11:03 -- target/nvme_cli.sh@14 -- # devs=() 00:15:26.516 05:11:03 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:26.516 05:11:03 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:26.516 05:11:03 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:26.516 05:11:03 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:26.516 05:11:03 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:26.516 05:11:03 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:26.516 05:11:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:26.516 05:11:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:26.516 05:11:03 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.516 05:11:03 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:26.516 05:11:03 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:26.516 05:11:03 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:26.516 05:11:03 -- common/autotest_common.sh@10 -- # set +x 00:15:28.422 05:11:05 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:28.422 05:11:05 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:28.422 05:11:05 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:28.422 05:11:05 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:28.422 05:11:05 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:28.422 05:11:05 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:28.422 05:11:05 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:28.422 05:11:05 -- nvmf/common.sh@295 -- # net_devs=() 00:15:28.422 05:11:05 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:28.422 05:11:05 -- nvmf/common.sh@296 -- # e810=() 00:15:28.422 05:11:05 -- nvmf/common.sh@296 -- # local -ga e810 00:15:28.422 05:11:05 -- nvmf/common.sh@297 -- # x722=() 00:15:28.422 05:11:05 -- nvmf/common.sh@297 -- # local -ga x722 00:15:28.422 05:11:05 -- nvmf/common.sh@298 -- # mlx=() 00:15:28.422 05:11:05 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:28.422 05:11:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:28.422 05:11:05 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:28.422 05:11:05 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:28.422 05:11:05 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:28.422 05:11:05 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:28.422 05:11:05 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:28.422 05:11:05 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:28.422 05:11:05 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:28.422 05:11:05 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:28.422 05:11:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:28.422 05:11:05 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:28.422 05:11:05 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:28.422 05:11:05 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:28.422 05:11:05 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:28.422 05:11:05 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:28.422 05:11:05 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:28.422 05:11:05 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:28.422 05:11:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:28.422 05:11:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:28.422 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:28.422 05:11:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:28.422 05:11:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:28.422 05:11:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:28.422 05:11:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:28.422 05:11:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:28.422 05:11:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:28.422 05:11:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:28.422 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:28.422 05:11:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:28.422 05:11:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:28.422 05:11:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:28.422 05:11:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:28.422 05:11:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:28.422 05:11:05 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:28.422 05:11:05 -- 
nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:28.422 05:11:05 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:28.422 05:11:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:28.422 05:11:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:28.422 05:11:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:28.422 05:11:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:28.422 05:11:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:28.422 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:28.422 05:11:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:28.422 05:11:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:28.422 05:11:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:28.422 05:11:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:28.422 05:11:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:28.422 05:11:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:28.422 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:28.422 05:11:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:28.422 05:11:05 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:28.422 05:11:05 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:28.422 05:11:05 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:28.422 05:11:05 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:28.422 05:11:05 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:28.422 05:11:05 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:28.422 05:11:05 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:28.422 05:11:05 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:28.422 05:11:05 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:28.422 05:11:05 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:28.422 05:11:05 -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:28.422 05:11:05 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:28.422 05:11:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:28.422 05:11:05 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:28.422 05:11:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:28.422 05:11:05 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:28.422 05:11:05 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:28.422 05:11:05 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:28.422 05:11:05 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:28.422 05:11:05 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:28.422 05:11:05 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:28.422 05:11:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:28.423 05:11:05 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:28.423 05:11:05 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:28.423 05:11:05 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:28.423 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:28.423 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:15:28.423 00:15:28.423 --- 10.0.0.2 ping statistics --- 00:15:28.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.423 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:15:28.423 05:11:05 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:28.423 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:28.423 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:15:28.423 00:15:28.423 --- 10.0.0.1 ping statistics --- 00:15:28.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.423 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:15:28.423 05:11:05 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:28.423 05:11:05 -- nvmf/common.sh@411 -- # return 0 00:15:28.423 05:11:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:28.423 05:11:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:28.423 05:11:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:28.423 05:11:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:28.423 05:11:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:28.423 05:11:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:28.423 05:11:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:28.423 05:11:05 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:28.423 05:11:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:28.423 05:11:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:28.423 05:11:05 -- common/autotest_common.sh@10 -- # set +x 00:15:28.423 05:11:05 -- nvmf/common.sh@470 -- # nvmfpid=1855371 00:15:28.423 05:11:05 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:28.423 05:11:05 -- nvmf/common.sh@471 -- # waitforlisten 1855371 00:15:28.423 05:11:05 -- common/autotest_common.sh@817 -- # '[' -z 1855371 ']' 00:15:28.423 05:11:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.423 05:11:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:28.423 05:11:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:28.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.423 05:11:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:28.423 05:11:05 -- common/autotest_common.sh@10 -- # set +x 00:15:28.423 [2024-04-24 05:11:05.646653] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:15:28.423 [2024-04-24 05:11:05.646738] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:28.423 EAL: No free 2048 kB hugepages reported on node 1 00:15:28.423 [2024-04-24 05:11:05.691092] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:28.682 [2024-04-24 05:11:05.721730] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:28.682 [2024-04-24 05:11:05.816705] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:28.682 [2024-04-24 05:11:05.816761] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:28.682 [2024-04-24 05:11:05.816777] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:28.682 [2024-04-24 05:11:05.816791] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:28.682 [2024-04-24 05:11:05.816803] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:28.682 [2024-04-24 05:11:05.816888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:28.682 [2024-04-24 05:11:05.816943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:28.682 [2024-04-24 05:11:05.816993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:28.682 [2024-04-24 05:11:05.816996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.682 05:11:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:28.682 05:11:05 -- common/autotest_common.sh@850 -- # return 0 00:15:28.682 05:11:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:28.682 05:11:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:28.682 05:11:05 -- common/autotest_common.sh@10 -- # set +x 00:15:28.941 05:11:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:28.941 05:11:05 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:28.941 05:11:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:28.941 05:11:05 -- common/autotest_common.sh@10 -- # set +x 00:15:28.941 [2024-04-24 05:11:05.974576] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:28.941 05:11:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:28.941 05:11:05 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:28.941 05:11:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:28.941 05:11:05 -- common/autotest_common.sh@10 -- # set +x 00:15:28.941 Malloc0 00:15:28.941 05:11:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:28.941 05:11:06 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:28.941 05:11:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:28.941 05:11:06 -- common/autotest_common.sh@10 -- # set +x 00:15:28.941 Malloc1 00:15:28.941 05:11:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:15:28.941 05:11:06 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:28.941 05:11:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:28.941 05:11:06 -- common/autotest_common.sh@10 -- # set +x 00:15:28.941 05:11:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:28.941 05:11:06 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:28.941 05:11:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:28.941 05:11:06 -- common/autotest_common.sh@10 -- # set +x 00:15:28.941 05:11:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:28.941 05:11:06 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:28.941 05:11:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:28.941 05:11:06 -- common/autotest_common.sh@10 -- # set +x 00:15:28.941 05:11:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:28.941 05:11:06 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:28.941 05:11:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:28.941 05:11:06 -- common/autotest_common.sh@10 -- # set +x 00:15:28.941 [2024-04-24 05:11:06.060815] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:28.941 05:11:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:28.941 05:11:06 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:28.941 05:11:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:28.941 05:11:06 -- common/autotest_common.sh@10 -- # set +x 00:15:28.941 05:11:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:28.941 05:11:06 -- target/nvme_cli.sh@30 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:15:29.200 00:15:29.200 Discovery Log Number of Records 2, Generation counter 2 00:15:29.200 =====Discovery Log Entry 0====== 00:15:29.200 trtype: tcp 00:15:29.200 adrfam: ipv4 00:15:29.200 subtype: current discovery subsystem 00:15:29.200 treq: not required 00:15:29.200 portid: 0 00:15:29.200 trsvcid: 4420 00:15:29.200 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:29.200 traddr: 10.0.0.2 00:15:29.200 eflags: explicit discovery connections, duplicate discovery information 00:15:29.200 sectype: none 00:15:29.200 =====Discovery Log Entry 1====== 00:15:29.200 trtype: tcp 00:15:29.200 adrfam: ipv4 00:15:29.200 subtype: nvme subsystem 00:15:29.200 treq: not required 00:15:29.200 portid: 0 00:15:29.200 trsvcid: 4420 00:15:29.200 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:29.200 traddr: 10.0.0.2 00:15:29.200 eflags: none 00:15:29.200 sectype: none 00:15:29.200 05:11:06 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:29.200 05:11:06 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:29.200 05:11:06 -- nvmf/common.sh@511 -- # local dev _ 00:15:29.200 05:11:06 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:29.200 05:11:06 -- nvmf/common.sh@510 -- # nvme list 00:15:29.200 05:11:06 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:15:29.200 05:11:06 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:29.200 05:11:06 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:15:29.200 05:11:06 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:29.200 05:11:06 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:29.200 05:11:06 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:29.770 05:11:06 -- 
target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:29.770 05:11:06 -- common/autotest_common.sh@1184 -- # local i=0 00:15:29.770 05:11:06 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:15:29.770 05:11:06 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:15:29.770 05:11:06 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:15:29.770 05:11:06 -- common/autotest_common.sh@1191 -- # sleep 2 00:15:31.673 05:11:08 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:15:31.673 05:11:08 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:15:31.673 05:11:08 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:15:31.673 05:11:08 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:15:31.673 05:11:08 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:15:31.673 05:11:08 -- common/autotest_common.sh@1194 -- # return 0 00:15:31.673 05:11:08 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:31.673 05:11:08 -- nvmf/common.sh@511 -- # local dev _ 00:15:31.673 05:11:08 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:31.673 05:11:08 -- nvmf/common.sh@510 -- # nvme list 00:15:31.932 05:11:08 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:15:31.932 05:11:08 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:31.932 05:11:08 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:15:31.932 05:11:08 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:31.932 05:11:08 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:31.932 05:11:08 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:15:31.932 05:11:08 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:31.932 05:11:08 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:31.932 05:11:08 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:15:31.932 05:11:08 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:31.932 05:11:08 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 
00:15:31.932 /dev/nvme0n1 ]] 00:15:31.932 05:11:08 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:31.932 05:11:08 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:31.932 05:11:08 -- nvmf/common.sh@511 -- # local dev _ 00:15:31.932 05:11:08 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:31.932 05:11:08 -- nvmf/common.sh@510 -- # nvme list 00:15:31.932 05:11:09 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:15:31.932 05:11:09 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:31.932 05:11:09 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:15:31.932 05:11:09 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:31.932 05:11:09 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:31.932 05:11:09 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:15:31.932 05:11:09 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:31.932 05:11:09 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:31.932 05:11:09 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:15:31.932 05:11:09 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:31.932 05:11:09 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:31.932 05:11:09 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:32.191 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:32.191 05:11:09 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:32.191 05:11:09 -- common/autotest_common.sh@1205 -- # local i=0 00:15:32.191 05:11:09 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:15:32.191 05:11:09 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:32.191 05:11:09 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:15:32.191 05:11:09 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:32.191 05:11:09 -- common/autotest_common.sh@1217 -- # return 0 00:15:32.191 05:11:09 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 
00:15:32.191 05:11:09 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:32.191 05:11:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:32.191 05:11:09 -- common/autotest_common.sh@10 -- # set +x 00:15:32.191 05:11:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:32.191 05:11:09 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:32.191 05:11:09 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:32.191 05:11:09 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:32.191 05:11:09 -- nvmf/common.sh@117 -- # sync 00:15:32.191 05:11:09 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:32.191 05:11:09 -- nvmf/common.sh@120 -- # set +e 00:15:32.191 05:11:09 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:32.191 05:11:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:32.191 rmmod nvme_tcp 00:15:32.191 rmmod nvme_fabrics 00:15:32.191 rmmod nvme_keyring 00:15:32.451 05:11:09 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:32.451 05:11:09 -- nvmf/common.sh@124 -- # set -e 00:15:32.451 05:11:09 -- nvmf/common.sh@125 -- # return 0 00:15:32.451 05:11:09 -- nvmf/common.sh@478 -- # '[' -n 1855371 ']' 00:15:32.451 05:11:09 -- nvmf/common.sh@479 -- # killprocess 1855371 00:15:32.451 05:11:09 -- common/autotest_common.sh@936 -- # '[' -z 1855371 ']' 00:15:32.451 05:11:09 -- common/autotest_common.sh@940 -- # kill -0 1855371 00:15:32.451 05:11:09 -- common/autotest_common.sh@941 -- # uname 00:15:32.451 05:11:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:32.451 05:11:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1855371 00:15:32.451 05:11:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:32.451 05:11:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:32.451 05:11:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1855371' 00:15:32.451 killing process with pid 1855371 00:15:32.451 05:11:09 -- 
common/autotest_common.sh@955 -- # kill 1855371 00:15:32.451 05:11:09 -- common/autotest_common.sh@960 -- # wait 1855371 00:15:32.711 05:11:09 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:32.711 05:11:09 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:32.711 05:11:09 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:32.711 05:11:09 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:32.711 05:11:09 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:32.711 05:11:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:32.711 05:11:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:32.711 05:11:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.641 05:11:11 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:34.641 00:15:34.641 real 0m8.466s 00:15:34.641 user 0m16.278s 00:15:34.641 sys 0m2.202s 00:15:34.641 05:11:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:34.641 05:11:11 -- common/autotest_common.sh@10 -- # set +x 00:15:34.641 ************************************ 00:15:34.641 END TEST nvmf_nvme_cli 00:15:34.641 ************************************ 00:15:34.641 05:11:11 -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:15:34.641 05:11:11 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:34.641 05:11:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:34.641 05:11:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:34.641 05:11:11 -- common/autotest_common.sh@10 -- # set +x 00:15:34.900 ************************************ 00:15:34.900 START TEST nvmf_vfio_user 00:15:34.900 ************************************ 00:15:34.900 05:11:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:34.900 * Looking for test storage... 
00:15:34.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:34.900 05:11:12 -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:34.900 05:11:12 -- nvmf/common.sh@7 -- # uname -s 00:15:34.900 05:11:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:34.900 05:11:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:34.900 05:11:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:34.900 05:11:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:34.900 05:11:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:34.900 05:11:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:34.900 05:11:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:34.900 05:11:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:34.900 05:11:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:34.900 05:11:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:34.900 05:11:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:34.900 05:11:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:34.900 05:11:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:34.900 05:11:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:34.900 05:11:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:34.900 05:11:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:34.900 05:11:12 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:34.900 05:11:12 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:34.900 05:11:12 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:34.900 05:11:12 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:34.900 05:11:12 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.900 05:11:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.900 05:11:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.900 05:11:12 -- paths/export.sh@5 -- # export PATH 00:15:34.900 05:11:12 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.900 05:11:12 -- nvmf/common.sh@47 -- # : 0 00:15:34.900 05:11:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:34.900 05:11:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:34.900 05:11:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:34.900 05:11:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:34.900 05:11:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:34.900 05:11:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:34.900 05:11:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:34.900 05:11:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:34.900 05:11:12 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:34.900 05:11:12 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:34.900 05:11:12 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:34.900 05:11:12 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:34.901 05:11:12 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:34.901 05:11:12 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:34.901 05:11:12 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:34.901 05:11:12 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:34.901 05:11:12 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:34.901 05:11:12 -- target/nvmf_vfio_user.sh@52 -- # local 
transport_args= 00:15:34.901 05:11:12 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1856311 00:15:34.901 05:11:12 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:34.901 05:11:12 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1856311' 00:15:34.901 Process pid: 1856311 00:15:34.901 05:11:12 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:34.901 05:11:12 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1856311 00:15:34.901 05:11:12 -- common/autotest_common.sh@817 -- # '[' -z 1856311 ']' 00:15:34.901 05:11:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.901 05:11:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:34.901 05:11:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:34.901 05:11:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:34.901 05:11:12 -- common/autotest_common.sh@10 -- # set +x 00:15:34.901 [2024-04-24 05:11:12.075392] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:15:34.901 [2024-04-24 05:11:12.075479] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:34.901 EAL: No free 2048 kB hugepages reported on node 1 00:15:34.901 [2024-04-24 05:11:12.108104] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:15:34.901 [2024-04-24 05:11:12.133972] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:35.160 [2024-04-24 05:11:12.217868] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:35.160 [2024-04-24 05:11:12.217921] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:35.160 [2024-04-24 05:11:12.217950] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:35.160 [2024-04-24 05:11:12.217962] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:35.160 [2024-04-24 05:11:12.217973] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:35.160 [2024-04-24 05:11:12.218032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:35.160 [2024-04-24 05:11:12.218059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:35.160 [2024-04-24 05:11:12.218116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:35.160 [2024-04-24 05:11:12.218118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.160 05:11:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:35.160 05:11:12 -- common/autotest_common.sh@850 -- # return 0 00:15:35.160 05:11:12 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:36.099 05:11:13 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:36.358 05:11:13 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:36.358 05:11:13 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:36.617 05:11:13 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:36.617 05:11:13 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:36.617 05:11:13 -- target/nvmf_vfio_user.sh@71 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:36.617 Malloc1 00:15:36.875 05:11:13 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:36.875 05:11:14 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:37.133 05:11:14 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:37.391 05:11:14 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:37.391 05:11:14 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:37.391 05:11:14 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:37.649 Malloc2 00:15:37.649 05:11:14 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:37.907 05:11:15 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:38.165 05:11:15 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:38.424 05:11:15 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:38.424 05:11:15 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:38.424 05:11:15 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:38.424 05:11:15 -- target/nvmf_vfio_user.sh@81 -- # 
test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:38.424 05:11:15 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:38.424 05:11:15 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:38.424 [2024-04-24 05:11:15.624760] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:15:38.424 [2024-04-24 05:11:15.624798] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1856726 ] 00:15:38.424 EAL: No free 2048 kB hugepages reported on node 1 00:15:38.424 [2024-04-24 05:11:15.641404] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:15:38.424 [2024-04-24 05:11:15.659006] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:38.424 [2024-04-24 05:11:15.667130] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:38.424 [2024-04-24 05:11:15.667159] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fb077bcb000 00:15:38.424 [2024-04-24 05:11:15.668125] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:38.424 [2024-04-24 05:11:15.669116] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:38.424 [2024-04-24 05:11:15.670125] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:38.424 [2024-04-24 05:11:15.671134] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:38.424 [2024-04-24 05:11:15.672135] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:38.424 [2024-04-24 05:11:15.673143] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:38.424 [2024-04-24 05:11:15.674146] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:38.424 [2024-04-24 05:11:15.675158] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:38.424 [2024-04-24 05:11:15.676162] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 
00:15:38.424 [2024-04-24 05:11:15.676182] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fb07697c000 00:15:38.424 [2024-04-24 05:11:15.677335] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:38.424 [2024-04-24 05:11:15.693348] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:38.424 [2024-04-24 05:11:15.693405] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:38.687 [2024-04-24 05:11:15.698325] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:38.687 [2024-04-24 05:11:15.698377] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:38.687 [2024-04-24 05:11:15.698472] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:38.687 [2024-04-24 05:11:15.698513] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:38.687 [2024-04-24 05:11:15.698524] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:38.687 [2024-04-24 05:11:15.699310] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:38.687 [2024-04-24 05:11:15.699331] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:38.687 [2024-04-24 05:11:15.699343] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:38.687 [2024-04-24 05:11:15.700309] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:38.687 [2024-04-24 05:11:15.700327] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:38.687 [2024-04-24 05:11:15.700340] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:38.687 [2024-04-24 05:11:15.701320] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:38.687 [2024-04-24 05:11:15.701338] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:38.687 [2024-04-24 05:11:15.702323] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:38.687 [2024-04-24 05:11:15.702343] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:38.687 [2024-04-24 05:11:15.702352] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:38.687 [2024-04-24 05:11:15.702364] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:38.687 [2024-04-24 05:11:15.702473] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:38.687 [2024-04-24 05:11:15.702481] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:38.687 [2024-04-24 05:11:15.702489] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:38.687 [2024-04-24 05:11:15.703333] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:38.687 [2024-04-24 05:11:15.704335] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:38.687 [2024-04-24 05:11:15.705344] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:38.687 [2024-04-24 05:11:15.706338] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:38.687 [2024-04-24 05:11:15.706431] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:38.687 [2024-04-24 05:11:15.707356] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:38.687 [2024-04-24 05:11:15.707374] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:38.687 [2024-04-24 05:11:15.707387] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:38.687 [2024-04-24 05:11:15.707411] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:38.687 [2024-04-24 05:11:15.707430] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:38.687 [2024-04-24 05:11:15.707461] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:38.687 [2024-04-24 05:11:15.707471] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:38.687 [2024-04-24 05:11:15.707494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:38.687 [2024-04-24 05:11:15.707544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:38.687 [2024-04-24 05:11:15.707562] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:38.687 [2024-04-24 05:11:15.707571] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:38.687 [2024-04-24 05:11:15.707579] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:38.687 [2024-04-24 05:11:15.707586] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:38.687 [2024-04-24 05:11:15.707594] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:38.687 [2024-04-24 05:11:15.707602] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:38.687 [2024-04-24 05:11:15.707625] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:38.687 [2024-04-24 05:11:15.707647] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:38.687 [2024-04-24 05:11:15.707664] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:38.687 [2024-04-24 05:11:15.707678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:38.687 [2024-04-24 05:11:15.707701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.687 [2024-04-24 05:11:15.707715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.687 [2024-04-24 05:11:15.707727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.687 [2024-04-24 05:11:15.707739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.687 [2024-04-24 05:11:15.707748] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:38.687 [2024-04-24 05:11:15.707763] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:38.687 [2024-04-24 05:11:15.707778] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:38.687 [2024-04-24 05:11:15.707790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:38.687 [2024-04-24 05:11:15.707801] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:38.687 [2024-04-24 05:11:15.707814] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:38.687 [2024-04-24 05:11:15.707831] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:38.687 [2024-04-24 05:11:15.707844] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:38.687 [2024-04-24 05:11:15.707857] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:38.687 [2024-04-24 05:11:15.707871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:38.687 [2024-04-24 05:11:15.707937] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:38.687 [2024-04-24 05:11:15.707952] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:38.687 [2024-04-24 05:11:15.707966] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:38.687 [2024-04-24 05:11:15.707974] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:38.687 [2024-04-24 05:11:15.707983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:38.687 [2024-04-24 05:11:15.707999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 
cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:38.687 [2024-04-24 05:11:15.708018] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:38.687 [2024-04-24 05:11:15.708034] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:38.687 [2024-04-24 05:11:15.708048] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:38.687 [2024-04-24 05:11:15.708059] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:38.687 [2024-04-24 05:11:15.708067] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:38.687 [2024-04-24 05:11:15.708076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:38.687 [2024-04-24 05:11:15.708096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:38.687 [2024-04-24 05:11:15.708119] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:38.687 [2024-04-24 05:11:15.708134] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:38.688 [2024-04-24 05:11:15.708145] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:38.688 [2024-04-24 05:11:15.708152] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:38.688 [2024-04-24 05:11:15.708162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 
cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:38.688 [2024-04-24 05:11:15.708173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:38.688 [2024-04-24 05:11:15.708187] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:38.688 [2024-04-24 05:11:15.708201] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:15:38.688 [2024-04-24 05:11:15.708217] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:38.688 [2024-04-24 05:11:15.708227] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:38.688 [2024-04-24 05:11:15.708236] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:38.688 [2024-04-24 05:11:15.708245] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:38.688 [2024-04-24 05:11:15.708252] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:38.688 [2024-04-24 05:11:15.708260] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:38.688 [2024-04-24 05:11:15.708287] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:38.688 [2024-04-24 05:11:15.708305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:38.688 [2024-04-24 05:11:15.708324] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:38.688 [2024-04-24 05:11:15.708336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:38.688 [2024-04-24 05:11:15.708352] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:38.688 [2024-04-24 05:11:15.708364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:38.688 [2024-04-24 05:11:15.708379] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:38.688 [2024-04-24 05:11:15.708391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:38.688 [2024-04-24 05:11:15.708408] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:38.688 [2024-04-24 05:11:15.708417] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:38.688 [2024-04-24 05:11:15.708423] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:38.688 [2024-04-24 05:11:15.708429] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:38.688 [2024-04-24 05:11:15.708438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:38.688 [2024-04-24 05:11:15.708449] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:38.688 [2024-04-24 05:11:15.708457] 
nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:38.688 [2024-04-24 05:11:15.708466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:38.688 [2024-04-24 05:11:15.708476] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:38.688 [2024-04-24 05:11:15.708484] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:38.688 [2024-04-24 05:11:15.708493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:38.688 [2024-04-24 05:11:15.708504] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:38.688 [2024-04-24 05:11:15.708515] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:38.688 [2024-04-24 05:11:15.708524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:38.688 [2024-04-24 05:11:15.708536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:38.688 [2024-04-24 05:11:15.708557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:38.688 [2024-04-24 05:11:15.708572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:38.688 [2024-04-24 05:11:15.708584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:38.688 ===================================================== 00:15:38.688 NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:38.688 ===================================================== 00:15:38.688 Controller Capabilities/Features 00:15:38.688 ================================ 00:15:38.688 Vendor ID: 4e58 00:15:38.688 Subsystem Vendor ID: 4e58 00:15:38.688 Serial Number: SPDK1 00:15:38.688 Model Number: SPDK bdev Controller 00:15:38.688 Firmware Version: 24.05 00:15:38.688 Recommended Arb Burst: 6 00:15:38.688 IEEE OUI Identifier: 8d 6b 50 00:15:38.688 Multi-path I/O 00:15:38.688 May have multiple subsystem ports: Yes 00:15:38.688 May have multiple controllers: Yes 00:15:38.688 Associated with SR-IOV VF: No 00:15:38.688 Max Data Transfer Size: 131072 00:15:38.688 Max Number of Namespaces: 32 00:15:38.688 Max Number of I/O Queues: 127 00:15:38.688 NVMe Specification Version (VS): 1.3 00:15:38.688 NVMe Specification Version (Identify): 1.3 00:15:38.688 Maximum Queue Entries: 256 00:15:38.688 Contiguous Queues Required: Yes 00:15:38.688 Arbitration Mechanisms Supported 00:15:38.688 Weighted Round Robin: Not Supported 00:15:38.688 Vendor Specific: Not Supported 00:15:38.688 Reset Timeout: 15000 ms 00:15:38.688 Doorbell Stride: 4 bytes 00:15:38.688 NVM Subsystem Reset: Not Supported 00:15:38.688 Command Sets Supported 00:15:38.688 NVM Command Set: Supported 00:15:38.688 Boot Partition: Not Supported 00:15:38.688 Memory Page Size Minimum: 4096 bytes 00:15:38.688 Memory Page Size Maximum: 4096 bytes 00:15:38.688 Persistent Memory Region: Not Supported 00:15:38.688 Optional Asynchronous Events Supported 00:15:38.688 Namespace Attribute Notices: Supported 00:15:38.688 Firmware Activation Notices: Not Supported 00:15:38.688 ANA Change Notices: Not Supported 00:15:38.688 PLE Aggregate Log Change Notices: Not Supported 00:15:38.688 LBA Status Info Alert Notices: Not Supported 00:15:38.688 EGE Aggregate Log Change Notices: Not Supported 00:15:38.688 Normal NVM Subsystem Shutdown event: Not Supported 00:15:38.688 Zone Descriptor 
Change Notices: Not Supported 00:15:38.688 Discovery Log Change Notices: Not Supported 00:15:38.688 Controller Attributes 00:15:38.688 128-bit Host Identifier: Supported 00:15:38.688 Non-Operational Permissive Mode: Not Supported 00:15:38.688 NVM Sets: Not Supported 00:15:38.688 Read Recovery Levels: Not Supported 00:15:38.688 Endurance Groups: Not Supported 00:15:38.688 Predictable Latency Mode: Not Supported 00:15:38.688 Traffic Based Keep ALive: Not Supported 00:15:38.688 Namespace Granularity: Not Supported 00:15:38.688 SQ Associations: Not Supported 00:15:38.688 UUID List: Not Supported 00:15:38.688 Multi-Domain Subsystem: Not Supported 00:15:38.688 Fixed Capacity Management: Not Supported 00:15:38.688 Variable Capacity Management: Not Supported 00:15:38.688 Delete Endurance Group: Not Supported 00:15:38.688 Delete NVM Set: Not Supported 00:15:38.688 Extended LBA Formats Supported: Not Supported 00:15:38.688 Flexible Data Placement Supported: Not Supported 00:15:38.688 00:15:38.688 Controller Memory Buffer Support 00:15:38.688 ================================ 00:15:38.688 Supported: No 00:15:38.688 00:15:38.688 Persistent Memory Region Support 00:15:38.688 ================================ 00:15:38.688 Supported: No 00:15:38.688 00:15:38.688 Admin Command Set Attributes 00:15:38.688 ============================ 00:15:38.688 Security Send/Receive: Not Supported 00:15:38.688 Format NVM: Not Supported 00:15:38.688 Firmware Activate/Download: Not Supported 00:15:38.688 Namespace Management: Not Supported 00:15:38.688 Device Self-Test: Not Supported 00:15:38.688 Directives: Not Supported 00:15:38.688 NVMe-MI: Not Supported 00:15:38.688 Virtualization Management: Not Supported 00:15:38.688 Doorbell Buffer Config: Not Supported 00:15:38.688 Get LBA Status Capability: Not Supported 00:15:38.688 Command & Feature Lockdown Capability: Not Supported 00:15:38.688 Abort Command Limit: 4 00:15:38.688 Async Event Request Limit: 4 00:15:38.688 Number of Firmware Slots: N/A 
00:15:38.688 Firmware Slot 1 Read-Only: N/A 00:15:38.688 Firmware Activation Without Reset: N/A 00:15:38.688 Multiple Update Detection Support: N/A 00:15:38.688 Firmware Update Granularity: No Information Provided 00:15:38.688 Per-Namespace SMART Log: No 00:15:38.688 Asymmetric Namespace Access Log Page: Not Supported 00:15:38.688 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:38.688 Command Effects Log Page: Supported 00:15:38.688 Get Log Page Extended Data: Supported 00:15:38.688 Telemetry Log Pages: Not Supported 00:15:38.688 Persistent Event Log Pages: Not Supported 00:15:38.688 Supported Log Pages Log Page: May Support 00:15:38.688 Commands Supported & Effects Log Page: Not Supported 00:15:38.688 Feature Identifiers & Effects Log Page:May Support 00:15:38.688 NVMe-MI Commands & Effects Log Page: May Support 00:15:38.688 Data Area 4 for Telemetry Log: Not Supported 00:15:38.688 Error Log Page Entries Supported: 128 00:15:38.689 Keep Alive: Supported 00:15:38.689 Keep Alive Granularity: 10000 ms 00:15:38.689 00:15:38.689 NVM Command Set Attributes 00:15:38.689 ========================== 00:15:38.689 Submission Queue Entry Size 00:15:38.689 Max: 64 00:15:38.689 Min: 64 00:15:38.689 Completion Queue Entry Size 00:15:38.689 Max: 16 00:15:38.689 Min: 16 00:15:38.689 Number of Namespaces: 32 00:15:38.689 Compare Command: Supported 00:15:38.689 Write Uncorrectable Command: Not Supported 00:15:38.689 Dataset Management Command: Supported 00:15:38.689 Write Zeroes Command: Supported 00:15:38.689 Set Features Save Field: Not Supported 00:15:38.689 Reservations: Not Supported 00:15:38.689 Timestamp: Not Supported 00:15:38.689 Copy: Supported 00:15:38.689 Volatile Write Cache: Present 00:15:38.689 Atomic Write Unit (Normal): 1 00:15:38.689 Atomic Write Unit (PFail): 1 00:15:38.689 Atomic Compare & Write Unit: 1 00:15:38.689 Fused Compare & Write: Supported 00:15:38.689 Scatter-Gather List 00:15:38.689 SGL Command Set: Supported (Dword aligned) 00:15:38.689 SGL Keyed: Not 
Supported 00:15:38.689 SGL Bit Bucket Descriptor: Not Supported 00:15:38.689 SGL Metadata Pointer: Not Supported 00:15:38.689 Oversized SGL: Not Supported 00:15:38.689 SGL Metadata Address: Not Supported 00:15:38.689 SGL Offset: Not Supported 00:15:38.689 Transport SGL Data Block: Not Supported 00:15:38.689 Replay Protected Memory Block: Not Supported 00:15:38.689 00:15:38.689 Firmware Slot Information 00:15:38.689 ========================= 00:15:38.689 Active slot: 1 00:15:38.689 Slot 1 Firmware Revision: 24.05 00:15:38.689 00:15:38.689 00:15:38.689 Commands Supported and Effects 00:15:38.689 ============================== 00:15:38.689 Admin Commands 00:15:38.689 -------------- 00:15:38.689 Get Log Page (02h): Supported 00:15:38.689 Identify (06h): Supported 00:15:38.689 Abort (08h): Supported 00:15:38.689 Set Features (09h): Supported 00:15:38.689 Get Features (0Ah): Supported 00:15:38.689 Asynchronous Event Request (0Ch): Supported 00:15:38.689 Keep Alive (18h): Supported 00:15:38.689 I/O Commands 00:15:38.689 ------------ 00:15:38.689 Flush (00h): Supported LBA-Change 00:15:38.689 Write (01h): Supported LBA-Change 00:15:38.689 Read (02h): Supported 00:15:38.689 Compare (05h): Supported 00:15:38.689 Write Zeroes (08h): Supported LBA-Change 00:15:38.689 Dataset Management (09h): Supported LBA-Change 00:15:38.689 Copy (19h): Supported LBA-Change 00:15:38.689 Unknown (79h): Supported LBA-Change 00:15:38.689 Unknown (7Ah): Supported 00:15:38.689 00:15:38.689 Error Log 00:15:38.689 ========= 00:15:38.689 00:15:38.689 Arbitration 00:15:38.689 =========== 00:15:38.689 Arbitration Burst: 1 00:15:38.689 00:15:38.689 Power Management 00:15:38.689 ================ 00:15:38.689 Number of Power States: 1 00:15:38.689 Current Power State: Power State #0 00:15:38.689 Power State #0: 00:15:38.689 Max Power: 0.00 W 00:15:38.689 Non-Operational State: Operational 00:15:38.689 Entry Latency: Not Reported 00:15:38.689 Exit Latency: Not Reported 00:15:38.689 Relative Read 
Throughput: 0 00:15:38.689 Relative Read Latency: 0 00:15:38.689 Relative Write Throughput: 0 00:15:38.689 Relative Write Latency: 0 00:15:38.689 Idle Power: Not Reported 00:15:38.689 Active Power: Not Reported 00:15:38.689 Non-Operational Permissive Mode: Not Supported 00:15:38.689 00:15:38.689 Health Information 00:15:38.689 ================== 00:15:38.689 Critical Warnings: 00:15:38.689 Available Spare Space: OK 00:15:38.689 Temperature: OK 00:15:38.689 Device Reliability: OK 00:15:38.689 Read Only: No 00:15:38.689 Volatile Memory Backup: OK 00:15:38.689 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:38.689 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:38.689 Available Spare: 0% 00:15:38.689 Available Spare Threshold: 0% 00:15:38.689 Life Percentage Used: 0% 00:15:38.689 Data Units Read: 0 00:15:38.689 Data Units Written: 0 00:15:38.689 Host Read Commands: 0 00:15:38.689 Host Write Commands: 0 00:15:38.689 Controller Busy Time: 0 minutes 00:15:38.689 Power Cycles: 0 00:15:38.689 Power On Hours: 0 hours 00:15:38.689 Unsafe Shutdowns: 0 00:15:38.689 Unrecoverable Media Errors: 0 00:15:38.689 Lifetime Error Log Entries: 0 00:15:38.689 Warning Temperature Time: 0 minutes 00:15:38.689 Critical Temperature Time: 0 minutes 00:15:38.689
[2024-04-24 05:11:15.708750] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:38.689 [2024-04-24 05:11:15.708768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:38.689 [2024-04-24 05:11:15.708809] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:38.689 [2024-04-24 05:11:15.708829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.689 [2024-04-24 05:11:15.708841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.689 [2024-04-24 05:11:15.708851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.689 [2024-04-24 05:11:15.708862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.689 [2024-04-24 05:11:15.712639] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:38.689 [2024-04-24 05:11:15.712663] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:38.689 [2024-04-24 05:11:15.713377] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:38.689 [2024-04-24 05:11:15.713444] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:38.689 [2024-04-24 05:11:15.713458] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:38.689 [2024-04-24 05:11:15.714392] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:38.689 [2024-04-24 05:11:15.714415] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:15:38.689 [2024-04-24 05:11:15.714473] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:38.689 [2024-04-24 05:11:15.716429] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:38.689 00:15:38.689 Number of Queues 00:15:38.689 ================
00:15:38.689 Number of I/O Submission Queues: 127 00:15:38.689 Number of I/O Completion Queues: 127 00:15:38.689 00:15:38.689 Active Namespaces 00:15:38.689 ================= 00:15:38.689 Namespace ID:1 00:15:38.689 Error Recovery Timeout: Unlimited 00:15:38.689 Command Set Identifier: NVM (00h) 00:15:38.689 Deallocate: Supported 00:15:38.689 Deallocated/Unwritten Error: Not Supported 00:15:38.689 Deallocated Read Value: Unknown 00:15:38.689 Deallocate in Write Zeroes: Not Supported 00:15:38.689 Deallocated Guard Field: 0xFFFF 00:15:38.689 Flush: Supported 00:15:38.689 Reservation: Supported 00:15:38.689 Namespace Sharing Capabilities: Multiple Controllers 00:15:38.689 Size (in LBAs): 131072 (0GiB) 00:15:38.689 Capacity (in LBAs): 131072 (0GiB) 00:15:38.689 Utilization (in LBAs): 131072 (0GiB) 00:15:38.689 NGUID: 5F5F9CD7F7214282BA85FF30F3F28AC6 00:15:38.689 UUID: 5f5f9cd7-f721-4282-ba85-ff30f3f28ac6 00:15:38.689 Thin Provisioning: Not Supported 00:15:38.689 Per-NS Atomic Units: Yes 00:15:38.689 Atomic Boundary Size (Normal): 0 00:15:38.689 Atomic Boundary Size (PFail): 0 00:15:38.689 Atomic Boundary Offset: 0 00:15:38.689 Maximum Single Source Range Length: 65535 00:15:38.689 Maximum Copy Length: 65535 00:15:38.689 Maximum Source Range Count: 1 00:15:38.689 NGUID/EUI64 Never Reused: No 00:15:38.689 Namespace Write Protected: No 00:15:38.689 Number of LBA Formats: 1 00:15:38.689 Current LBA Format: LBA Format #00 00:15:38.689 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:38.689 00:15:38.689 05:11:15 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:38.689 EAL: No free 2048 kB hugepages reported on node 1 00:15:38.689 [2024-04-24 05:11:15.945454] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 
00:15:43.968 [2024-04-24 05:11:20.968373] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:43.968 Initializing NVMe Controllers 00:15:43.968 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:43.968 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:43.968 Initialization complete. Launching workers. 00:15:43.968 ======================================================== 00:15:43.968 Latency(us) 00:15:43.968 Device Information : IOPS MiB/s Average min max 00:15:43.968 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 33825.39 132.13 3784.08 1171.10 7681.00 00:15:43.968 ======================================================== 00:15:43.968 Total : 33825.39 132.13 3784.08 1171.10 7681.00 00:15:43.968 00:15:43.968 05:11:21 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:43.968 EAL: No free 2048 kB hugepages reported on node 1 00:15:43.968 [2024-04-24 05:11:21.200487] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:49.238 [2024-04-24 05:11:26.237780] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:49.238 Initializing NVMe Controllers 00:15:49.238 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:49.238 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:49.238 Initialization complete. Launching workers. 
00:15:49.238 ======================================================== 00:15:49.238 Latency(us) 00:15:49.238 Device Information : IOPS MiB/s Average min max 00:15:49.238 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7984.47 4000.14 11964.12 00:15:49.238 ======================================================== 00:15:49.238 Total : 16051.20 62.70 7984.47 4000.14 11964.12 00:15:49.238 00:15:49.238 05:11:26 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:49.238 EAL: No free 2048 kB hugepages reported on node 1 00:15:49.238 [2024-04-24 05:11:26.449810] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:54.520 [2024-04-24 05:11:31.505897] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:54.520 Initializing NVMe Controllers 00:15:54.520 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:54.520 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:54.520 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:54.520 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:54.520 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:54.520 Initialization complete. Launching workers. 
00:15:54.520 Starting thread on core 2 00:15:54.520 Starting thread on core 3 00:15:54.520 Starting thread on core 1 00:15:54.520 05:11:31 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:54.520 EAL: No free 2048 kB hugepages reported on node 1 00:15:54.780 [2024-04-24 05:11:31.804111] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:58.974 [2024-04-24 05:11:35.502277] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:58.974 Initializing NVMe Controllers 00:15:58.974 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:58.974 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:58.974 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:58.974 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:58.974 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:58.974 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:58.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:58.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:58.974 Initialization complete. Launching workers. 
00:15:58.974 Starting thread on core 1 with urgent priority queue 00:15:58.974 Starting thread on core 2 with urgent priority queue 00:15:58.974 Starting thread on core 3 with urgent priority queue 00:15:58.974 Starting thread on core 0 with urgent priority queue 00:15:58.974 SPDK bdev Controller (SPDK1 ) core 0: 4030.00 IO/s 24.81 secs/100000 ios 00:15:58.974 SPDK bdev Controller (SPDK1 ) core 1: 4746.67 IO/s 21.07 secs/100000 ios 00:15:58.974 SPDK bdev Controller (SPDK1 ) core 2: 4731.00 IO/s 21.14 secs/100000 ios 00:15:58.974 SPDK bdev Controller (SPDK1 ) core 3: 4549.67 IO/s 21.98 secs/100000 ios 00:15:58.974 ======================================================== 00:15:58.974 00:15:58.974 05:11:35 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:58.974 EAL: No free 2048 kB hugepages reported on node 1 00:15:58.974 [2024-04-24 05:11:35.793151] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:58.974 [2024-04-24 05:11:35.825702] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:58.974 Initializing NVMe Controllers 00:15:58.974 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:58.974 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:58.974 Namespace ID: 1 size: 0GB 00:15:58.974 Initialization complete. 00:15:58.974 INFO: using host memory buffer for IO 00:15:58.974 Hello world! 
00:15:58.974 05:11:35 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:58.974 EAL: No free 2048 kB hugepages reported on node 1 00:15:58.974 [2024-04-24 05:11:36.102932] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:59.913 Initializing NVMe Controllers 00:15:59.913 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:59.913 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:59.913 Initialization complete. Launching workers. 00:15:59.913 submit (in ns) avg, min, max = 7128.0, 3474.4, 4014608.9 00:15:59.913 complete (in ns) avg, min, max = 25684.8, 2040.0, 5996317.8 00:15:59.913 00:15:59.913 Submit histogram 00:15:59.913 ================ 00:15:59.913 Range in us Cumulative Count 00:15:59.913 3.461 - 3.484: 0.0298% ( 4) 00:15:59.913 3.484 - 3.508: 0.3944% ( 49) 00:15:59.913 3.508 - 3.532: 1.5702% ( 158) 00:15:59.913 3.532 - 3.556: 4.3310% ( 371) 00:15:59.913 3.556 - 3.579: 11.6907% ( 989) 00:15:59.913 3.579 - 3.603: 20.3602% ( 1165) 00:15:59.913 3.603 - 3.627: 31.6491% ( 1517) 00:15:59.913 3.627 - 3.650: 42.1119% ( 1406) 00:15:59.913 3.650 - 3.674: 51.2204% ( 1224) 00:15:59.913 3.674 - 3.698: 57.9402% ( 903) 00:15:59.913 3.698 - 3.721: 63.0302% ( 684) 00:15:59.913 3.721 - 3.745: 67.2719% ( 570) 00:15:59.913 3.745 - 3.769: 70.7992% ( 474) 00:15:59.913 3.769 - 3.793: 73.9991% ( 430) 00:15:59.913 3.793 - 3.816: 76.6409% ( 355) 00:15:59.913 3.816 - 3.840: 79.4761% ( 381) 00:15:59.913 3.840 - 3.864: 82.9588% ( 468) 00:15:59.913 3.864 - 3.887: 85.7717% ( 378) 00:15:59.913 3.887 - 3.911: 88.1307% ( 317) 00:15:59.913 3.911 - 3.935: 89.8348% ( 229) 00:15:59.913 3.935 - 3.959: 91.1520% ( 177) 00:15:59.913 3.959 - 3.982: 92.7445% ( 214) 00:15:59.913 3.982 - 4.006: 93.8979% ( 155) 00:15:59.913 4.006 - 4.030: 
94.5304% ( 85) 00:15:59.913 4.030 - 4.053: 95.2151% ( 92) 00:15:59.913 4.053 - 4.077: 95.8402% ( 84) 00:15:59.913 4.077 - 4.101: 96.3536% ( 69) 00:15:59.913 4.101 - 4.124: 96.6662% ( 42) 00:15:59.913 4.124 - 4.148: 96.8373% ( 23) 00:15:59.913 4.148 - 4.172: 96.9787% ( 19) 00:15:59.913 4.172 - 4.196: 97.0829% ( 14) 00:15:59.913 4.196 - 4.219: 97.1499% ( 9) 00:15:59.913 4.219 - 4.243: 97.2466% ( 13) 00:15:59.913 4.243 - 4.267: 97.3806% ( 18) 00:15:59.913 4.267 - 4.290: 97.4996% ( 16) 00:15:59.913 4.290 - 4.314: 97.5740% ( 10) 00:15:59.913 4.314 - 4.338: 97.6559% ( 11) 00:15:59.913 4.338 - 4.361: 97.7080% ( 7) 00:15:59.913 4.361 - 4.385: 97.7601% ( 7) 00:15:59.913 4.385 - 4.409: 97.7824% ( 3) 00:15:59.913 4.409 - 4.433: 97.7973% ( 2) 00:15:59.913 4.433 - 4.456: 97.8047% ( 1) 00:15:59.913 4.456 - 4.480: 97.8271% ( 3) 00:15:59.913 4.504 - 4.527: 97.8345% ( 1) 00:15:59.913 4.527 - 4.551: 97.8494% ( 2) 00:15:59.913 4.551 - 4.575: 97.8643% ( 2) 00:15:59.913 4.599 - 4.622: 97.8791% ( 2) 00:15:59.913 4.622 - 4.646: 97.9164% ( 5) 00:15:59.913 4.646 - 4.670: 97.9536% ( 5) 00:15:59.913 4.670 - 4.693: 97.9759% ( 3) 00:15:59.913 4.693 - 4.717: 97.9908% ( 2) 00:15:59.913 4.717 - 4.741: 98.0057% ( 2) 00:15:59.913 4.741 - 4.764: 98.0354% ( 4) 00:15:59.913 4.764 - 4.788: 98.0875% ( 7) 00:15:59.913 4.788 - 4.812: 98.1173% ( 4) 00:15:59.913 4.812 - 4.836: 98.1768% ( 8) 00:15:59.913 4.836 - 4.859: 98.2438% ( 9) 00:15:59.913 4.859 - 4.883: 98.3108% ( 9) 00:15:59.913 4.883 - 4.907: 98.3629% ( 7) 00:15:59.913 4.907 - 4.930: 98.3703% ( 1) 00:15:59.913 4.930 - 4.954: 98.4075% ( 5) 00:15:59.913 4.954 - 4.978: 98.4447% ( 5) 00:15:59.913 4.978 - 5.001: 98.4819% ( 5) 00:15:59.913 5.001 - 5.025: 98.4968% ( 2) 00:15:59.913 5.025 - 5.049: 98.5117% ( 2) 00:15:59.913 5.049 - 5.073: 98.5414% ( 4) 00:15:59.913 5.096 - 5.120: 98.5638% ( 3) 00:15:59.913 5.120 - 5.144: 98.5861% ( 3) 00:15:59.913 5.144 - 5.167: 98.6084% ( 3) 00:15:59.913 5.167 - 5.191: 98.6233% ( 2) 00:15:59.913 5.215 - 5.239: 98.6382% ( 
2) 00:15:59.913 5.310 - 5.333: 98.6456% ( 1) 00:15:59.913 5.333 - 5.357: 98.6531% ( 1) 00:15:59.913 5.357 - 5.381: 98.6605% ( 1) 00:15:59.913 5.404 - 5.428: 98.6754% ( 2) 00:15:59.913 5.428 - 5.452: 98.6828% ( 1) 00:15:59.913 5.665 - 5.689: 98.6903% ( 1) 00:15:59.913 6.258 - 6.305: 98.6977% ( 1) 00:15:59.913 6.353 - 6.400: 98.7052% ( 1) 00:15:59.913 6.637 - 6.684: 98.7200% ( 2) 00:15:59.913 6.874 - 6.921: 98.7498% ( 4) 00:15:59.913 6.969 - 7.016: 98.7573% ( 1) 00:15:59.913 7.016 - 7.064: 98.7721% ( 2) 00:15:59.913 7.064 - 7.111: 98.7796% ( 1) 00:15:59.913 7.159 - 7.206: 98.7870% ( 1) 00:15:59.913 7.206 - 7.253: 98.7945% ( 1) 00:15:59.913 7.253 - 7.301: 98.8019% ( 1) 00:15:59.913 7.396 - 7.443: 98.8093% ( 1) 00:15:59.913 7.490 - 7.538: 98.8168% ( 1) 00:15:59.913 7.538 - 7.585: 98.8242% ( 1) 00:15:59.913 7.585 - 7.633: 98.8391% ( 2) 00:15:59.913 7.633 - 7.680: 98.8466% ( 1) 00:15:59.913 7.775 - 7.822: 98.8540% ( 1) 00:15:59.913 7.964 - 8.012: 98.8689% ( 2) 00:15:59.913 8.012 - 8.059: 98.8763% ( 1) 00:15:59.913 8.059 - 8.107: 98.8986% ( 3) 00:15:59.913 8.107 - 8.154: 98.9061% ( 1) 00:15:59.913 8.154 - 8.201: 98.9135% ( 1) 00:15:59.913 8.344 - 8.391: 98.9210% ( 1) 00:15:59.913 8.439 - 8.486: 98.9284% ( 1) 00:15:59.913 8.486 - 8.533: 98.9359% ( 1) 00:15:59.913 8.533 - 8.581: 98.9433% ( 1) 00:15:59.913 8.676 - 8.723: 98.9507% ( 1) 00:15:59.913 8.723 - 8.770: 98.9582% ( 1) 00:15:59.913 8.770 - 8.818: 98.9656% ( 1) 00:15:59.913 8.865 - 8.913: 98.9731% ( 1) 00:15:59.913 8.913 - 8.960: 98.9805% ( 1) 00:15:59.913 9.102 - 9.150: 98.9879% ( 1) 00:15:59.913 9.197 - 9.244: 98.9954% ( 1) 00:15:59.913 9.244 - 9.292: 99.0028% ( 1) 00:15:59.913 9.387 - 9.434: 99.0103% ( 1) 00:15:59.914 9.434 - 9.481: 99.0177% ( 1) 00:15:59.914 9.481 - 9.529: 99.0252% ( 1) 00:15:59.914 9.529 - 9.576: 99.0326% ( 1) 00:15:59.914 9.719 - 9.766: 99.0400% ( 1) 00:15:59.914 9.861 - 9.908: 99.0475% ( 1) 00:15:59.914 10.050 - 10.098: 99.0549% ( 1) 00:15:59.914 10.287 - 10.335: 99.0624% ( 1) 00:15:59.914 
10.335 - 10.382: 99.0698% ( 1) 00:15:59.914 10.856 - 10.904: 99.0847% ( 2) 00:15:59.914 11.236 - 11.283: 99.0921% ( 1) 00:15:59.914 11.283 - 11.330: 99.0996% ( 1) 00:15:59.914 11.710 - 11.757: 99.1070% ( 1) 00:15:59.914 12.136 - 12.231: 99.1145% ( 1) 00:15:59.914 12.231 - 12.326: 99.1219% ( 1) 00:15:59.914 12.705 - 12.800: 99.1293% ( 1) 00:15:59.914 12.990 - 13.084: 99.1368% ( 1) 00:15:59.914 13.084 - 13.179: 99.1442% ( 1) 00:15:59.914 13.274 - 13.369: 99.1517% ( 1) 00:15:59.914 13.369 - 13.464: 99.1591% ( 1) 00:15:59.914 13.843 - 13.938: 99.1665% ( 1) 00:15:59.914 13.938 - 14.033: 99.1814% ( 2) 00:15:59.914 14.222 - 14.317: 99.1889% ( 1) 00:15:59.914 14.412 - 14.507: 99.1963% ( 1) 00:15:59.914 14.696 - 14.791: 99.2038% ( 1) 00:15:59.914 16.877 - 16.972: 99.2112% ( 1) 00:15:59.914 16.972 - 17.067: 99.2186% ( 1) 00:15:59.914 17.161 - 17.256: 99.2261% ( 1) 00:15:59.914 17.256 - 17.351: 99.2558% ( 4) 00:15:59.914 17.351 - 17.446: 99.2633% ( 1) 00:15:59.914 17.446 - 17.541: 99.3079% ( 6) 00:15:59.914 17.541 - 17.636: 99.3526% ( 6) 00:15:59.914 17.636 - 17.730: 99.3675% ( 2) 00:15:59.914 17.730 - 17.825: 99.4121% ( 6) 00:15:59.914 17.825 - 17.920: 99.4568% ( 6) 00:15:59.914 17.920 - 18.015: 99.5014% ( 6) 00:15:59.914 18.015 - 18.110: 99.5461% ( 6) 00:15:59.914 18.110 - 18.204: 99.5758% ( 4) 00:15:59.914 18.204 - 18.299: 99.6205% ( 6) 00:15:59.914 18.299 - 18.394: 99.6800% ( 8) 00:15:59.914 18.394 - 18.489: 99.7023% ( 3) 00:15:59.914 18.489 - 18.584: 99.7619% ( 8) 00:15:59.914 18.584 - 18.679: 99.7842% ( 3) 00:15:59.914 18.679 - 18.773: 99.8437% ( 8) 00:15:59.914 18.773 - 18.868: 99.8586% ( 2) 00:15:59.914 18.868 - 18.963: 99.8661% ( 1) 00:15:59.914 18.963 - 19.058: 99.8809% ( 2) 00:15:59.914 19.153 - 19.247: 99.8958% ( 2) 00:15:59.914 19.627 - 19.721: 99.9033% ( 1) 00:15:59.914 22.661 - 22.756: 99.9107% ( 1) 00:15:59.914 23.230 - 23.324: 99.9181% ( 1) 00:15:59.914 3980.705 - 4004.978: 99.9926% ( 10) 00:15:59.914 4004.978 - 4029.250: 100.0000% ( 1) 00:15:59.914 
00:15:59.914 Complete histogram 00:15:59.914 ================== 00:15:59.914 Range in us Cumulative Count 00:15:59.914 2.039 - 2.050: 4.8966% ( 658) 00:15:59.914 2.050 - 2.062: 12.2563% ( 989) 00:15:59.914 2.062 - 2.074: 14.2209% ( 264) 00:15:59.914 2.074 - 2.086: 46.2346% ( 4302) 00:15:59.914 2.086 - 2.098: 62.6358% ( 2204) 00:15:59.914 2.098 - 2.110: 64.8757% ( 301) 00:15:59.914 2.110 - 2.121: 68.3137% ( 462) 00:15:59.914 2.121 - 2.133: 69.7276% ( 190) 00:15:59.914 2.133 - 2.145: 72.2057% ( 333) 00:15:59.914 2.145 - 2.157: 84.0378% ( 1590) 00:15:59.914 2.157 - 2.169: 89.6562% ( 755) 00:15:59.914 2.169 - 2.181: 90.6683% ( 136) 00:15:59.914 2.181 - 2.193: 91.4719% ( 108) 00:15:59.914 2.193 - 2.204: 92.2608% ( 106) 00:15:59.914 2.204 - 2.216: 92.8710% ( 82) 00:15:59.914 2.216 - 2.228: 94.0170% ( 154) 00:15:59.914 2.228 - 2.240: 95.2151% ( 161) 00:15:59.914 2.240 - 2.252: 95.7211% ( 68) 00:15:59.914 2.252 - 2.264: 95.8327% ( 15) 00:15:59.914 2.264 - 2.276: 95.9146% ( 11) 00:15:59.914 2.276 - 2.287: 95.9815% ( 9) 00:15:59.914 2.287 - 2.299: 96.0411% ( 8) 00:15:59.914 2.299 - 2.311: 96.1974% ( 21) 00:15:59.914 2.311 - 2.323: 96.3239% ( 17) 00:15:59.914 2.323 - 2.335: 96.3759% ( 7) 00:15:59.914 2.335 - 2.347: 96.4578% ( 11) 00:15:59.914 2.347 - 2.359: 96.5918% ( 18) 00:15:59.914 2.359 - 2.370: 96.8299% ( 32) 00:15:59.914 2.370 - 2.382: 97.0755% ( 33) 00:15:59.914 2.382 - 2.394: 97.3731% ( 40) 00:15:59.914 2.394 - 2.406: 97.7526% ( 51) 00:15:59.914 2.406 - 2.418: 97.9610% ( 28) 00:15:59.914 2.418 - 2.430: 98.1470% ( 25) 00:15:59.914 2.430 - 2.441: 98.2661% ( 16) 00:15:59.914 2.441 - 2.453: 98.3554% ( 12) 00:15:59.914 2.453 - 2.465: 98.4373% ( 11) 00:15:59.914 2.465 - 2.477: 98.4894% ( 7) 00:15:59.914 2.477 - 2.489: 98.5042% ( 2) 00:15:59.914 2.501 - 2.513: 98.5117% ( 1) 00:15:59.914 2.524 - 2.536: 98.5266% ( 2) 00:15:59.914 2.536 - 2.548: 98.5489% ( 3) 00:15:59.914 2.596 - 2.607: 98.5787% ( 4) 00:15:59.914 2.607 - 2.619: 98.5861% ( 1) 00:15:59.914 2.619 - 2.631: 98.5935% 
( 1) 00:15:59.914 2.667 - 2.679: 98.6010% ( 1) 00:15:59.914 2.679 - 2.690: 98.6084% ( 1) 00:15:59.914 2.738 - 2.750: 98.6159% ( 1) 00:15:59.914 2.868 - 2.880: 98.6233% ( 1) 00:15:59.914 2.951 - 2.963: 98.6307% ( 1) 00:15:59.914 3.224 - 3.247: 98.6382% ( 1) 00:15:59.914 3.319 - 3.342: 98.6456% ( 1) 00:15:59.914 3.366 - 3.390: 98.6605% ( 2) 00:15:59.914 3.390 - 3.413: 98.6754% ( 2) 00:15:59.914 3.413 - 3.437: 98.6903% ( 2) 00:15:59.914 3.461 - 3.484: 98.6977% ( 1) 00:15:59.914 3.484 - 3.508: 98.7052% ( 1) 00:15:59.914 3.556 - 3.579: 98.7200% ( 2) 00:15:59.914 3.579 - 3.603: 98.7275% ( 1) 00:15:59.914 3.603 - 3.627: 98.7424% ( 2) 00:15:59.914 3.627 - 3.650: 98.7498% ( 1) 00:15:59.914 3.745 - 3.769: 98.7573% ( 1) 00:15:59.914 3.816 - 3.840: 98.7647% ( 1) 00:15:59.914 3.840 - 3.864: 98.7721% ( 1) 00:15:59.914 3.911 - 3.935: 98.7796% ( 1) 00:15:59.914 4.053 - 4.077: 98.7870% ( 1) 00:15:59.914 4.124 - 4.148: 98.7945% ( 1) 00:15:59.914 5.167 - 5.191: 98.8019% ( 1) 00:15:59.914 5.215 - 5.239: 98.8093% ( 1) 00:15:59.914 5.286 - 5.310: 98.8168% ( 1) 00:15:59.914 5.973 - 5.997: 98.8242% ( 1) 00:15:59.914 6.210 - 6.258: 98.8391% ( 2) 00:15:59.914 6.305 - 6.353: 98.8466% ( 1) 00:15:59.914 7.064 - 7.111: 98.8540% ( 1) 00:15:59.914 7.111 - 7.159: 98.8614% ( 1) 00:15:59.914 7.206 - 7.253: 98.8689% ( 1) 00:15:59.914 7.253 - 7.301: 98.8763% ( 1) 00:15:59.914 7.301 - 7.348: 98.8838% ( 1) 00:15:59.914 7.443 - 7.490: 98.8912% ( 1) 00:15:59.914 7.490 - 7.538: 98.8986% ( 1) 00:15:59.914 7.680 - 7.727: 98.9061% ( 1) 00:15:59.914 8.201 - 8.249: 98.9135% ( 1) 00:15:59.914 8.818 - 8.865: 98.9210% ( 1) 00:15:59.914 10.240 - 10.287: 98.9284% ( 1) 00:15:59.914 15.360 - 15.455: 98.9359% ( 1) 00:15:59.914 15.550 - 15.644: 98.9507% ( 2) 00:15:59.914 15.644 - 15.739: 98.9656% ( 2) 00:15:59.914 15.739 - 15.834: 98.9731% ( 1) 00:15:59.914 15.834 - 15.929: 98.9954% ( 3) 00:15:59.914 15.929 - 16.024: 99.0103% ( 2)
[2024-04-24 05:11:37.124056] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:59.914 16.024 - 16.119: 99.0326% ( 3) 00:15:59.914 16.119 - 16.213: 99.0475% ( 2) 00:15:59.914 16.213 - 16.308: 99.0624% ( 2) 00:15:59.914 16.308 - 16.403: 99.0847% ( 3) 00:15:59.914 16.403 - 16.498: 99.0996% ( 2) 00:15:59.914 16.498 - 16.593: 99.1219% ( 3) 00:15:59.914 16.593 - 16.687: 99.1442% ( 3) 00:15:59.914 16.687 - 16.782: 99.1740% ( 4) 00:15:59.914 16.782 - 16.877: 99.2186% ( 6) 00:15:59.914 16.877 - 16.972: 99.2410% ( 3) 00:15:59.914 16.972 - 17.067: 99.2856% ( 6) 00:15:59.914 17.067 - 17.161: 99.3005% ( 2) 00:15:59.914 17.161 - 17.256: 99.3079% ( 1) 00:15:59.914 17.256 - 17.351: 99.3303% ( 3) 00:15:59.914 17.446 - 17.541: 99.3377% ( 1) 00:15:59.914 17.825 - 17.920: 99.3451% ( 1) 00:15:59.914 17.920 - 18.015: 99.3526% ( 1) 00:15:59.914 18.015 - 18.110: 99.3600% ( 1) 00:15:59.914 18.204 - 18.299: 99.3675% ( 1) 00:15:59.914 18.394 - 18.489: 99.3749% ( 1) 00:15:59.914 18.489 - 18.584: 99.3898% ( 2) 00:15:59.914 18.679 - 18.773: 99.3972% ( 1) 00:15:59.914 18.773 - 18.868: 99.4047% ( 1) 00:15:59.915 25.410 - 25.600: 99.4121% ( 1) 00:15:59.915 1577.719 - 1589.855: 99.4196% ( 1) 00:15:59.915 3980.705 - 4004.978: 99.8288% ( 55) 00:15:59.915 4004.978 - 4029.250: 99.9926% ( 22) 00:15:59.915 5995.330 - 6019.603: 100.0000% ( 1) 00:15:59.915 00:15:59.915 05:11:37 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:59.915 05:11:37 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:59.915 05:11:37 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:59.915 05:11:37 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:59.915 05:11:37 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:00.174 [2024-04-24 05:11:37.402520] nvmf_rpc.c: 
275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:16:00.174 [ 00:16:00.174 { 00:16:00.174 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:00.174 "subtype": "Discovery", 00:16:00.174 "listen_addresses": [], 00:16:00.174 "allow_any_host": true, 00:16:00.174 "hosts": [] 00:16:00.174 }, 00:16:00.174 { 00:16:00.174 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:00.174 "subtype": "NVMe", 00:16:00.174 "listen_addresses": [ 00:16:00.174 { 00:16:00.174 "transport": "VFIOUSER", 00:16:00.174 "trtype": "VFIOUSER", 00:16:00.174 "adrfam": "IPv4", 00:16:00.174 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:00.174 "trsvcid": "0" 00:16:00.174 } 00:16:00.174 ], 00:16:00.174 "allow_any_host": true, 00:16:00.174 "hosts": [], 00:16:00.174 "serial_number": "SPDK1", 00:16:00.174 "model_number": "SPDK bdev Controller", 00:16:00.174 "max_namespaces": 32, 00:16:00.174 "min_cntlid": 1, 00:16:00.174 "max_cntlid": 65519, 00:16:00.174 "namespaces": [ 00:16:00.174 { 00:16:00.174 "nsid": 1, 00:16:00.174 "bdev_name": "Malloc1", 00:16:00.174 "name": "Malloc1", 00:16:00.174 "nguid": "5F5F9CD7F7214282BA85FF30F3F28AC6", 00:16:00.174 "uuid": "5f5f9cd7-f721-4282-ba85-ff30f3f28ac6" 00:16:00.174 } 00:16:00.174 ] 00:16:00.174 }, 00:16:00.174 { 00:16:00.174 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:00.174 "subtype": "NVMe", 00:16:00.174 "listen_addresses": [ 00:16:00.174 { 00:16:00.174 "transport": "VFIOUSER", 00:16:00.174 "trtype": "VFIOUSER", 00:16:00.174 "adrfam": "IPv4", 00:16:00.174 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:00.174 "trsvcid": "0" 00:16:00.174 } 00:16:00.174 ], 00:16:00.174 "allow_any_host": true, 00:16:00.174 "hosts": [], 00:16:00.174 "serial_number": "SPDK2", 00:16:00.174 "model_number": "SPDK bdev Controller", 00:16:00.174 "max_namespaces": 32, 00:16:00.174 "min_cntlid": 1, 00:16:00.174 "max_cntlid": 65519, 00:16:00.174 "namespaces": [ 
00:16:00.174 { 00:16:00.174 "nsid": 1, 00:16:00.174 "bdev_name": "Malloc2", 00:16:00.174 "name": "Malloc2", 00:16:00.174 "nguid": "50618D0A3DF34C5BAC6DE3409BD90D5B", 00:16:00.174 "uuid": "50618d0a-3df3-4c5b-ac6d-e3409bd90d5b" 00:16:00.174 } 00:16:00.174 ] 00:16:00.174 } 00:16:00.174 ] 00:16:00.174 05:11:37 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:00.174 05:11:37 -- target/nvmf_vfio_user.sh@34 -- # aerpid=1859292 00:16:00.174 05:11:37 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:00.174 05:11:37 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:00.174 05:11:37 -- common/autotest_common.sh@1251 -- # local i=0 00:16:00.174 05:11:37 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:00.174 05:11:37 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:00.174 05:11:37 -- common/autotest_common.sh@1262 -- # return 0 00:16:00.174 05:11:37 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:00.174 05:11:37 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:00.433 EAL: No free 2048 kB hugepages reported on node 1 00:16:00.433 [2024-04-24 05:11:37.581104] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:00.433 Malloc3 00:16:00.433 05:11:37 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:16:00.690 [2024-04-24 05:11:37.927676] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:00.690 05:11:37 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:00.948 Asynchronous Event Request test 00:16:00.948 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:00.948 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:00.948 Registering asynchronous event callbacks... 00:16:00.948 Starting namespace attribute notice tests for all controllers... 00:16:00.948 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:00.948 aer_cb - Changed Namespace 00:16:00.948 Cleaning up... 
00:16:00.948 [ 00:16:00.948 { 00:16:00.948 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:00.948 "subtype": "Discovery", 00:16:00.948 "listen_addresses": [], 00:16:00.948 "allow_any_host": true, 00:16:00.948 "hosts": [] 00:16:00.948 }, 00:16:00.948 { 00:16:00.948 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:00.948 "subtype": "NVMe", 00:16:00.948 "listen_addresses": [ 00:16:00.948 { 00:16:00.948 "transport": "VFIOUSER", 00:16:00.948 "trtype": "VFIOUSER", 00:16:00.948 "adrfam": "IPv4", 00:16:00.948 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:00.948 "trsvcid": "0" 00:16:00.948 } 00:16:00.948 ], 00:16:00.948 "allow_any_host": true, 00:16:00.948 "hosts": [], 00:16:00.948 "serial_number": "SPDK1", 00:16:00.948 "model_number": "SPDK bdev Controller", 00:16:00.948 "max_namespaces": 32, 00:16:00.948 "min_cntlid": 1, 00:16:00.948 "max_cntlid": 65519, 00:16:00.948 "namespaces": [ 00:16:00.948 { 00:16:00.948 "nsid": 1, 00:16:00.948 "bdev_name": "Malloc1", 00:16:00.948 "name": "Malloc1", 00:16:00.948 "nguid": "5F5F9CD7F7214282BA85FF30F3F28AC6", 00:16:00.948 "uuid": "5f5f9cd7-f721-4282-ba85-ff30f3f28ac6" 00:16:00.948 }, 00:16:00.948 { 00:16:00.948 "nsid": 2, 00:16:00.948 "bdev_name": "Malloc3", 00:16:00.948 "name": "Malloc3", 00:16:00.948 "nguid": "47AC66A23657457BA731FE2ADB81BCF4", 00:16:00.948 "uuid": "47ac66a2-3657-457b-a731-fe2adb81bcf4" 00:16:00.948 } 00:16:00.948 ] 00:16:00.948 }, 00:16:00.948 { 00:16:00.948 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:00.948 "subtype": "NVMe", 00:16:00.948 "listen_addresses": [ 00:16:00.948 { 00:16:00.948 "transport": "VFIOUSER", 00:16:00.948 "trtype": "VFIOUSER", 00:16:00.948 "adrfam": "IPv4", 00:16:00.948 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:00.949 "trsvcid": "0" 00:16:00.949 } 00:16:00.949 ], 00:16:00.949 "allow_any_host": true, 00:16:00.949 "hosts": [], 00:16:00.949 "serial_number": "SPDK2", 00:16:00.949 "model_number": "SPDK bdev Controller", 00:16:00.949 "max_namespaces": 32, 00:16:00.949 
"min_cntlid": 1, 00:16:00.949 "max_cntlid": 65519, 00:16:00.949 "namespaces": [ 00:16:00.949 { 00:16:00.949 "nsid": 1, 00:16:00.949 "bdev_name": "Malloc2", 00:16:00.949 "name": "Malloc2", 00:16:00.949 "nguid": "50618D0A3DF34C5BAC6DE3409BD90D5B", 00:16:00.949 "uuid": "50618d0a-3df3-4c5b-ac6d-e3409bd90d5b" 00:16:00.949 } 00:16:00.949 ] 00:16:00.949 } 00:16:00.949 ] 00:16:00.949 05:11:38 -- target/nvmf_vfio_user.sh@44 -- # wait 1859292 00:16:00.949 05:11:38 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:00.949 05:11:38 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:00.949 05:11:38 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:16:00.949 05:11:38 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:00.949 [2024-04-24 05:11:38.203804] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:16:00.949 [2024-04-24 05:11:38.203846] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1859379 ] 00:16:00.949 EAL: No free 2048 kB hugepages reported on node 1 00:16:01.211 [2024-04-24 05:11:38.222197] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:16:01.211 [2024-04-24 05:11:38.239727] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:16:01.211 [2024-04-24 05:11:38.242042] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:01.211 [2024-04-24 05:11:38.242071] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f4f3e961000 00:16:01.211 [2024-04-24 05:11:38.243041] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:01.212 [2024-04-24 05:11:38.244051] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:01.212 [2024-04-24 05:11:38.245052] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:01.212 [2024-04-24 05:11:38.246055] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:01.212 [2024-04-24 05:11:38.247059] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:01.212 [2024-04-24 05:11:38.248068] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:01.212 [2024-04-24 05:11:38.249067] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:01.212 [2024-04-24 05:11:38.250076] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:01.212 [2024-04-24 05:11:38.251082] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 
00:16:01.212 [2024-04-24 05:11:38.251103] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f4f3d712000 00:16:01.212 [2024-04-24 05:11:38.252238] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:01.212 [2024-04-24 05:11:38.266264] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:16:01.212 [2024-04-24 05:11:38.266299] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:16:01.212 [2024-04-24 05:11:38.271411] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:01.212 [2024-04-24 05:11:38.271463] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:01.212 [2024-04-24 05:11:38.271552] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:16:01.212 [2024-04-24 05:11:38.271578] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:16:01.212 [2024-04-24 05:11:38.271592] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:16:01.212 [2024-04-24 05:11:38.272417] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:16:01.212 [2024-04-24 05:11:38.272438] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:16:01.212 [2024-04-24 05:11:38.272450] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:16:01.212 [2024-04-24 05:11:38.273425] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:01.212 [2024-04-24 05:11:38.273445] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:16:01.212 [2024-04-24 05:11:38.273459] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:16:01.212 [2024-04-24 05:11:38.274429] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:16:01.212 [2024-04-24 05:11:38.274449] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:01.212 [2024-04-24 05:11:38.275438] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:16:01.212 [2024-04-24 05:11:38.275458] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:16:01.212 [2024-04-24 05:11:38.275467] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:16:01.212 [2024-04-24 05:11:38.275479] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:01.212 [2024-04-24 05:11:38.275588] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:16:01.212 [2024-04-24 05:11:38.275596] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:01.212 [2024-04-24 05:11:38.275604] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:16:01.212 [2024-04-24 05:11:38.276450] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:16:01.212 [2024-04-24 05:11:38.277456] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:16:01.212 [2024-04-24 05:11:38.278459] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:01.212 [2024-04-24 05:11:38.279457] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:01.212 [2024-04-24 05:11:38.279520] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:01.212 [2024-04-24 05:11:38.280473] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:16:01.212 [2024-04-24 05:11:38.280492] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:01.212 [2024-04-24 05:11:38.280502] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:16:01.212 [2024-04-24 05:11:38.280525] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:16:01.212 [2024-04-24 05:11:38.280543] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:16:01.212 [2024-04-24 05:11:38.280568] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:01.212 [2024-04-24 05:11:38.280577] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:01.212 [2024-04-24 05:11:38.280598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:01.212 [2024-04-24 05:11:38.288655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:01.212 [2024-04-24 05:11:38.288678] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:16:01.212 [2024-04-24 05:11:38.288687] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:16:01.212 [2024-04-24 05:11:38.288695] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:16:01.212 [2024-04-24 05:11:38.288703] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:01.212 [2024-04-24 05:11:38.288711] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:16:01.212 [2024-04-24 05:11:38.288718] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:16:01.212 [2024-04-24 05:11:38.288726] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:16:01.212 [2024-04-24 05:11:38.288740] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:16:01.212 [2024-04-24 05:11:38.288756] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:01.212 [2024-04-24 05:11:38.296642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:01.212 [2024-04-24 05:11:38.296683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:01.212 [2024-04-24 05:11:38.296697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:01.212 [2024-04-24 05:11:38.296709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:01.212 [2024-04-24 05:11:38.296721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:01.212 [2024-04-24 05:11:38.296730] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:16:01.212 [2024-04-24 05:11:38.296745] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:01.212 [2024-04-24 05:11:38.296759] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:01.212 [2024-04-24 05:11:38.304656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:01.212 [2024-04-24 05:11:38.304681] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:16:01.212 [2024-04-24 05:11:38.304691] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:01.212 [2024-04-24 05:11:38.304710] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:16:01.212 [2024-04-24 05:11:38.304722] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:16:01.212 [2024-04-24 05:11:38.304736] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:01.212 [2024-04-24 05:11:38.312655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:01.212 [2024-04-24 05:11:38.312716] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:16:01.212 [2024-04-24 05:11:38.312731] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:16:01.212 [2024-04-24 05:11:38.312744] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:01.212 [2024-04-24 05:11:38.312752] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:01.212 [2024-04-24 05:11:38.312762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:01.212 [2024-04-24 05:11:38.320641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 
cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:01.212 [2024-04-24 05:11:38.320662] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:16:01.212 [2024-04-24 05:11:38.320682] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:16:01.212 [2024-04-24 05:11:38.320696] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:16:01.213 [2024-04-24 05:11:38.320708] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:01.213 [2024-04-24 05:11:38.320716] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:01.213 [2024-04-24 05:11:38.320726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:01.213 [2024-04-24 05:11:38.328640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:01.213 [2024-04-24 05:11:38.328668] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:01.213 [2024-04-24 05:11:38.328684] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:01.213 [2024-04-24 05:11:38.328697] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:01.213 [2024-04-24 05:11:38.328705] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:01.213 [2024-04-24 05:11:38.328715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 
cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:01.213 [2024-04-24 05:11:38.336642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:01.213 [2024-04-24 05:11:38.336662] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:01.213 [2024-04-24 05:11:38.336675] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:16:01.213 [2024-04-24 05:11:38.336690] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:16:01.213 [2024-04-24 05:11:38.336703] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:01.213 [2024-04-24 05:11:38.336712] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:16:01.213 [2024-04-24 05:11:38.336721] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:16:01.213 [2024-04-24 05:11:38.336729] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:16:01.213 [2024-04-24 05:11:38.336737] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:16:01.213 [2024-04-24 05:11:38.336762] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:01.213 [2024-04-24 05:11:38.344639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:01.213 [2024-04-24 05:11:38.344665] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:01.213 [2024-04-24 05:11:38.351821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:01.213 [2024-04-24 05:11:38.351847] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:01.213 [2024-04-24 05:11:38.359638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:01.213 [2024-04-24 05:11:38.359663] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:01.213 [2024-04-24 05:11:38.367656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:01.213 [2024-04-24 05:11:38.367690] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:01.213 [2024-04-24 05:11:38.367700] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:01.213 [2024-04-24 05:11:38.367707] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:01.213 [2024-04-24 05:11:38.367713] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:01.213 [2024-04-24 05:11:38.367723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:01.213 [2024-04-24 05:11:38.367734] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:01.213 [2024-04-24 05:11:38.367743] 
nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:01.213 [2024-04-24 05:11:38.367752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:01.213 [2024-04-24 05:11:38.367763] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:01.213 [2024-04-24 05:11:38.367771] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:01.213 [2024-04-24 05:11:38.367780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:01.213 [2024-04-24 05:11:38.367792] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:01.213 [2024-04-24 05:11:38.367799] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:01.213 [2024-04-24 05:11:38.367808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:01.213 [2024-04-24 05:11:38.375639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:01.213 [2024-04-24 05:11:38.375668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:01.213 [2024-04-24 05:11:38.375685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:01.213 [2024-04-24 05:11:38.375697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:01.213 ===================================================== 00:16:01.213 NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:01.213 ===================================================== 00:16:01.213 Controller Capabilities/Features 00:16:01.213 ================================ 00:16:01.213 Vendor ID: 4e58 00:16:01.213 Subsystem Vendor ID: 4e58 00:16:01.213 Serial Number: SPDK2 00:16:01.213 Model Number: SPDK bdev Controller 00:16:01.213 Firmware Version: 24.05 00:16:01.213 Recommended Arb Burst: 6 00:16:01.213 IEEE OUI Identifier: 8d 6b 50 00:16:01.213 Multi-path I/O 00:16:01.213 May have multiple subsystem ports: Yes 00:16:01.213 May have multiple controllers: Yes 00:16:01.213 Associated with SR-IOV VF: No 00:16:01.213 Max Data Transfer Size: 131072 00:16:01.213 Max Number of Namespaces: 32 00:16:01.213 Max Number of I/O Queues: 127 00:16:01.213 NVMe Specification Version (VS): 1.3 00:16:01.213 NVMe Specification Version (Identify): 1.3 00:16:01.213 Maximum Queue Entries: 256 00:16:01.213 Contiguous Queues Required: Yes 00:16:01.213 Arbitration Mechanisms Supported 00:16:01.213 Weighted Round Robin: Not Supported 00:16:01.213 Vendor Specific: Not Supported 00:16:01.213 Reset Timeout: 15000 ms 00:16:01.213 Doorbell Stride: 4 bytes 00:16:01.213 NVM Subsystem Reset: Not Supported 00:16:01.213 Command Sets Supported 00:16:01.213 NVM Command Set: Supported 00:16:01.213 Boot Partition: Not Supported 00:16:01.213 Memory Page Size Minimum: 4096 bytes 00:16:01.213 Memory Page Size Maximum: 4096 bytes 00:16:01.213 Persistent Memory Region: Not Supported 00:16:01.213 Optional Asynchronous Events Supported 00:16:01.213 Namespace Attribute Notices: Supported 00:16:01.213 Firmware Activation Notices: Not Supported 00:16:01.213 ANA Change Notices: Not Supported 00:16:01.213 PLE Aggregate Log Change Notices: Not Supported 00:16:01.213 LBA Status Info Alert Notices: Not Supported 00:16:01.213 EGE Aggregate Log Change Notices: Not Supported 00:16:01.213 Normal NVM Subsystem Shutdown event: Not Supported 00:16:01.213 Zone Descriptor 
Change Notices: Not Supported 00:16:01.213 Discovery Log Change Notices: Not Supported 00:16:01.213 Controller Attributes 00:16:01.213 128-bit Host Identifier: Supported 00:16:01.213 Non-Operational Permissive Mode: Not Supported 00:16:01.213 NVM Sets: Not Supported 00:16:01.213 Read Recovery Levels: Not Supported 00:16:01.213 Endurance Groups: Not Supported 00:16:01.213 Predictable Latency Mode: Not Supported 00:16:01.213 Traffic Based Keep ALive: Not Supported 00:16:01.213 Namespace Granularity: Not Supported 00:16:01.213 SQ Associations: Not Supported 00:16:01.213 UUID List: Not Supported 00:16:01.213 Multi-Domain Subsystem: Not Supported 00:16:01.213 Fixed Capacity Management: Not Supported 00:16:01.213 Variable Capacity Management: Not Supported 00:16:01.213 Delete Endurance Group: Not Supported 00:16:01.213 Delete NVM Set: Not Supported 00:16:01.213 Extended LBA Formats Supported: Not Supported 00:16:01.213 Flexible Data Placement Supported: Not Supported 00:16:01.213 00:16:01.213 Controller Memory Buffer Support 00:16:01.213 ================================ 00:16:01.213 Supported: No 00:16:01.213 00:16:01.213 Persistent Memory Region Support 00:16:01.213 ================================ 00:16:01.213 Supported: No 00:16:01.213 00:16:01.213 Admin Command Set Attributes 00:16:01.213 ============================ 00:16:01.213 Security Send/Receive: Not Supported 00:16:01.213 Format NVM: Not Supported 00:16:01.213 Firmware Activate/Download: Not Supported 00:16:01.213 Namespace Management: Not Supported 00:16:01.213 Device Self-Test: Not Supported 00:16:01.213 Directives: Not Supported 00:16:01.213 NVMe-MI: Not Supported 00:16:01.213 Virtualization Management: Not Supported 00:16:01.213 Doorbell Buffer Config: Not Supported 00:16:01.213 Get LBA Status Capability: Not Supported 00:16:01.213 Command & Feature Lockdown Capability: Not Supported 00:16:01.213 Abort Command Limit: 4 00:16:01.213 Async Event Request Limit: 4 00:16:01.213 Number of Firmware Slots: N/A 
00:16:01.213 Firmware Slot 1 Read-Only: N/A 00:16:01.214 Firmware Activation Without Reset: N/A 00:16:01.214 Multiple Update Detection Support: N/A 00:16:01.214 Firmware Update Granularity: No Information Provided 00:16:01.214 Per-Namespace SMART Log: No 00:16:01.214 Asymmetric Namespace Access Log Page: Not Supported 00:16:01.214 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:16:01.214 Command Effects Log Page: Supported 00:16:01.214 Get Log Page Extended Data: Supported 00:16:01.214 Telemetry Log Pages: Not Supported 00:16:01.214 Persistent Event Log Pages: Not Supported 00:16:01.214 Supported Log Pages Log Page: May Support 00:16:01.214 Commands Supported & Effects Log Page: Not Supported 00:16:01.214 Feature Identifiers & Effects Log Page:May Support 00:16:01.214 NVMe-MI Commands & Effects Log Page: May Support 00:16:01.214 Data Area 4 for Telemetry Log: Not Supported 00:16:01.214 Error Log Page Entries Supported: 128 00:16:01.214 Keep Alive: Supported 00:16:01.214 Keep Alive Granularity: 10000 ms 00:16:01.214 00:16:01.214 NVM Command Set Attributes 00:16:01.214 ========================== 00:16:01.214 Submission Queue Entry Size 00:16:01.214 Max: 64 00:16:01.214 Min: 64 00:16:01.214 Completion Queue Entry Size 00:16:01.214 Max: 16 00:16:01.214 Min: 16 00:16:01.214 Number of Namespaces: 32 00:16:01.214 Compare Command: Supported 00:16:01.214 Write Uncorrectable Command: Not Supported 00:16:01.214 Dataset Management Command: Supported 00:16:01.214 Write Zeroes Command: Supported 00:16:01.214 Set Features Save Field: Not Supported 00:16:01.214 Reservations: Not Supported 00:16:01.214 Timestamp: Not Supported 00:16:01.214 Copy: Supported 00:16:01.214 Volatile Write Cache: Present 00:16:01.214 Atomic Write Unit (Normal): 1 00:16:01.214 Atomic Write Unit (PFail): 1 00:16:01.214 Atomic Compare & Write Unit: 1 00:16:01.214 Fused Compare & Write: Supported 00:16:01.214 Scatter-Gather List 00:16:01.214 SGL Command Set: Supported (Dword aligned) 00:16:01.214 SGL Keyed: Not 
Supported 00:16:01.214 SGL Bit Bucket Descriptor: Not Supported 00:16:01.214 SGL Metadata Pointer: Not Supported 00:16:01.214 Oversized SGL: Not Supported 00:16:01.214 SGL Metadata Address: Not Supported 00:16:01.214 SGL Offset: Not Supported 00:16:01.214 Transport SGL Data Block: Not Supported 00:16:01.214 Replay Protected Memory Block: Not Supported 00:16:01.214 00:16:01.214 Firmware Slot Information 00:16:01.214 ========================= 00:16:01.214 Active slot: 1 00:16:01.214 Slot 1 Firmware Revision: 24.05 00:16:01.214 00:16:01.214 00:16:01.214 Commands Supported and Effects 00:16:01.214 ============================== 00:16:01.214 Admin Commands 00:16:01.214 -------------- 00:16:01.214 Get Log Page (02h): Supported 00:16:01.214 Identify (06h): Supported 00:16:01.214 Abort (08h): Supported 00:16:01.214 Set Features (09h): Supported 00:16:01.214 Get Features (0Ah): Supported 00:16:01.214 Asynchronous Event Request (0Ch): Supported 00:16:01.214 Keep Alive (18h): Supported 00:16:01.214 I/O Commands 00:16:01.214 ------------ 00:16:01.214 Flush (00h): Supported LBA-Change 00:16:01.214 Write (01h): Supported LBA-Change 00:16:01.214 Read (02h): Supported 00:16:01.214 Compare (05h): Supported 00:16:01.214 Write Zeroes (08h): Supported LBA-Change 00:16:01.214 Dataset Management (09h): Supported LBA-Change 00:16:01.214 Copy (19h): Supported LBA-Change 00:16:01.214 Unknown (79h): Supported LBA-Change 00:16:01.214 Unknown (7Ah): Supported 00:16:01.214 00:16:01.214 Error Log 00:16:01.214 ========= 00:16:01.214 00:16:01.214 Arbitration 00:16:01.214 =========== 00:16:01.214 Arbitration Burst: 1 00:16:01.214 00:16:01.214 Power Management 00:16:01.214 ================ 00:16:01.214 Number of Power States: 1 00:16:01.214 Current Power State: Power State #0 00:16:01.214 Power State #0: 00:16:01.214 Max Power: 0.00 W 00:16:01.214 Non-Operational State: Operational 00:16:01.214 Entry Latency: Not Reported 00:16:01.214 Exit Latency: Not Reported 00:16:01.214 Relative Read 
Throughput: 0 00:16:01.214 Relative Read Latency: 0 00:16:01.214 Relative Write Throughput: 0 00:16:01.214 Relative Write Latency: 0 00:16:01.214 Idle Power: Not Reported 00:16:01.214 Active Power: Not Reported 00:16:01.214 Non-Operational Permissive Mode: Not Supported 00:16:01.214 00:16:01.214 Health Information 00:16:01.214 ================== 00:16:01.214 Critical Warnings: 00:16:01.214 Available Spare Space: OK 00:16:01.214 Temperature: OK 00:16:01.214 Device Reliability: OK 00:16:01.214 Read Only: No 00:16:01.214 Volatile Memory Backup: OK 00:16:01.214 [2024-04-24 05:11:38.375814] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:01.214 [2024-04-24 05:11:38.383641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:01.214 [2024-04-24 05:11:38.383695] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:16:01.214 [2024-04-24 05:11:38.383712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.214 [2024-04-24 05:11:38.383724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.214 [2024-04-24 05:11:38.383734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.214 [2024-04-24 05:11:38.383744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.214 [2024-04-24 05:11:38.383832] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:01.214 [2024-04-24 05:11:38.383853] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:16:01.214 [2024-04-24 05:11:38.384835] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:01.214 [2024-04-24 05:11:38.384904] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:16:01.214 [2024-04-24 05:11:38.384934] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:16:01.214 [2024-04-24 05:11:38.385839] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:16:01.214 [2024-04-24 05:11:38.385862] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:16:01.214 [2024-04-24 05:11:38.385914] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:16:01.214 [2024-04-24 05:11:38.387100] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:01.214 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:01.214 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:01.214 Available Spare: 0% 00:16:01.214 Available Spare Threshold: 0% 00:16:01.214 Life Percentage Used: 0% 00:16:01.214 Data Units Read: 0 00:16:01.214 Data Units Written: 0 00:16:01.214 Host Read Commands: 0 00:16:01.214 Host Write Commands: 0 00:16:01.214 Controller Busy Time: 0 minutes 00:16:01.214 Power Cycles: 0 00:16:01.214 Power On Hours: 0 hours 00:16:01.214 Unsafe Shutdowns: 0 00:16:01.214 Unrecoverable Media Errors: 0 00:16:01.214 Lifetime Error Log Entries: 0 00:16:01.214 Warning Temperature Time: 0 minutes 00:16:01.214 Critical Temperature Time: 0 minutes 00:16:01.214 00:16:01.214 Number of Queues 00:16:01.214 ================ 
00:16:01.214 Number of I/O Submission Queues: 127 00:16:01.214 Number of I/O Completion Queues: 127 00:16:01.214 00:16:01.214 Active Namespaces 00:16:01.214 ================= 00:16:01.214 Namespace ID:1 00:16:01.214 Error Recovery Timeout: Unlimited 00:16:01.214 Command Set Identifier: NVM (00h) 00:16:01.214 Deallocate: Supported 00:16:01.214 Deallocated/Unwritten Error: Not Supported 00:16:01.214 Deallocated Read Value: Unknown 00:16:01.214 Deallocate in Write Zeroes: Not Supported 00:16:01.214 Deallocated Guard Field: 0xFFFF 00:16:01.214 Flush: Supported 00:16:01.214 Reservation: Supported 00:16:01.214 Namespace Sharing Capabilities: Multiple Controllers 00:16:01.214 Size (in LBAs): 131072 (0GiB) 00:16:01.214 Capacity (in LBAs): 131072 (0GiB) 00:16:01.214 Utilization (in LBAs): 131072 (0GiB) 00:16:01.214 NGUID: 50618D0A3DF34C5BAC6DE3409BD90D5B 00:16:01.214 UUID: 50618d0a-3df3-4c5b-ac6d-e3409bd90d5b 00:16:01.214 Thin Provisioning: Not Supported 00:16:01.214 Per-NS Atomic Units: Yes 00:16:01.214 Atomic Boundary Size (Normal): 0 00:16:01.214 Atomic Boundary Size (PFail): 0 00:16:01.214 Atomic Boundary Offset: 0 00:16:01.214 Maximum Single Source Range Length: 65535 00:16:01.214 Maximum Copy Length: 65535 00:16:01.214 Maximum Source Range Count: 1 00:16:01.214 NGUID/EUI64 Never Reused: No 00:16:01.214 Namespace Write Protected: No 00:16:01.214 Number of LBA Formats: 1 00:16:01.214 Current LBA Format: LBA Format #00 00:16:01.214 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:01.214 00:16:01.214 05:11:38 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:01.215 EAL: No free 2048 kB hugepages reported on node 1 00:16:01.473 [2024-04-24 05:11:38.618757] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 
00:16:06.746 [2024-04-24 05:11:43.726990] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:06.746 Initializing NVMe Controllers 00:16:06.746 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:06.746 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:06.746 Initialization complete. Launching workers. 00:16:06.746 ======================================================== 00:16:06.746 Latency(us) 00:16:06.746 Device Information : IOPS MiB/s Average min max 00:16:06.746 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34494.80 134.75 3711.54 1166.36 7593.94 00:16:06.746 ======================================================== 00:16:06.746 Total : 34494.80 134.75 3711.54 1166.36 7593.94 00:16:06.746 00:16:06.746 05:11:43 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:06.746 EAL: No free 2048 kB hugepages reported on node 1 00:16:06.746 [2024-04-24 05:11:43.973670] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:12.019 [2024-04-24 05:11:48.996389] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:12.019 Initializing NVMe Controllers 00:16:12.019 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:12.019 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:12.019 Initialization complete. Launching workers. 
00:16:12.019 ======================================================== 00:16:12.019 Latency(us) 00:16:12.019 Device Information : IOPS MiB/s Average min max 00:16:12.019 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33735.05 131.78 3793.48 1179.00 7615.80 00:16:12.019 ======================================================== 00:16:12.019 Total : 33735.05 131.78 3793.48 1179.00 7615.80 00:16:12.019 00:16:12.019 05:11:49 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:12.019 EAL: No free 2048 kB hugepages reported on node 1 00:16:12.019 [2024-04-24 05:11:49.197604] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:17.322 [2024-04-24 05:11:54.349762] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:17.322 Initializing NVMe Controllers 00:16:17.322 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:17.323 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:17.323 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:17.323 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:17.323 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:17.323 Initialization complete. Launching workers. 
00:16:17.323 Starting thread on core 2 00:16:17.323 Starting thread on core 3 00:16:17.323 Starting thread on core 1 00:16:17.323 05:11:54 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:17.323 EAL: No free 2048 kB hugepages reported on node 1 00:16:17.581 [2024-04-24 05:11:54.646851] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:20.870 [2024-04-24 05:11:57.713516] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:20.870 Initializing NVMe Controllers 00:16:20.870 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:20.870 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:20.870 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:20.870 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:20.870 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:20.870 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:20.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:20.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:20.870 Initialization complete. Launching workers. 
00:16:20.870 Starting thread on core 1 with urgent priority queue 00:16:20.870 Starting thread on core 2 with urgent priority queue 00:16:20.870 Starting thread on core 3 with urgent priority queue 00:16:20.870 Starting thread on core 0 with urgent priority queue 00:16:20.870 SPDK bdev Controller (SPDK2 ) core 0: 5507.33 IO/s 18.16 secs/100000 ios 00:16:20.870 SPDK bdev Controller (SPDK2 ) core 1: 4770.67 IO/s 20.96 secs/100000 ios 00:16:20.870 SPDK bdev Controller (SPDK2 ) core 2: 5582.00 IO/s 17.91 secs/100000 ios 00:16:20.870 SPDK bdev Controller (SPDK2 ) core 3: 5721.33 IO/s 17.48 secs/100000 ios 00:16:20.870 ======================================================== 00:16:20.870 00:16:20.870 05:11:57 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:20.870 EAL: No free 2048 kB hugepages reported on node 1 00:16:20.870 [2024-04-24 05:11:58.008069] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:20.870 [2024-04-24 05:11:58.021138] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:20.870 Initializing NVMe Controllers 00:16:20.870 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:20.870 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:20.870 Namespace ID: 1 size: 0GB 00:16:20.870 Initialization complete. 00:16:20.870 INFO: using host memory buffer for IO 00:16:20.870 Hello world! 
00:16:20.870 05:11:58 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:20.870 EAL: No free 2048 kB hugepages reported on node 1 00:16:21.130 [2024-04-24 05:11:58.300988] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:22.508 Initializing NVMe Controllers 00:16:22.508 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:22.508 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:22.508 Initialization complete. Launching workers. 00:16:22.508 submit (in ns) avg, min, max = 9528.6, 3467.8, 4041333.3 00:16:22.508 complete (in ns) avg, min, max = 22092.1, 2042.2, 4998348.9 00:16:22.508 00:16:22.508 Submit histogram 00:16:22.508 ================ 00:16:22.508 Range in us Cumulative Count 00:16:22.508 3.461 - 3.484: 0.0074% ( 1) 00:16:22.508 3.484 - 3.508: 0.1255% ( 16) 00:16:22.508 3.508 - 3.532: 0.6868% ( 76) 00:16:22.508 3.532 - 3.556: 2.3630% ( 227) 00:16:22.508 3.556 - 3.579: 5.3980% ( 411) 00:16:22.508 3.579 - 3.603: 11.6674% ( 849) 00:16:22.508 3.603 - 3.627: 19.5023% ( 1061) 00:16:22.508 3.627 - 3.650: 29.8036% ( 1395) 00:16:22.508 3.650 - 3.674: 38.3991% ( 1164) 00:16:22.508 3.674 - 3.698: 46.3816% ( 1081) 00:16:22.508 3.698 - 3.721: 53.3156% ( 939) 00:16:22.508 3.721 - 3.745: 58.9278% ( 760) 00:16:22.508 3.745 - 3.769: 63.1369% ( 570) 00:16:22.508 3.769 - 3.793: 66.8365% ( 501) 00:16:22.508 3.793 - 3.816: 70.4549% ( 490) 00:16:22.508 3.816 - 3.840: 73.8886% ( 465) 00:16:22.508 3.840 - 3.864: 77.5809% ( 500) 00:16:22.508 3.864 - 3.887: 81.0885% ( 475) 00:16:22.508 3.887 - 3.911: 84.0570% ( 402) 00:16:22.508 3.911 - 3.935: 86.3831% ( 315) 00:16:22.508 3.935 - 3.959: 88.3104% ( 261) 00:16:22.508 3.959 - 3.982: 89.9646% ( 224) 00:16:22.508 3.982 - 4.006: 91.6187% ( 224) 00:16:22.508 4.006 - 4.030: 92.6525% 
( 140) 00:16:22.508 4.030 - 4.053: 93.4795% ( 112) 00:16:22.508 4.053 - 4.077: 94.3361% ( 116) 00:16:22.508 4.077 - 4.101: 95.0155% ( 92) 00:16:22.508 4.101 - 4.124: 95.6063% ( 80) 00:16:22.508 4.124 - 4.148: 96.0863% ( 65) 00:16:22.508 4.148 - 4.172: 96.3521% ( 36) 00:16:22.508 4.172 - 4.196: 96.5810% ( 31) 00:16:22.508 4.196 - 4.219: 96.7508% ( 23) 00:16:22.508 4.219 - 4.243: 96.9133% ( 22) 00:16:22.508 4.243 - 4.267: 96.9576% ( 6) 00:16:22.508 4.267 - 4.290: 97.0684% ( 15) 00:16:22.508 4.290 - 4.314: 97.1791% ( 15) 00:16:22.508 4.314 - 4.338: 97.2530% ( 10) 00:16:22.508 4.338 - 4.361: 97.3564% ( 14) 00:16:22.508 4.361 - 4.385: 97.4671% ( 15) 00:16:22.508 4.385 - 4.409: 97.4819% ( 2) 00:16:22.508 4.409 - 4.433: 97.5041% ( 3) 00:16:22.508 4.433 - 4.456: 97.5558% ( 7) 00:16:22.508 4.456 - 4.480: 97.5705% ( 2) 00:16:22.508 4.480 - 4.504: 97.5853% ( 2) 00:16:22.508 4.527 - 4.551: 97.6001% ( 2) 00:16:22.508 4.551 - 4.575: 97.6222% ( 3) 00:16:22.508 4.575 - 4.599: 97.6370% ( 2) 00:16:22.508 4.622 - 4.646: 97.6444% ( 1) 00:16:22.509 4.717 - 4.741: 97.6591% ( 2) 00:16:22.509 4.741 - 4.764: 97.6739% ( 2) 00:16:22.509 4.764 - 4.788: 97.6887% ( 2) 00:16:22.509 4.788 - 4.812: 97.7034% ( 2) 00:16:22.509 4.812 - 4.836: 97.7330% ( 4) 00:16:22.509 4.836 - 4.859: 97.7994% ( 9) 00:16:22.509 4.859 - 4.883: 97.8142% ( 2) 00:16:22.509 4.883 - 4.907: 97.8585% ( 6) 00:16:22.509 4.907 - 4.930: 97.8881% ( 4) 00:16:22.509 4.930 - 4.954: 97.9250% ( 5) 00:16:22.509 4.954 - 4.978: 97.9693% ( 6) 00:16:22.509 4.978 - 5.001: 98.0136% ( 6) 00:16:22.509 5.001 - 5.025: 98.0579% ( 6) 00:16:22.509 5.025 - 5.049: 98.0800% ( 3) 00:16:22.509 5.049 - 5.073: 98.1465% ( 9) 00:16:22.509 5.073 - 5.096: 98.1687% ( 3) 00:16:22.509 5.096 - 5.120: 98.1760% ( 1) 00:16:22.509 5.120 - 5.144: 98.2056% ( 4) 00:16:22.509 5.144 - 5.167: 98.2425% ( 5) 00:16:22.509 5.167 - 5.191: 98.2647% ( 3) 00:16:22.509 5.191 - 5.215: 98.2868% ( 3) 00:16:22.509 5.215 - 5.239: 98.3016% ( 2) 00:16:22.509 5.239 - 5.262: 98.3237% ( 3) 
00:16:22.509 5.262 - 5.286: 98.3311% ( 1) 00:16:22.509 5.286 - 5.310: 98.3385% ( 1) 00:16:22.509 5.310 - 5.333: 98.3533% ( 2) 00:16:22.509 5.333 - 5.357: 98.3680% ( 2) 00:16:22.509 5.404 - 5.428: 98.3754% ( 1) 00:16:22.509 5.428 - 5.452: 98.3828% ( 1) 00:16:22.509 5.523 - 5.547: 98.3902% ( 1) 00:16:22.509 5.547 - 5.570: 98.3976% ( 1) 00:16:22.509 5.665 - 5.689: 98.4050% ( 1) 00:16:22.509 5.736 - 5.760: 98.4123% ( 1) 00:16:22.509 5.760 - 5.784: 98.4197% ( 1) 00:16:22.509 6.044 - 6.068: 98.4271% ( 1) 00:16:22.509 6.542 - 6.590: 98.4345% ( 1) 00:16:22.509 6.684 - 6.732: 98.4419% ( 1) 00:16:22.509 6.732 - 6.779: 98.4493% ( 1) 00:16:22.509 7.016 - 7.064: 98.4567% ( 1) 00:16:22.509 7.111 - 7.159: 98.4640% ( 1) 00:16:22.509 7.159 - 7.206: 98.4714% ( 1) 00:16:22.509 7.253 - 7.301: 98.4862% ( 2) 00:16:22.509 7.301 - 7.348: 98.4936% ( 1) 00:16:22.509 7.348 - 7.396: 98.5157% ( 3) 00:16:22.509 7.443 - 7.490: 98.5231% ( 1) 00:16:22.509 7.490 - 7.538: 98.5305% ( 1) 00:16:22.509 7.680 - 7.727: 98.5453% ( 2) 00:16:22.509 7.727 - 7.775: 98.5600% ( 2) 00:16:22.509 7.775 - 7.822: 98.5822% ( 3) 00:16:22.509 7.822 - 7.870: 98.5896% ( 1) 00:16:22.509 7.870 - 7.917: 98.6043% ( 2) 00:16:22.509 7.917 - 7.964: 98.6117% ( 1) 00:16:22.509 7.964 - 8.012: 98.6265% ( 2) 00:16:22.509 8.012 - 8.059: 98.6413% ( 2) 00:16:22.509 8.059 - 8.107: 98.6486% ( 1) 00:16:22.509 8.107 - 8.154: 98.6634% ( 2) 00:16:22.509 8.296 - 8.344: 98.6708% ( 1) 00:16:22.509 8.344 - 8.391: 98.6782% ( 1) 00:16:22.509 8.391 - 8.439: 98.6856% ( 1) 00:16:22.509 8.439 - 8.486: 98.6930% ( 1) 00:16:22.509 8.628 - 8.676: 98.7003% ( 1) 00:16:22.509 8.865 - 8.913: 98.7151% ( 2) 00:16:22.509 8.913 - 8.960: 98.7225% ( 1) 00:16:22.509 9.007 - 9.055: 98.7299% ( 1) 00:16:22.509 9.055 - 9.102: 98.7446% ( 2) 00:16:22.509 9.197 - 9.244: 98.7520% ( 1) 00:16:22.509 9.292 - 9.339: 98.7594% ( 1) 00:16:22.509 9.387 - 9.434: 98.7668% ( 1) 00:16:22.509 9.529 - 9.576: 98.7742% ( 1) 00:16:22.509 9.624 - 9.671: 98.7816% ( 1) 00:16:22.509 9.671 - 
9.719: 98.7890% ( 1) 00:16:22.509 9.719 - 9.766: 98.8037% ( 2) 00:16:22.509 9.766 - 9.813: 98.8185% ( 2) 00:16:22.509 9.813 - 9.861: 98.8333% ( 2) 00:16:22.509 9.908 - 9.956: 98.8406% ( 1) 00:16:22.509 10.050 - 10.098: 98.8554% ( 2) 00:16:22.509 10.145 - 10.193: 98.8628% ( 1) 00:16:22.509 10.335 - 10.382: 98.8702% ( 1) 00:16:22.509 10.667 - 10.714: 98.8776% ( 1) 00:16:22.509 10.761 - 10.809: 98.8850% ( 1) 00:16:22.509 10.809 - 10.856: 98.8997% ( 2) 00:16:22.509 10.904 - 10.951: 98.9071% ( 1) 00:16:22.509 11.046 - 11.093: 98.9145% ( 1) 00:16:22.509 11.093 - 11.141: 98.9366% ( 3) 00:16:22.509 11.141 - 11.188: 98.9514% ( 2) 00:16:22.509 11.330 - 11.378: 98.9588% ( 1) 00:16:22.509 11.378 - 11.425: 98.9662% ( 1) 00:16:22.509 11.425 - 11.473: 98.9736% ( 1) 00:16:22.509 11.710 - 11.757: 98.9809% ( 1) 00:16:22.509 11.852 - 11.899: 98.9883% ( 1) 00:16:22.509 12.089 - 12.136: 98.9957% ( 1) 00:16:22.509 12.136 - 12.231: 99.0031% ( 1) 00:16:22.509 12.326 - 12.421: 99.0105% ( 1) 00:16:22.509 12.516 - 12.610: 99.0179% ( 1) 00:16:22.509 12.610 - 12.705: 99.0326% ( 2) 00:16:22.509 12.800 - 12.895: 99.0400% ( 1) 00:16:22.509 12.895 - 12.990: 99.0474% ( 1) 00:16:22.509 12.990 - 13.084: 99.0548% ( 1) 00:16:22.509 13.179 - 13.274: 99.0622% ( 1) 00:16:22.509 13.274 - 13.369: 99.0696% ( 1) 00:16:22.509 13.464 - 13.559: 99.0769% ( 1) 00:16:22.509 13.938 - 14.033: 99.0843% ( 1) 00:16:22.509 15.170 - 15.265: 99.0917% ( 1) 00:16:22.509 17.067 - 17.161: 99.0991% ( 1) 00:16:22.509 17.161 - 17.256: 99.1139% ( 2) 00:16:22.509 17.256 - 17.351: 99.1360% ( 3) 00:16:22.509 17.446 - 17.541: 99.1508% ( 2) 00:16:22.509 17.541 - 17.636: 99.1803% ( 4) 00:16:22.509 17.636 - 17.730: 99.1951% ( 2) 00:16:22.509 17.730 - 17.825: 99.2246% ( 4) 00:16:22.509 17.825 - 17.920: 99.2911% ( 9) 00:16:22.509 17.920 - 18.015: 99.3206% ( 4) 00:16:22.509 18.015 - 18.110: 99.3576% ( 5) 00:16:22.509 18.110 - 18.204: 99.4388% ( 11) 00:16:22.509 18.204 - 18.299: 99.5052% ( 9) 00:16:22.509 18.299 - 18.394: 99.5569% ( 7) 
00:16:22.509 18.394 - 18.489: 99.5939% ( 5) 00:16:22.509 18.489 - 18.584: 99.6529% ( 8) 00:16:22.509 18.584 - 18.679: 99.6825% ( 4) 00:16:22.509 18.679 - 18.773: 99.7120% ( 4) 00:16:22.509 18.773 - 18.868: 99.7268% ( 2) 00:16:22.509 18.868 - 18.963: 99.7489% ( 3) 00:16:22.509 18.963 - 19.058: 99.7637% ( 2) 00:16:22.509 19.058 - 19.153: 99.7785% ( 2) 00:16:22.509 19.153 - 19.247: 99.7859% ( 1) 00:16:22.509 19.627 - 19.721: 99.8080% ( 3) 00:16:22.509 19.721 - 19.816: 99.8154% ( 1) 00:16:22.509 20.006 - 20.101: 99.8228% ( 1) 00:16:22.509 21.618 - 21.713: 99.8375% ( 2) 00:16:22.509 22.566 - 22.661: 99.8449% ( 1) 00:16:22.509 23.799 - 23.893: 99.8523% ( 1) 00:16:22.509 29.772 - 29.961: 99.8597% ( 1) 00:16:22.509 3980.705 - 4004.978: 99.9483% ( 12) 00:16:22.509 4004.978 - 4029.250: 99.9926% ( 6) 00:16:22.509 4029.250 - 4053.523: 100.0000% ( 1) 00:16:22.509 00:16:22.509 Complete histogram 00:16:22.509 ================== 00:16:22.509 Range in us Cumulative Count 00:16:22.509 2.039 - 2.050: 1.5950% ( 216) 00:16:22.509 2.050 - 2.062: 11.4163% ( 1330) 00:16:22.509 2.062 - 2.074: 14.3849% ( 402) 00:16:22.509 2.074 - 2.086: 31.7235% ( 2348) 00:16:22.509 2.086 - 2.098: 58.2780% ( 3596) 00:16:22.509 2.098 - 2.110: 61.9554% ( 498) 00:16:22.509 2.110 - 2.121: 64.6433% ( 364) 00:16:22.509 2.121 - 2.133: 67.5897% ( 399) 00:16:22.509 2.133 - 2.145: 68.8672% ( 173) 00:16:22.509 2.145 - 2.157: 76.9606% ( 1096) 00:16:22.509 2.157 - 2.169: 85.3050% ( 1130) 00:16:22.509 2.169 - 2.181: 86.5529% ( 169) 00:16:22.509 2.181 - 2.193: 87.4243% ( 118) 00:16:22.509 2.193 - 2.204: 88.7314% ( 177) 00:16:22.509 2.204 - 2.216: 89.3221% ( 80) 00:16:22.509 2.216 - 2.228: 91.4636% ( 290) 00:16:22.509 2.228 - 2.240: 93.9521% ( 337) 00:16:22.509 2.240 - 2.252: 94.7127% ( 103) 00:16:22.509 2.252 - 2.264: 94.9934% ( 38) 00:16:22.509 2.264 - 2.276: 95.3552% ( 49) 00:16:22.509 2.276 - 2.287: 95.4290% ( 10) 00:16:22.509 2.287 - 2.299: 95.6284% ( 27) 00:16:22.509 2.299 - 2.311: 95.8647% ( 32) 00:16:22.509 2.311 - 
2.323: 95.9312% ( 9) 00:16:22.509 2.323 - 2.335: 95.9829% ( 7) 00:16:22.509 2.335 - 2.347: 96.1158% ( 18) 00:16:22.509 2.347 - 2.359: 96.2856% ( 23) 00:16:22.509 2.359 - 2.370: 96.5219% ( 32) 00:16:22.509 2.370 - 2.382: 96.8099% ( 39) 00:16:22.509 2.382 - 2.394: 97.1791% ( 50) 00:16:22.509 2.394 - 2.406: 97.5558% ( 51) 00:16:22.509 2.406 - 2.418: 97.8068% ( 34) 00:16:22.509 2.418 - 2.430: 97.9840% ( 24) 00:16:22.509 2.430 - 2.441: 98.1096% ( 17) 00:16:22.509 2.441 - 2.453: 98.1834% ( 10) 00:16:22.509 2.453 - 2.465: 98.2204% ( 5) 00:16:22.509 2.465 - 2.477: 98.2720% ( 7) 00:16:22.509 2.477 - 2.489: 98.3459% ( 10) 00:16:22.509 2.489 - 2.501: 98.4197% ( 10) 00:16:22.509 2.501 - 2.513: 98.4640% ( 6) 00:16:22.509 2.513 - 2.524: 98.5010% ( 5) 00:16:22.509 2.524 - 2.536: 98.5379% ( 5) 00:16:22.509 2.536 - 2.548: 98.5453% ( 1) 00:16:22.509 2.548 - 2.560: 98.5527% ( 1) 00:16:22.509 2.584 - 2.596: 98.5600% ( 1) 00:16:22.509 2.596 - 2.607: 98.5674% ( 1) 00:16:22.509 2.619 - 2.631: 98.5748% ( 1) 00:16:22.509 2.631 - 2.643: 98.5896% ( 2) 00:16:22.509 2.655 - 2.667: 98.5970% ( 1) 00:16:22.509 2.667 - 2.679: 98.6043% ( 1) 00:16:22.509 2.690 - 2.702: 98.6117% ( 1) 00:16:22.509 2.714 - 2.726: 98.6265% ( 2) 00:16:22.509 2.726 - 2.738: 98.6413% ( 2) 00:16:22.509 2.963 - 2.975: 98.6486% ( 1) 00:16:22.510 3.413 - 3.437: 98.6560% ( 1) 00:16:22.510 3.437 - 3.461: 98.6634% ( 1) 00:16:22.510 3.461 - 3.484: 98.6782% ( 2) 00:16:22.510 3.508 - 3.532: 98.7003% ( 3) 00:16:22.510 3.532 - 3.556: 98.7077% ( 1) 00:16:22.510 3.556 - 3.579: 98.7225% ( 2) 00:16:22.510 3.579 - 3.603: 98.7299% ( 1) 00:16:22.510 3.603 - 3.627: 98.7446% ( 2) 00:16:22.510 3.627 - 3.650: 98.7520% ( 1) 00:16:22.510 3.650 - 3.674: 98.7594% ( 1) 00:16:22.510 3.698 - 3.721: 98.7742% ( 2) 00:16:22.510 3.721 - 3.745: 98.7816% ( 1) 00:16:22.510 3.769 - 3.793: 98.7963% ( 2) 00:16:22.510 3.816 - 3.840: 98.8111% ( 2) 00:16:22.510 3.864 - 3.887: 9[2024-04-24 05:11:59.402399] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:22.510 8.8185% ( 1) 00:16:22.510 3.911 - 3.935: 98.8259% ( 1) 00:16:22.510 3.935 - 3.959: 98.8333% ( 1) 00:16:22.510 3.959 - 3.982: 98.8406% ( 1) 00:16:22.510 4.219 - 4.243: 98.8480% ( 1) 00:16:22.510 5.286 - 5.310: 98.8554% ( 1) 00:16:22.510 5.381 - 5.404: 98.8628% ( 1) 00:16:22.510 6.044 - 6.068: 98.8702% ( 1) 00:16:22.510 6.068 - 6.116: 98.8850% ( 2) 00:16:22.510 6.163 - 6.210: 98.8923% ( 1) 00:16:22.510 6.210 - 6.258: 98.8997% ( 1) 00:16:22.510 6.258 - 6.305: 98.9071% ( 1) 00:16:22.510 6.400 - 6.447: 98.9145% ( 1) 00:16:22.510 6.590 - 6.637: 98.9219% ( 1) 00:16:22.510 6.732 - 6.779: 98.9293% ( 1) 00:16:22.510 6.969 - 7.016: 98.9440% ( 2) 00:16:22.510 7.064 - 7.111: 98.9514% ( 1) 00:16:22.510 7.301 - 7.348: 98.9588% ( 1) 00:16:22.510 7.870 - 7.917: 98.9662% ( 1) 00:16:22.510 8.107 - 8.154: 98.9736% ( 1) 00:16:22.510 9.197 - 9.244: 98.9809% ( 1) 00:16:22.510 9.387 - 9.434: 98.9883% ( 1) 00:16:22.510 10.856 - 10.904: 98.9957% ( 1) 00:16:22.510 12.421 - 12.516: 99.0031% ( 1) 00:16:22.510 13.938 - 14.033: 99.0105% ( 1) 00:16:22.510 15.360 - 15.455: 99.0179% ( 1) 00:16:22.510 15.739 - 15.834: 99.0474% ( 4) 00:16:22.510 15.834 - 15.929: 99.0548% ( 1) 00:16:22.510 15.929 - 16.024: 99.0696% ( 2) 00:16:22.510 16.024 - 16.119: 99.0769% ( 1) 00:16:22.510 16.119 - 16.213: 99.0843% ( 1) 00:16:22.510 16.213 - 16.308: 99.0917% ( 1) 00:16:22.510 16.308 - 16.403: 99.1065% ( 2) 00:16:22.510 16.403 - 16.498: 99.1360% ( 4) 00:16:22.510 16.498 - 16.593: 99.1951% ( 8) 00:16:22.510 16.593 - 16.687: 99.2246% ( 4) 00:16:22.510 16.687 - 16.782: 99.2616% ( 5) 00:16:22.510 16.782 - 16.877: 99.2763% ( 2) 00:16:22.510 16.877 - 16.972: 99.3132% ( 5) 00:16:22.510 16.972 - 17.067: 99.3280% ( 2) 00:16:22.510 17.067 - 17.161: 99.3576% ( 4) 00:16:22.510 17.256 - 17.351: 99.3871% ( 4) 00:16:22.510 17.351 - 17.446: 99.3945% ( 1) 00:16:22.510 17.446 - 17.541: 99.4240% ( 4) 00:16:22.510 17.541 - 17.636: 99.4314% ( 1) 00:16:22.510 
17.730 - 17.825: 99.4462% ( 2) 00:16:22.510 17.825 - 17.920: 99.4536% ( 1) 00:16:22.510 17.920 - 18.015: 99.4683% ( 2) 00:16:22.510 18.015 - 18.110: 99.4905% ( 3) 00:16:22.510 18.110 - 18.204: 99.4979% ( 1) 00:16:22.510 18.394 - 18.489: 99.5052% ( 1) 00:16:22.510 3980.705 - 4004.978: 99.7637% ( 35) 00:16:22.510 4004.978 - 4029.250: 99.9926% ( 31) 00:16:22.510 4975.881 - 5000.154: 100.0000% ( 1) 00:16:22.510 00:16:22.510 05:11:59 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:22.510 05:11:59 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:22.510 05:11:59 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:22.510 05:11:59 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:22.510 05:11:59 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:22.510 [ 00:16:22.510 { 00:16:22.510 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:22.510 "subtype": "Discovery", 00:16:22.510 "listen_addresses": [], 00:16:22.510 "allow_any_host": true, 00:16:22.510 "hosts": [] 00:16:22.510 }, 00:16:22.510 { 00:16:22.510 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:22.510 "subtype": "NVMe", 00:16:22.510 "listen_addresses": [ 00:16:22.510 { 00:16:22.510 "transport": "VFIOUSER", 00:16:22.510 "trtype": "VFIOUSER", 00:16:22.510 "adrfam": "IPv4", 00:16:22.510 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:22.510 "trsvcid": "0" 00:16:22.510 } 00:16:22.510 ], 00:16:22.510 "allow_any_host": true, 00:16:22.510 "hosts": [], 00:16:22.510 "serial_number": "SPDK1", 00:16:22.510 "model_number": "SPDK bdev Controller", 00:16:22.510 "max_namespaces": 32, 00:16:22.510 "min_cntlid": 1, 00:16:22.510 "max_cntlid": 65519, 00:16:22.510 "namespaces": [ 00:16:22.510 { 00:16:22.510 "nsid": 1, 00:16:22.510 "bdev_name": "Malloc1", 00:16:22.510 "name": "Malloc1", 
00:16:22.510 "nguid": "5F5F9CD7F7214282BA85FF30F3F28AC6", 00:16:22.510 "uuid": "5f5f9cd7-f721-4282-ba85-ff30f3f28ac6" 00:16:22.510 }, 00:16:22.510 { 00:16:22.510 "nsid": 2, 00:16:22.510 "bdev_name": "Malloc3", 00:16:22.510 "name": "Malloc3", 00:16:22.510 "nguid": "47AC66A23657457BA731FE2ADB81BCF4", 00:16:22.510 "uuid": "47ac66a2-3657-457b-a731-fe2adb81bcf4" 00:16:22.510 } 00:16:22.510 ] 00:16:22.510 }, 00:16:22.510 { 00:16:22.510 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:22.510 "subtype": "NVMe", 00:16:22.510 "listen_addresses": [ 00:16:22.510 { 00:16:22.510 "transport": "VFIOUSER", 00:16:22.510 "trtype": "VFIOUSER", 00:16:22.510 "adrfam": "IPv4", 00:16:22.510 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:22.510 "trsvcid": "0" 00:16:22.510 } 00:16:22.510 ], 00:16:22.510 "allow_any_host": true, 00:16:22.510 "hosts": [], 00:16:22.510 "serial_number": "SPDK2", 00:16:22.510 "model_number": "SPDK bdev Controller", 00:16:22.510 "max_namespaces": 32, 00:16:22.510 "min_cntlid": 1, 00:16:22.510 "max_cntlid": 65519, 00:16:22.510 "namespaces": [ 00:16:22.510 { 00:16:22.510 "nsid": 1, 00:16:22.510 "bdev_name": "Malloc2", 00:16:22.510 "name": "Malloc2", 00:16:22.510 "nguid": "50618D0A3DF34C5BAC6DE3409BD90D5B", 00:16:22.510 "uuid": "50618d0a-3df3-4c5b-ac6d-e3409bd90d5b" 00:16:22.510 } 00:16:22.510 ] 00:16:22.510 } 00:16:22.510 ] 00:16:22.510 05:11:59 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:22.510 05:11:59 -- target/nvmf_vfio_user.sh@34 -- # aerpid=1861899 00:16:22.510 05:11:59 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:22.510 05:11:59 -- common/autotest_common.sh@1251 -- # local i=0 00:16:22.510 05:11:59 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:22.510 05:11:59 -- common/autotest_common.sh@1252 -- # 
'[' '!' -e /tmp/aer_touch_file ']' 00:16:22.510 05:11:59 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:22.510 05:11:59 -- common/autotest_common.sh@1262 -- # return 0 00:16:22.510 05:11:59 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:22.510 05:11:59 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:22.510 EAL: No free 2048 kB hugepages reported on node 1 00:16:22.769 [2024-04-24 05:11:59.845123] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:22.769 Malloc4 00:16:22.769 05:11:59 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:23.027 [2024-04-24 05:12:00.214053] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:23.027 05:12:00 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:23.027 Asynchronous Event Request test 00:16:23.027 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:23.027 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:23.027 Registering asynchronous event callbacks... 00:16:23.027 Starting namespace attribute notice tests for all controllers... 00:16:23.027 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:23.027 aer_cb - Changed Namespace 00:16:23.027 Cleaning up... 
00:16:23.285 [ 00:16:23.285 { 00:16:23.285 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:23.285 "subtype": "Discovery", 00:16:23.285 "listen_addresses": [], 00:16:23.285 "allow_any_host": true, 00:16:23.285 "hosts": [] 00:16:23.285 }, 00:16:23.285 { 00:16:23.285 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:23.285 "subtype": "NVMe", 00:16:23.285 "listen_addresses": [ 00:16:23.285 { 00:16:23.285 "transport": "VFIOUSER", 00:16:23.285 "trtype": "VFIOUSER", 00:16:23.285 "adrfam": "IPv4", 00:16:23.285 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:23.285 "trsvcid": "0" 00:16:23.285 } 00:16:23.285 ], 00:16:23.285 "allow_any_host": true, 00:16:23.285 "hosts": [], 00:16:23.285 "serial_number": "SPDK1", 00:16:23.285 "model_number": "SPDK bdev Controller", 00:16:23.285 "max_namespaces": 32, 00:16:23.285 "min_cntlid": 1, 00:16:23.285 "max_cntlid": 65519, 00:16:23.285 "namespaces": [ 00:16:23.285 { 00:16:23.285 "nsid": 1, 00:16:23.285 "bdev_name": "Malloc1", 00:16:23.285 "name": "Malloc1", 00:16:23.285 "nguid": "5F5F9CD7F7214282BA85FF30F3F28AC6", 00:16:23.285 "uuid": "5f5f9cd7-f721-4282-ba85-ff30f3f28ac6" 00:16:23.285 }, 00:16:23.285 { 00:16:23.285 "nsid": 2, 00:16:23.285 "bdev_name": "Malloc3", 00:16:23.285 "name": "Malloc3", 00:16:23.285 "nguid": "47AC66A23657457BA731FE2ADB81BCF4", 00:16:23.285 "uuid": "47ac66a2-3657-457b-a731-fe2adb81bcf4" 00:16:23.285 } 00:16:23.285 ] 00:16:23.285 }, 00:16:23.285 { 00:16:23.285 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:23.285 "subtype": "NVMe", 00:16:23.285 "listen_addresses": [ 00:16:23.285 { 00:16:23.285 "transport": "VFIOUSER", 00:16:23.285 "trtype": "VFIOUSER", 00:16:23.285 "adrfam": "IPv4", 00:16:23.285 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:23.285 "trsvcid": "0" 00:16:23.285 } 00:16:23.285 ], 00:16:23.285 "allow_any_host": true, 00:16:23.285 "hosts": [], 00:16:23.285 "serial_number": "SPDK2", 00:16:23.285 "model_number": "SPDK bdev Controller", 00:16:23.285 "max_namespaces": 32, 00:16:23.285 
"min_cntlid": 1, 00:16:23.285 "max_cntlid": 65519, 00:16:23.285 "namespaces": [ 00:16:23.285 { 00:16:23.285 "nsid": 1, 00:16:23.285 "bdev_name": "Malloc2", 00:16:23.285 "name": "Malloc2", 00:16:23.285 "nguid": "50618D0A3DF34C5BAC6DE3409BD90D5B", 00:16:23.285 "uuid": "50618d0a-3df3-4c5b-ac6d-e3409bd90d5b" 00:16:23.285 }, 00:16:23.285 { 00:16:23.285 "nsid": 2, 00:16:23.285 "bdev_name": "Malloc4", 00:16:23.285 "name": "Malloc4", 00:16:23.285 "nguid": "3EDBCD4AA2AC44DFA89954EB383D7ECB", 00:16:23.285 "uuid": "3edbcd4a-a2ac-44df-a899-54eb383d7ecb" 00:16:23.285 } 00:16:23.285 ] 00:16:23.285 } 00:16:23.285 ] 00:16:23.285 05:12:00 -- target/nvmf_vfio_user.sh@44 -- # wait 1861899 00:16:23.285 05:12:00 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:23.285 05:12:00 -- target/nvmf_vfio_user.sh@95 -- # killprocess 1856311 00:16:23.285 05:12:00 -- common/autotest_common.sh@936 -- # '[' -z 1856311 ']' 00:16:23.285 05:12:00 -- common/autotest_common.sh@940 -- # kill -0 1856311 00:16:23.285 05:12:00 -- common/autotest_common.sh@941 -- # uname 00:16:23.285 05:12:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:23.285 05:12:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1856311 00:16:23.285 05:12:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:23.285 05:12:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:23.285 05:12:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1856311' 00:16:23.285 killing process with pid 1856311 00:16:23.285 05:12:00 -- common/autotest_common.sh@955 -- # kill 1856311 00:16:23.285 [2024-04-24 05:12:00.512360] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:16:23.285 05:12:00 -- common/autotest_common.sh@960 -- # wait 1856311 00:16:23.854 05:12:00 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 
00:16:23.854 05:12:00 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:23.854 05:12:00 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:23.854 05:12:00 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:23.854 05:12:00 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:23.854 05:12:00 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1862097 00:16:23.854 05:12:00 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:23.854 05:12:00 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1862097' 00:16:23.854 Process pid: 1862097 00:16:23.854 05:12:00 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:23.854 05:12:00 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1862097 00:16:23.854 05:12:00 -- common/autotest_common.sh@817 -- # '[' -z 1862097 ']' 00:16:23.854 05:12:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.854 05:12:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:23.854 05:12:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.854 05:12:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:23.854 05:12:00 -- common/autotest_common.sh@10 -- # set +x 00:16:23.854 [2024-04-24 05:12:00.882546] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:23.854 [2024-04-24 05:12:00.883591] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 
00:16:23.854 [2024-04-24 05:12:00.883686] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:23.854 EAL: No free 2048 kB hugepages reported on node 1 00:16:23.854 [2024-04-24 05:12:00.916735] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:23.854 [2024-04-24 05:12:00.944592] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:23.854 [2024-04-24 05:12:01.028468] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:23.854 [2024-04-24 05:12:01.028524] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:23.854 [2024-04-24 05:12:01.028553] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:23.854 [2024-04-24 05:12:01.028566] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:23.854 [2024-04-24 05:12:01.028576] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:23.854 [2024-04-24 05:12:01.028748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:23.855 [2024-04-24 05:12:01.028796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:23.855 [2024-04-24 05:12:01.028855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:23.855 [2024-04-24 05:12:01.028858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.112 [2024-04-24 05:12:01.137729] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 
00:16:24.112 [2024-04-24 05:12:01.137956] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:16:24.112 [2024-04-24 05:12:01.138199] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:16:24.112 [2024-04-24 05:12:01.138882] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:24.112 [2024-04-24 05:12:01.138988] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 00:16:24.112 05:12:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:24.112 05:12:01 -- common/autotest_common.sh@850 -- # return 0 00:16:24.112 05:12:01 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:25.049 05:12:02 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:25.307 05:12:02 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:25.307 05:12:02 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:25.307 05:12:02 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:25.307 05:12:02 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:25.307 05:12:02 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:25.565 Malloc1 00:16:25.565 05:12:02 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:25.823 05:12:02 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:26.081 05:12:03 -- target/nvmf_vfio_user.sh@74 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:26.339 05:12:03 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:26.339 05:12:03 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:26.339 05:12:03 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:26.597 Malloc2 00:16:26.597 05:12:03 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:26.856 05:12:03 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:27.114 05:12:04 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:27.372 05:12:04 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:27.372 05:12:04 -- target/nvmf_vfio_user.sh@95 -- # killprocess 1862097 00:16:27.372 05:12:04 -- common/autotest_common.sh@936 -- # '[' -z 1862097 ']' 00:16:27.372 05:12:04 -- common/autotest_common.sh@940 -- # kill -0 1862097 00:16:27.372 05:12:04 -- common/autotest_common.sh@941 -- # uname 00:16:27.372 05:12:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:27.372 05:12:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1862097 00:16:27.372 05:12:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:27.372 05:12:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:27.372 05:12:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1862097' 00:16:27.372 killing 
process with pid 1862097 00:16:27.372 05:12:04 -- common/autotest_common.sh@955 -- # kill 1862097 00:16:27.372 05:12:04 -- common/autotest_common.sh@960 -- # wait 1862097 00:16:27.631 [2024-04-24 05:12:04.696212] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 00:16:27.631 05:12:04 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:27.631 05:12:04 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:27.631 00:16:27.631 real 0m52.832s 00:16:27.631 user 3m28.726s 00:16:27.631 sys 0m4.311s 00:16:27.631 05:12:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:27.631 05:12:04 -- common/autotest_common.sh@10 -- # set +x 00:16:27.631 ************************************ 00:16:27.631 END TEST nvmf_vfio_user 00:16:27.631 ************************************ 00:16:27.631 05:12:04 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:27.631 05:12:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:27.631 05:12:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:27.631 05:12:04 -- common/autotest_common.sh@10 -- # set +x 00:16:27.889 ************************************ 00:16:27.889 START TEST nvmf_vfio_user_nvme_compliance 00:16:27.889 ************************************ 00:16:27.889 05:12:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:27.889 * Looking for test storage... 
00:16:27.889 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:27.889 05:12:04 -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:27.889 05:12:04 -- nvmf/common.sh@7 -- # uname -s 00:16:27.889 05:12:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:27.889 05:12:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:27.889 05:12:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:27.889 05:12:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:27.889 05:12:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:27.889 05:12:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:27.889 05:12:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:27.889 05:12:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:27.889 05:12:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:27.889 05:12:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:27.889 05:12:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:27.889 05:12:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:27.889 05:12:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:27.889 05:12:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:27.889 05:12:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:27.889 05:12:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:27.889 05:12:04 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:27.889 05:12:04 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:27.889 05:12:04 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:27.889 05:12:04 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:27.890 05:12:04 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.890 05:12:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.890 05:12:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.890 05:12:04 -- paths/export.sh@5 -- # export PATH 00:16:27.890 05:12:04 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.890 05:12:04 -- nvmf/common.sh@47 -- # : 0 00:16:27.890 05:12:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:27.890 05:12:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:27.890 05:12:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:27.890 05:12:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:27.890 05:12:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:27.890 05:12:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:27.890 05:12:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:27.890 05:12:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:27.890 05:12:04 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:27.890 05:12:04 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:27.890 05:12:04 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:27.890 05:12:04 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:27.890 05:12:04 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:27.890 05:12:04 -- compliance/compliance.sh@20 -- # nvmfpid=1862755 00:16:27.890 05:12:04 -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:27.890 05:12:04 -- compliance/compliance.sh@21 -- # echo 'Process pid: 1862755' 00:16:27.890 Process pid: 1862755 00:16:27.890 05:12:04 -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' 
SIGINT SIGTERM EXIT 00:16:27.890 05:12:04 -- compliance/compliance.sh@24 -- # waitforlisten 1862755 00:16:27.890 05:12:04 -- common/autotest_common.sh@817 -- # '[' -z 1862755 ']' 00:16:27.890 05:12:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.890 05:12:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:27.890 05:12:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.890 05:12:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:27.890 05:12:04 -- common/autotest_common.sh@10 -- # set +x 00:16:27.890 [2024-04-24 05:12:05.027877] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:16:27.890 [2024-04-24 05:12:05.028006] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:27.890 EAL: No free 2048 kB hugepages reported on node 1 00:16:27.890 [2024-04-24 05:12:05.063511] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:27.890 [2024-04-24 05:12:05.093523] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:28.148 [2024-04-24 05:12:05.183312] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:28.148 [2024-04-24 05:12:05.183370] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:28.148 [2024-04-24 05:12:05.183383] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:28.148 [2024-04-24 05:12:05.183395] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:28.148 [2024-04-24 05:12:05.183405] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:28.148 [2024-04-24 05:12:05.183465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:28.148 [2024-04-24 05:12:05.183494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:28.148 [2024-04-24 05:12:05.183497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.148 05:12:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:28.148 05:12:05 -- common/autotest_common.sh@850 -- # return 0 00:16:28.148 05:12:05 -- compliance/compliance.sh@26 -- # sleep 1 00:16:29.086 05:12:06 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:29.086 05:12:06 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:29.086 05:12:06 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:29.086 05:12:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:29.086 05:12:06 -- common/autotest_common.sh@10 -- # set +x 00:16:29.086 05:12:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:29.086 05:12:06 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:29.086 05:12:06 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:29.086 05:12:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:29.086 05:12:06 -- common/autotest_common.sh@10 -- # set +x 00:16:29.086 malloc0 00:16:29.086 05:12:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:29.086 05:12:06 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:29.086 05:12:06 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:16:29.086 05:12:06 -- common/autotest_common.sh@10 -- # set +x 00:16:29.086 05:12:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:29.086 05:12:06 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:29.086 05:12:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:29.086 05:12:06 -- common/autotest_common.sh@10 -- # set +x 00:16:29.346 05:12:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:29.346 05:12:06 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:29.346 05:12:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:29.346 05:12:06 -- common/autotest_common.sh@10 -- # set +x 00:16:29.346 05:12:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:29.346 05:12:06 -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:29.346 EAL: No free 2048 kB hugepages reported on node 1 00:16:29.346 00:16:29.346 00:16:29.346 CUnit - A unit testing framework for C - Version 2.1-3 00:16:29.346 http://cunit.sourceforge.net/ 00:16:29.346 00:16:29.346 00:16:29.346 Suite: nvme_compliance 00:16:29.346 Test: admin_identify_ctrlr_verify_dptr ...[2024-04-24 05:12:06.521597] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:29.346 [2024-04-24 05:12:06.523079] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:29.346 [2024-04-24 05:12:06.523103] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:29.346 [2024-04-24 05:12:06.523119] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:29.346 [2024-04-24 05:12:06.524638] 
vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:29.346 passed 00:16:29.346 Test: admin_identify_ctrlr_verify_fused ...[2024-04-24 05:12:06.611239] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:29.347 [2024-04-24 05:12:06.614260] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:29.606 passed 00:16:29.606 Test: admin_identify_ns ...[2024-04-24 05:12:06.701919] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:29.606 [2024-04-24 05:12:06.761643] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:29.606 [2024-04-24 05:12:06.769657] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:29.606 [2024-04-24 05:12:06.790789] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:29.606 passed 00:16:29.606 Test: admin_get_features_mandatory_features ...[2024-04-24 05:12:06.873914] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:29.864 [2024-04-24 05:12:06.877950] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:29.864 passed 00:16:29.864 Test: admin_get_features_optional_features ...[2024-04-24 05:12:06.962487] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:29.864 [2024-04-24 05:12:06.965505] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:29.864 passed 00:16:29.864 Test: admin_set_features_number_of_queues ...[2024-04-24 05:12:07.047294] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:30.123 [2024-04-24 05:12:07.150754] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:30.123 passed 00:16:30.123 Test: admin_get_log_page_mandatory_logs ...[2024-04-24 05:12:07.234875] 
vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:30.123 [2024-04-24 05:12:07.237899] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:30.123 passed 00:16:30.123 Test: admin_get_log_page_with_lpo ...[2024-04-24 05:12:07.319147] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:30.123 [2024-04-24 05:12:07.385640] ctrlr.c:2604:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:30.412 [2024-04-24 05:12:07.398725] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:30.412 passed 00:16:30.412 Test: fabric_property_get ...[2024-04-24 05:12:07.481264] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:30.412 [2024-04-24 05:12:07.485902] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:30.412 [2024-04-24 05:12:07.487302] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:30.412 passed 00:16:30.412 Test: admin_delete_io_sq_use_admin_qid ...[2024-04-24 05:12:07.570844] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:30.412 [2024-04-24 05:12:07.572155] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:30.412 [2024-04-24 05:12:07.573867] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:30.412 passed 00:16:30.683 Test: admin_delete_io_sq_delete_sq_twice ...[2024-04-24 05:12:07.660363] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:30.683 [2024-04-24 05:12:07.743649] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:30.683 [2024-04-24 05:12:07.759637] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:30.683 [2024-04-24 
05:12:07.764743] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:30.683 passed 00:16:30.683 Test: admin_delete_io_cq_use_admin_qid ...[2024-04-24 05:12:07.849648] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:30.683 [2024-04-24 05:12:07.850963] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:30.684 [2024-04-24 05:12:07.852687] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:30.684 passed 00:16:30.684 Test: admin_delete_io_cq_delete_cq_first ...[2024-04-24 05:12:07.938455] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:30.941 [2024-04-24 05:12:08.013642] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:30.941 [2024-04-24 05:12:08.037638] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:30.941 [2024-04-24 05:12:08.042747] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:30.941 passed 00:16:30.941 Test: admin_create_io_cq_verify_iv_pc ...[2024-04-24 05:12:08.129377] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:30.941 [2024-04-24 05:12:08.130735] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:30.941 [2024-04-24 05:12:08.130782] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:30.941 [2024-04-24 05:12:08.132403] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:30.941 passed 00:16:31.199 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-04-24 05:12:08.218889] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:31.199 [2024-04-24 05:12:08.314652] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid 
I/O queue size 1 00:16:31.199 [2024-04-24 05:12:08.322654] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:31.199 [2024-04-24 05:12:08.330664] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:31.199 [2024-04-24 05:12:08.335636] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:31.199 [2024-04-24 05:12:08.362752] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:31.199 passed 00:16:31.199 Test: admin_create_io_sq_verify_pc ...[2024-04-24 05:12:08.448959] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:31.199 [2024-04-24 05:12:08.465656] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:31.457 [2024-04-24 05:12:08.483345] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:31.457 passed 00:16:31.457 Test: admin_create_io_qp_max_qps ...[2024-04-24 05:12:08.570898] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:32.832 [2024-04-24 05:12:09.674644] nvme_ctrlr.c:5329:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:16:32.832 [2024-04-24 05:12:10.058362] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:32.832 passed 00:16:33.091 Test: admin_create_io_sq_shared_cq ...[2024-04-24 05:12:10.143005] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:33.091 [2024-04-24 05:12:10.281634] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:33.091 [2024-04-24 05:12:10.318725] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:33.091 passed 00:16:33.091 00:16:33.091 Run Summary: Type Total Ran Passed Failed Inactive 00:16:33.091 suites 1 1 n/a 0 0 
00:16:33.091 tests 18 18 18 0 0 00:16:33.091 asserts 360 360 360 0 n/a 00:16:33.091 00:16:33.091 Elapsed time = 1.577 seconds 00:16:33.351 05:12:10 -- compliance/compliance.sh@42 -- # killprocess 1862755 00:16:33.351 05:12:10 -- common/autotest_common.sh@936 -- # '[' -z 1862755 ']' 00:16:33.351 05:12:10 -- common/autotest_common.sh@940 -- # kill -0 1862755 00:16:33.351 05:12:10 -- common/autotest_common.sh@941 -- # uname 00:16:33.351 05:12:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:33.351 05:12:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1862755 00:16:33.351 05:12:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:33.351 05:12:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:33.351 05:12:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1862755' 00:16:33.351 killing process with pid 1862755 00:16:33.351 05:12:10 -- common/autotest_common.sh@955 -- # kill 1862755 00:16:33.351 05:12:10 -- common/autotest_common.sh@960 -- # wait 1862755 00:16:33.610 05:12:10 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:33.610 05:12:10 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:33.610 00:16:33.610 real 0m5.741s 00:16:33.610 user 0m16.191s 00:16:33.610 sys 0m0.534s 00:16:33.610 05:12:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:33.610 05:12:10 -- common/autotest_common.sh@10 -- # set +x 00:16:33.610 ************************************ 00:16:33.610 END TEST nvmf_vfio_user_nvme_compliance 00:16:33.610 ************************************ 00:16:33.610 05:12:10 -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:33.610 05:12:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:33.610 05:12:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:33.610 05:12:10 -- 
common/autotest_common.sh@10 -- # set +x 00:16:33.610 ************************************ 00:16:33.610 START TEST nvmf_vfio_user_fuzz 00:16:33.610 ************************************ 00:16:33.610 05:12:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:33.610 * Looking for test storage... 00:16:33.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:33.610 05:12:10 -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:33.610 05:12:10 -- nvmf/common.sh@7 -- # uname -s 00:16:33.610 05:12:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:33.610 05:12:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:33.610 05:12:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:33.610 05:12:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:33.610 05:12:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:33.610 05:12:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:33.610 05:12:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:33.610 05:12:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:33.610 05:12:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:33.610 05:12:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:33.610 05:12:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:33.610 05:12:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:33.610 05:12:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:33.610 05:12:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:33.610 05:12:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:33.610 05:12:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:33.610 05:12:10 -- nvmf/common.sh@45 
-- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:33.610 05:12:10 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:33.610 05:12:10 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:33.610 05:12:10 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:33.610 05:12:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.610 05:12:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.610 05:12:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.610 05:12:10 -- paths/export.sh@5 -- # export PATH 00:16:33.610 05:12:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.610 05:12:10 -- nvmf/common.sh@47 -- # : 0 00:16:33.610 05:12:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:33.610 05:12:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:33.610 05:12:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:33.610 05:12:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:33.610 05:12:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:33.610 05:12:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:33.610 05:12:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:33.611 05:12:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:33.611 05:12:10 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:33.611 05:12:10 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:33.611 05:12:10 -- 
target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:33.611 05:12:10 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:33.611 05:12:10 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:33.611 05:12:10 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:33.611 05:12:10 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:33.611 05:12:10 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1863993 00:16:33.611 05:12:10 -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:33.611 05:12:10 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1863993' 00:16:33.611 Process pid: 1863993 00:16:33.611 05:12:10 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:33.611 05:12:10 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1863993 00:16:33.611 05:12:10 -- common/autotest_common.sh@817 -- # '[' -z 1863993 ']' 00:16:33.611 05:12:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.611 05:12:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:33.611 05:12:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:33.611 05:12:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:33.611 05:12:10 -- common/autotest_common.sh@10 -- # set +x 00:16:34.179 05:12:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:34.179 05:12:11 -- common/autotest_common.sh@850 -- # return 0 00:16:34.179 05:12:11 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:35.114 05:12:12 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:35.114 05:12:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:35.114 05:12:12 -- common/autotest_common.sh@10 -- # set +x 00:16:35.114 05:12:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:35.114 05:12:12 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:35.114 05:12:12 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:35.114 05:12:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:35.114 05:12:12 -- common/autotest_common.sh@10 -- # set +x 00:16:35.114 malloc0 00:16:35.114 05:12:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:35.114 05:12:12 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:35.114 05:12:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:35.114 05:12:12 -- common/autotest_common.sh@10 -- # set +x 00:16:35.114 05:12:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:35.114 05:12:12 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:35.114 05:12:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:35.114 05:12:12 -- common/autotest_common.sh@10 -- # set +x 00:16:35.114 05:12:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:35.114 05:12:12 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:35.114 05:12:12 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:16:35.114 05:12:12 -- common/autotest_common.sh@10 -- # set +x 00:16:35.114 05:12:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:35.114 05:12:12 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:35.115 05:12:12 -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:07.179 Fuzzing completed. Shutting down the fuzz application 00:17:07.179 00:17:07.179 Dumping successful admin opcodes: 00:17:07.179 8, 9, 10, 24, 00:17:07.179 Dumping successful io opcodes: 00:17:07.179 0, 00:17:07.179 NS: 0x200003a1ef00 I/O qp, Total commands completed: 562777, total successful commands: 2161, random_seed: 3175911040 00:17:07.179 NS: 0x200003a1ef00 admin qp, Total commands completed: 72190, total successful commands: 569, random_seed: 1043439424 00:17:07.179 05:12:42 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:07.179 05:12:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.179 05:12:42 -- common/autotest_common.sh@10 -- # set +x 00:17:07.179 05:12:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.179 05:12:42 -- target/vfio_user_fuzz.sh@46 -- # killprocess 1863993 00:17:07.179 05:12:42 -- common/autotest_common.sh@936 -- # '[' -z 1863993 ']' 00:17:07.179 05:12:42 -- common/autotest_common.sh@940 -- # kill -0 1863993 00:17:07.179 05:12:42 -- common/autotest_common.sh@941 -- # uname 00:17:07.179 05:12:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:07.179 05:12:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1863993 00:17:07.179 05:12:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:07.179 05:12:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:07.179 
05:12:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1863993' 00:17:07.179 killing process with pid 1863993 00:17:07.179 05:12:42 -- common/autotest_common.sh@955 -- # kill 1863993 00:17:07.179 05:12:42 -- common/autotest_common.sh@960 -- # wait 1863993 00:17:07.179 05:12:42 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:17:07.179 05:12:42 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:07.179 00:17:07.179 real 0m32.191s 00:17:07.179 user 0m30.977s 00:17:07.179 sys 0m28.164s 00:17:07.179 05:12:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:07.179 05:12:42 -- common/autotest_common.sh@10 -- # set +x 00:17:07.179 ************************************ 00:17:07.179 END TEST nvmf_vfio_user_fuzz 00:17:07.179 ************************************ 00:17:07.179 05:12:42 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:17:07.179 05:12:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:07.179 05:12:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:07.179 05:12:42 -- common/autotest_common.sh@10 -- # set +x 00:17:07.179 ************************************ 00:17:07.179 START TEST nvmf_host_management 00:17:07.179 ************************************ 00:17:07.179 05:12:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:17:07.179 * Looking for test storage... 
00:17:07.179 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:07.179 05:12:43 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:07.179 05:12:43 -- nvmf/common.sh@7 -- # uname -s 00:17:07.179 05:12:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:07.179 05:12:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:07.179 05:12:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:07.179 05:12:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:07.179 05:12:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:07.179 05:12:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:07.179 05:12:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:07.179 05:12:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:07.179 05:12:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:07.179 05:12:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:07.179 05:12:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:07.179 05:12:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:07.179 05:12:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:07.179 05:12:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:07.179 05:12:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:07.179 05:12:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:07.179 05:12:43 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:07.179 05:12:43 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:07.179 05:12:43 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:07.179 05:12:43 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:07.179 05:12:43 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.180 05:12:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.180 05:12:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.180 05:12:43 -- paths/export.sh@5 -- # export PATH 00:17:07.180 05:12:43 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.180 05:12:43 -- nvmf/common.sh@47 -- # : 0 00:17:07.180 05:12:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:07.180 05:12:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:07.180 05:12:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:07.180 05:12:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:07.180 05:12:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:07.180 05:12:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:07.180 05:12:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:07.180 05:12:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:07.180 05:12:43 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:07.180 05:12:43 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:07.180 05:12:43 -- target/host_management.sh@105 -- # nvmftestinit 00:17:07.180 05:12:43 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:07.180 05:12:43 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:07.180 05:12:43 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:07.180 05:12:43 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:07.180 05:12:43 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:07.180 05:12:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.180 05:12:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:07.180 05:12:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:17:07.180 05:12:43 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:07.180 05:12:43 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:07.180 05:12:43 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:07.180 05:12:43 -- common/autotest_common.sh@10 -- # set +x 00:17:08.113 05:12:45 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:08.113 05:12:45 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:08.113 05:12:45 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:08.113 05:12:45 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:08.113 05:12:45 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:08.113 05:12:45 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:08.113 05:12:45 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:08.113 05:12:45 -- nvmf/common.sh@295 -- # net_devs=() 00:17:08.113 05:12:45 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:08.113 05:12:45 -- nvmf/common.sh@296 -- # e810=() 00:17:08.113 05:12:45 -- nvmf/common.sh@296 -- # local -ga e810 00:17:08.113 05:12:45 -- nvmf/common.sh@297 -- # x722=() 00:17:08.113 05:12:45 -- nvmf/common.sh@297 -- # local -ga x722 00:17:08.113 05:12:45 -- nvmf/common.sh@298 -- # mlx=() 00:17:08.113 05:12:45 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:08.113 05:12:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:08.113 05:12:45 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:08.113 05:12:45 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:08.113 05:12:45 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:08.113 05:12:45 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:08.113 05:12:45 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:08.113 05:12:45 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:08.113 05:12:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:17:08.113 05:12:45 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:08.113 05:12:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:08.113 05:12:45 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:08.114 05:12:45 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:08.114 05:12:45 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:08.114 05:12:45 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:08.114 05:12:45 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:08.114 05:12:45 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:08.114 05:12:45 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:08.114 05:12:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:08.114 05:12:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:08.114 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:08.114 05:12:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:08.114 05:12:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:08.114 05:12:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.114 05:12:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.114 05:12:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:08.114 05:12:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:08.114 05:12:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:08.114 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:08.114 05:12:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:08.114 05:12:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:08.114 05:12:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.114 05:12:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.114 05:12:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:08.114 05:12:45 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:08.114 05:12:45 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:08.114 
05:12:45 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:08.114 05:12:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:08.114 05:12:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.114 05:12:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:08.114 05:12:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.114 05:12:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:08.114 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:08.114 05:12:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.114 05:12:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:08.114 05:12:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.114 05:12:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:08.114 05:12:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.114 05:12:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:08.114 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:08.114 05:12:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.114 05:12:45 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:08.114 05:12:45 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:08.114 05:12:45 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:08.114 05:12:45 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:08.114 05:12:45 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:08.114 05:12:45 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:08.114 05:12:45 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:08.114 05:12:45 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:08.114 05:12:45 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:08.114 05:12:45 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:08.114 05:12:45 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:08.114 05:12:45 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:08.114 05:12:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:08.114 05:12:45 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:08.114 05:12:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:08.114 05:12:45 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:08.114 05:12:45 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:08.114 05:12:45 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:08.114 05:12:45 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:08.114 05:12:45 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:08.114 05:12:45 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:08.114 05:12:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:08.114 05:12:45 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:08.114 05:12:45 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:08.114 05:12:45 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:08.114 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:08.114 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:17:08.114 00:17:08.114 --- 10.0.0.2 ping statistics --- 00:17:08.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.114 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:17:08.114 05:12:45 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:08.114 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:08.114 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:17:08.114 00:17:08.114 --- 10.0.0.1 ping statistics --- 00:17:08.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.114 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:17:08.114 05:12:45 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:08.114 05:12:45 -- nvmf/common.sh@411 -- # return 0 00:17:08.114 05:12:45 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:08.114 05:12:45 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:08.114 05:12:45 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:08.114 05:12:45 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:08.114 05:12:45 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:08.114 05:12:45 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:08.114 05:12:45 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:08.114 05:12:45 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:17:08.114 05:12:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:08.114 05:12:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:08.114 05:12:45 -- common/autotest_common.sh@10 -- # set +x 00:17:08.372 ************************************ 00:17:08.372 START TEST nvmf_host_management 00:17:08.372 ************************************ 00:17:08.372 05:12:45 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:17:08.372 05:12:45 -- target/host_management.sh@69 -- # starttarget 00:17:08.372 05:12:45 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:17:08.372 05:12:45 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:08.372 05:12:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:08.372 05:12:45 -- common/autotest_common.sh@10 -- # set +x 00:17:08.372 05:12:45 -- nvmf/common.sh@470 -- # nvmfpid=1869453 00:17:08.372 05:12:45 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:08.372 05:12:45 -- nvmf/common.sh@471 -- # waitforlisten 1869453 00:17:08.372 05:12:45 -- common/autotest_common.sh@817 -- # '[' -z 1869453 ']' 00:17:08.372 05:12:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.372 05:12:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:08.372 05:12:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.372 05:12:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:08.372 05:12:45 -- common/autotest_common.sh@10 -- # set +x 00:17:08.372 [2024-04-24 05:12:45.479890] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:17:08.372 [2024-04-24 05:12:45.479986] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:08.372 EAL: No free 2048 kB hugepages reported on node 1 00:17:08.372 [2024-04-24 05:12:45.519151] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:08.372 [2024-04-24 05:12:45.551426] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:08.631 [2024-04-24 05:12:45.645817] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:08.631 [2024-04-24 05:12:45.645883] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:08.631 [2024-04-24 05:12:45.645910] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:08.631 [2024-04-24 05:12:45.645924] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:08.631 [2024-04-24 05:12:45.645936] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:08.631 [2024-04-24 05:12:45.646033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:08.631 [2024-04-24 05:12:45.646096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:08.631 [2024-04-24 05:12:45.646126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:08.631 [2024-04-24 05:12:45.646128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:08.631 05:12:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:08.631 05:12:45 -- common/autotest_common.sh@850 -- # return 0 00:17:08.631 05:12:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:08.631 05:12:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:08.631 05:12:45 -- common/autotest_common.sh@10 -- # set +x 00:17:08.631 05:12:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:08.631 05:12:45 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:08.631 05:12:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:08.631 05:12:45 -- common/autotest_common.sh@10 -- # set +x 00:17:08.631 [2024-04-24 05:12:45.796197] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:08.631 05:12:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:08.631 05:12:45 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:17:08.631 05:12:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:08.631 05:12:45 -- common/autotest_common.sh@10 -- # set +x 00:17:08.631 05:12:45 -- 
target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:08.631 05:12:45 -- target/host_management.sh@23 -- # cat 00:17:08.631 05:12:45 -- target/host_management.sh@30 -- # rpc_cmd 00:17:08.631 05:12:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:08.631 05:12:45 -- common/autotest_common.sh@10 -- # set +x 00:17:08.631 Malloc0 00:17:08.631 [2024-04-24 05:12:45.855301] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:08.631 05:12:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:08.631 05:12:45 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:17:08.631 05:12:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:08.631 05:12:45 -- common/autotest_common.sh@10 -- # set +x 00:17:08.631 05:12:45 -- target/host_management.sh@73 -- # perfpid=1869509 00:17:08.631 05:12:45 -- target/host_management.sh@74 -- # waitforlisten 1869509 /var/tmp/bdevperf.sock 00:17:08.631 05:12:45 -- common/autotest_common.sh@817 -- # '[' -z 1869509 ']' 00:17:08.631 05:12:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:08.631 05:12:45 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:17:08.631 05:12:45 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:08.631 05:12:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:08.631 05:12:45 -- nvmf/common.sh@521 -- # config=() 00:17:08.631 05:12:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:08.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:08.631 05:12:45 -- nvmf/common.sh@521 -- # local subsystem config 00:17:08.631 05:12:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:08.631 05:12:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:08.631 05:12:45 -- common/autotest_common.sh@10 -- # set +x 00:17:08.631 05:12:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:08.631 { 00:17:08.631 "params": { 00:17:08.631 "name": "Nvme$subsystem", 00:17:08.631 "trtype": "$TEST_TRANSPORT", 00:17:08.631 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:08.631 "adrfam": "ipv4", 00:17:08.631 "trsvcid": "$NVMF_PORT", 00:17:08.631 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:08.631 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:08.631 "hdgst": ${hdgst:-false}, 00:17:08.631 "ddgst": ${ddgst:-false} 00:17:08.631 }, 00:17:08.631 "method": "bdev_nvme_attach_controller" 00:17:08.631 } 00:17:08.631 EOF 00:17:08.631 )") 00:17:08.631 05:12:45 -- nvmf/common.sh@543 -- # cat 00:17:08.631 05:12:45 -- nvmf/common.sh@545 -- # jq . 00:17:08.631 05:12:45 -- nvmf/common.sh@546 -- # IFS=, 00:17:08.631 05:12:45 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:08.631 "params": { 00:17:08.631 "name": "Nvme0", 00:17:08.631 "trtype": "tcp", 00:17:08.631 "traddr": "10.0.0.2", 00:17:08.631 "adrfam": "ipv4", 00:17:08.631 "trsvcid": "4420", 00:17:08.631 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:08.631 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:08.631 "hdgst": false, 00:17:08.631 "ddgst": false 00:17:08.631 }, 00:17:08.631 "method": "bdev_nvme_attach_controller" 00:17:08.631 }' 00:17:08.890 [2024-04-24 05:12:45.926337] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 
00:17:08.890 [2024-04-24 05:12:45.926426] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1869509 ] 00:17:08.890 EAL: No free 2048 kB hugepages reported on node 1 00:17:08.890 [2024-04-24 05:12:45.962182] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:08.890 [2024-04-24 05:12:45.991303] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.890 [2024-04-24 05:12:46.076080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.149 Running I/O for 10 seconds... 00:17:09.149 05:12:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:09.149 05:12:46 -- common/autotest_common.sh@850 -- # return 0 00:17:09.149 05:12:46 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:09.149 05:12:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.149 05:12:46 -- common/autotest_common.sh@10 -- # set +x 00:17:09.149 05:12:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.149 05:12:46 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:09.149 05:12:46 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:17:09.149 05:12:46 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:09.149 05:12:46 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:17:09.149 05:12:46 -- target/host_management.sh@52 -- # local ret=1 00:17:09.149 05:12:46 -- target/host_management.sh@53 -- # local i 00:17:09.149 05:12:46 -- target/host_management.sh@54 -- # (( i = 10 )) 00:17:09.149 05:12:46 -- target/host_management.sh@54 -- # (( i != 0 )) 00:17:09.149 05:12:46 -- 
target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:17:09.149 05:12:46 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:17:09.149 05:12:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.149 05:12:46 -- common/autotest_common.sh@10 -- # set +x 00:17:09.149 05:12:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.149 05:12:46 -- target/host_management.sh@55 -- # read_io_count=3 00:17:09.149 05:12:46 -- target/host_management.sh@58 -- # '[' 3 -ge 100 ']' 00:17:09.149 05:12:46 -- target/host_management.sh@62 -- # sleep 0.25 00:17:09.413 05:12:46 -- target/host_management.sh@54 -- # (( i-- )) 00:17:09.413 05:12:46 -- target/host_management.sh@54 -- # (( i != 0 )) 00:17:09.413 05:12:46 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:17:09.413 05:12:46 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:17:09.413 05:12:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.413 05:12:46 -- common/autotest_common.sh@10 -- # set +x 00:17:09.413 05:12:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.413 05:12:46 -- target/host_management.sh@55 -- # read_io_count=387 00:17:09.413 05:12:46 -- target/host_management.sh@58 -- # '[' 387 -ge 100 ']' 00:17:09.413 05:12:46 -- target/host_management.sh@59 -- # ret=0 00:17:09.413 05:12:46 -- target/host_management.sh@60 -- # break 00:17:09.413 05:12:46 -- target/host_management.sh@64 -- # return 0 00:17:09.413 05:12:46 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:09.413 05:12:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.413 05:12:46 -- common/autotest_common.sh@10 -- # set +x 00:17:09.413 [2024-04-24 05:12:46.631555] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1069290 is same with the state(5) to be set 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1069290 is same with the state(5) to be set 00:17:09.414 [2024-04-24 05:12:46.632596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.414 [2024-04-24 05:12:46.632643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.414 [2024-04-24 05:12:46.632686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:49280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.414 [2024-04-24 05:12:46.632705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.414 [2024-04-24 05:12:46.632723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:49408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.414 [2024-04-24 05:12:46.632738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.414 [2024-04-24 05:12:46.632760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:49536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.414 [2024-04-24 05:12:46.632777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.414 [2024-04-24 05:12:46.632794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:49664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.414 [2024-04-24 05:12:46.632809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.414 [2024-04-24 05:12:46.632826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 
nsid:1 lba:49792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.414 [2024-04-24 05:12:46.632841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.414 [2024-04-24 05:12:46.632858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:49920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.414 [2024-04-24 05:12:46.632873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.414 [2024-04-24 05:12:46.632890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:50048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.414 [2024-04-24 05:12:46.632917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.414 [2024-04-24 05:12:46.632950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:50176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.414 [2024-04-24 05:12:46.632965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.414 [2024-04-24 05:12:46.632983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:50304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.414 [2024-04-24 05:12:46.632997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.414 [2024-04-24 05:12:46.633012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:50432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.414 [2024-04-24 05:12:46.633027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:09.414 [2024-04-24 05:12:46.633042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:50560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.414 [2024-04-24 05:12:46.633057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.414 [2024-04-24 05:12:46.633073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:50688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.414 [2024-04-24 05:12:46.633088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.414 [2024-04-24 05:12:46.633103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:50816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.414 [2024-04-24 05:12:46.633118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.414 [2024-04-24 05:12:46.633134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:50944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.414 [2024-04-24 05:12:46.633149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.414 [2024-04-24 05:12:46.633165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:51072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.414 [2024-04-24 05:12:46.633183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.414 [2024-04-24 05:12:46.633200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:51200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.414 [2024-04-24 05:12:46.633215] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.414 [2024-04-24 05:12:46.633231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:51328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.414 [2024-04-24 05:12:46.633246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.414 [2024-04-24 05:12:46.633261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:51456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.414 [2024-04-24 05:12:46.633276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.414 [2024-04-24 05:12:46.633292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:51584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.414 [2024-04-24 05:12:46.633307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.414 [2024-04-24 05:12:46.633323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:51712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.414 [2024-04-24 05:12:46.633338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.414 [2024-04-24 05:12:46.633353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:51840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.414 [2024-04-24 05:12:46.633368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.414 [2024-04-24 05:12:46.633384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:22 nsid:1 lba:51968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.414 [2024-04-24 05:12:46.633399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.414 [2024-04-24 05:12:46.633415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:52096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.414 [2024-04-24 05:12:46.633430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.414 [2024-04-24 05:12:46.633445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:52224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.414 [2024-04-24 05:12:46.633460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.414 [2024-04-24 05:12:46.633476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:52352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.414 [2024-04-24 05:12:46.633490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.414 [2024-04-24 05:12:46.633506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:52480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.414 [2024-04-24 05:12:46.633521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.414 [2024-04-24 05:12:46.633536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:52608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.414 [2024-04-24 05:12:46.633551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:17:09.414 [2024-04-24 05:12:46.633570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:52736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.414 [2024-04-24 05:12:46.633586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.414 [2024-04-24 05:12:46.633601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:52864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.414 [2024-04-24 05:12:46.633637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.414 [2024-04-24 05:12:46.633656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:52992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.414 [2024-04-24 05:12:46.633678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.414 [2024-04-24 05:12:46.633695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:53120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.414 [2024-04-24 05:12:46.633710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.415 [2024-04-24 05:12:46.633726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:53248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.415 [2024-04-24 05:12:46.633742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.415 [2024-04-24 05:12:46.633759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:53376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.415 [2024-04-24 
05:12:46.633774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.415 [2024-04-24 05:12:46.633791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:53504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.415 [2024-04-24 05:12:46.633806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.415 [2024-04-24 05:12:46.633823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:53632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.415 [2024-04-24 05:12:46.633838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.415 [2024-04-24 05:12:46.633855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:53760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.415 [2024-04-24 05:12:46.633870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.415 [2024-04-24 05:12:46.633887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:53888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.415 [2024-04-24 05:12:46.633913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.415 [2024-04-24 05:12:46.633929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:54016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.415 [2024-04-24 05:12:46.633944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.415 [2024-04-24 05:12:46.633961] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:39 nsid:1 lba:54144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.415 [2024-04-24 05:12:46.633977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.415 [2024-04-24 05:12:46.633993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:54272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.415 [2024-04-24 05:12:46.634012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.415 [2024-04-24 05:12:46.634028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:54400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.415 [2024-04-24 05:12:46.634043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.415 [2024-04-24 05:12:46.634060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:54528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.415 [2024-04-24 05:12:46.634075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.415 [2024-04-24 05:12:46.634092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:54656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.415 [2024-04-24 05:12:46.634107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.415 [2024-04-24 05:12:46.634123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:54784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.415 [2024-04-24 05:12:46.634138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.415 [2024-04-24 05:12:46.634155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:54912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.415 [2024-04-24 05:12:46.634170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.415 [2024-04-24 05:12:46.634186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:55040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.415 [2024-04-24 05:12:46.634202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.415 [2024-04-24 05:12:46.634218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:55168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.415 [2024-04-24 05:12:46.634233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.415 [2024-04-24 05:12:46.634249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:55296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.415 [2024-04-24 05:12:46.634265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.415 05:12:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.415 [2024-04-24 05:12:46.634281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:55424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.415 [2024-04-24 05:12:46.634297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.415 [2024-04-24 05:12:46.634313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 
lba:55552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.415 [2024-04-24 05:12:46.634328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.415 [2024-04-24 05:12:46.634344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:55680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.415 [2024-04-24 05:12:46.634360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.415 [2024-04-24 05:12:46.634376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:55808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.415 [2024-04-24 05:12:46.634391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.415 [2024-04-24 05:12:46.634412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:55936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.415 05:12:46 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:09.415 [2024-04-24 05:12:46.634429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.415 [2024-04-24 05:12:46.634445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:56064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.415 [2024-04-24 05:12:46.634460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.415 [2024-04-24 05:12:46.634476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:56192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.415 [2024-04-24 
05:12:46.634492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.415 [2024-04-24 05:12:46.634508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:56320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.415 [2024-04-24 05:12:46.634524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.415 [2024-04-24 05:12:46.634540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:56448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.415 05:12:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.415 [2024-04-24 05:12:46.634556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.415 [2024-04-24 05:12:46.634572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:56576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.415 [2024-04-24 05:12:46.634587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.415 [2024-04-24 05:12:46.634603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:56704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.415 [2024-04-24 05:12:46.634619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.415 [2024-04-24 05:12:46.634640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:56832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.415 [2024-04-24 05:12:46.634657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.415 
05:12:46 -- common/autotest_common.sh@10 -- # set +x 00:17:09.415 [2024-04-24 05:12:46.634679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:56960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.415 [2024-04-24 05:12:46.634695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.415 [2024-04-24 05:12:46.634712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:57088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.415 [2024-04-24 05:12:46.634727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.415 [2024-04-24 05:12:46.634743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:57216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.415 [2024-04-24 05:12:46.634759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.415 [2024-04-24 05:12:46.634775] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1daa510 is same with the state(5) to be set 00:17:09.415 [2024-04-24 05:12:46.634856] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1daa510 was disconnected and freed. reset controller. 
00:17:09.415 [2024-04-24 05:12:46.636040] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:17:09.415 task offset: 49152 on job bdev=Nvme0n1 fails
00:17:09.415
00:17:09.415 Latency(us)
00:17:09.415 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:09.415 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:09.415 Job: Nvme0n1 ended in about 0.39 seconds with error
00:17:09.415 Verification LBA range: start 0x0 length 0x400
00:17:09.415 Nvme0n1 : 0.39 996.03 62.25 166.01 0.00 53582.29 12718.84 46409.20
00:17:09.415 ===================================================================================================================
00:17:09.415 Total : 996.03 62.25 166.01 0.00 53582.29 12718.84 46409.20
00:17:09.415 [2024-04-24 05:12:46.638326] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:17:09.415 [2024-04-24 05:12:46.638357] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1999610 (9): Bad file descriptor
00:17:09.415 05:12:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:17:09.415 05:12:46 -- target/host_management.sh@87 -- # sleep 1
00:17:09.415 [2024-04-24 05:12:46.650783] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:17:10.808 05:12:47 -- target/host_management.sh@91 -- # kill -9 1869509 00:17:10.808 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1869509) - No such process 00:17:10.808 05:12:47 -- target/host_management.sh@91 -- # true 00:17:10.808 05:12:47 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:17:10.808 05:12:47 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:17:10.808 05:12:47 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:17:10.808 05:12:47 -- nvmf/common.sh@521 -- # config=() 00:17:10.808 05:12:47 -- nvmf/common.sh@521 -- # local subsystem config 00:17:10.808 05:12:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:10.808 05:12:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:10.808 { 00:17:10.808 "params": { 00:17:10.808 "name": "Nvme$subsystem", 00:17:10.808 "trtype": "$TEST_TRANSPORT", 00:17:10.808 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:10.808 "adrfam": "ipv4", 00:17:10.808 "trsvcid": "$NVMF_PORT", 00:17:10.808 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:10.808 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:10.808 "hdgst": ${hdgst:-false}, 00:17:10.808 "ddgst": ${ddgst:-false} 00:17:10.808 }, 00:17:10.808 "method": "bdev_nvme_attach_controller" 00:17:10.808 } 00:17:10.808 EOF 00:17:10.808 )") 00:17:10.808 05:12:47 -- nvmf/common.sh@543 -- # cat 00:17:10.808 05:12:47 -- nvmf/common.sh@545 -- # jq . 
00:17:10.808 05:12:47 -- nvmf/common.sh@546 -- # IFS=, 00:17:10.808 05:12:47 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:10.808 "params": { 00:17:10.808 "name": "Nvme0", 00:17:10.808 "trtype": "tcp", 00:17:10.808 "traddr": "10.0.0.2", 00:17:10.808 "adrfam": "ipv4", 00:17:10.808 "trsvcid": "4420", 00:17:10.808 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:10.808 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:10.808 "hdgst": false, 00:17:10.808 "ddgst": false 00:17:10.808 }, 00:17:10.808 "method": "bdev_nvme_attach_controller" 00:17:10.808 }' 00:17:10.808 [2024-04-24 05:12:47.685272] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:17:10.808 [2024-04-24 05:12:47.685349] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1869775 ] 00:17:10.808 EAL: No free 2048 kB hugepages reported on node 1 00:17:10.808 [2024-04-24 05:12:47.716903] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:10.808 [2024-04-24 05:12:47.745966] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.808 [2024-04-24 05:12:47.832646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.067 Running I/O for 1 seconds... 
00:17:12.003
00:17:12.003 Latency(us)
00:17:12.003 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:12.003 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:12.003 Verification LBA range: start 0x0 length 0x400
00:17:12.003 Nvme0n1 : 1.04 1214.52 75.91 0.00 0.00 51641.68 8009.96 44467.39
00:17:12.003 ===================================================================================================================
00:17:12.003 Total : 1214.52 75.91 0.00 0.00 51641.68 8009.96 44467.39
00:17:12.261 05:12:49 -- target/host_management.sh@102 -- # stoptarget
00:17:12.261 05:12:49 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:17:12.261 05:12:49 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:17:12.261 05:12:49 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:17:12.261 05:12:49 -- target/host_management.sh@40 -- # nvmftestfini
00:17:12.262 05:12:49 -- nvmf/common.sh@477 -- # nvmfcleanup
00:17:12.262 05:12:49 -- nvmf/common.sh@117 -- # sync
00:17:12.262 05:12:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:17:12.262 05:12:49 -- nvmf/common.sh@120 -- # set +e
00:17:12.262 05:12:49 -- nvmf/common.sh@121 -- # for i in {1..20}
00:17:12.262 05:12:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
05:12:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:17:12.520 05:12:49 -- nvmf/common.sh@124 -- # set -e
00:17:12.520 05:12:49 -- nvmf/common.sh@125 -- # return 0
00:17:12.520 05:12:49 -- nvmf/common.sh@478 -- # '[' -n 1869453 ']'
00:17:12.520 05:12:49 -- nvmf/common.sh@479 -- # killprocess 1869453
00:17:12.520 05:12:49 -- common/autotest_common.sh@936 -- # '[' -z 1869453 ']'
00:17:12.520 05:12:49 -- 
common/autotest_common.sh@940 -- # kill -0 1869453 00:17:12.520 05:12:49 -- common/autotest_common.sh@941 -- # uname 00:17:12.520 05:12:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:12.520 05:12:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1869453 00:17:12.520 05:12:49 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:12.520 05:12:49 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:12.520 05:12:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1869453' 00:17:12.520 killing process with pid 1869453 00:17:12.520 05:12:49 -- common/autotest_common.sh@955 -- # kill 1869453 00:17:12.520 05:12:49 -- common/autotest_common.sh@960 -- # wait 1869453 00:17:12.520 [2024-04-24 05:12:49.787634] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:17:12.778 05:12:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:12.778 05:12:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:12.778 05:12:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:12.778 05:12:49 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:12.778 05:12:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:12.778 05:12:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:12.778 05:12:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:12.778 05:12:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.679 05:12:51 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:14.679 00:17:14.679 real 0m6.432s 00:17:14.679 user 0m18.239s 00:17:14.679 sys 0m1.266s 00:17:14.679 05:12:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:14.679 05:12:51 -- common/autotest_common.sh@10 -- # set +x 00:17:14.679 ************************************ 00:17:14.679 END TEST nvmf_host_management 00:17:14.679 ************************************ 00:17:14.679 05:12:51 -- 
target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:17:14.679 00:17:14.679 real 0m8.790s 00:17:14.679 user 0m19.120s 00:17:14.679 sys 0m2.757s 00:17:14.679 05:12:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:14.679 05:12:51 -- common/autotest_common.sh@10 -- # set +x 00:17:14.679 ************************************ 00:17:14.679 END TEST nvmf_host_management 00:17:14.679 ************************************ 00:17:14.679 05:12:51 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:14.679 05:12:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:14.679 05:12:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:14.679 05:12:51 -- common/autotest_common.sh@10 -- # set +x 00:17:14.937 ************************************ 00:17:14.937 START TEST nvmf_lvol 00:17:14.937 ************************************ 00:17:14.937 05:12:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:14.937 * Looking for test storage... 
00:17:14.937 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:14.937 05:12:52 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:14.937 05:12:52 -- nvmf/common.sh@7 -- # uname -s 00:17:14.937 05:12:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:14.937 05:12:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:14.937 05:12:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:14.937 05:12:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:14.937 05:12:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:14.937 05:12:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:14.937 05:12:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:14.937 05:12:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:14.937 05:12:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:14.937 05:12:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:14.937 05:12:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:14.937 05:12:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:14.937 05:12:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:14.937 05:12:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:14.937 05:12:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:14.937 05:12:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:14.937 05:12:52 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:14.937 05:12:52 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:14.937 05:12:52 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:14.937 05:12:52 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:14.937 05:12:52 -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.937 05:12:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.937 05:12:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.937 05:12:52 -- paths/export.sh@5 -- # export PATH 00:17:14.937 05:12:52 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.937 05:12:52 -- nvmf/common.sh@47 -- # : 0 00:17:14.937 05:12:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:14.937 05:12:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:14.937 05:12:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:14.937 05:12:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:14.937 05:12:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:14.937 05:12:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:14.937 05:12:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:14.937 05:12:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:14.937 05:12:52 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:14.937 05:12:52 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:14.937 05:12:52 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:17:14.937 05:12:52 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:17:14.937 05:12:52 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:14.937 05:12:52 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:17:14.937 05:12:52 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:14.937 05:12:52 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:14.937 05:12:52 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:14.937 05:12:52 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:14.937 05:12:52 -- nvmf/common.sh@401 -- # remove_spdk_ns 
00:17:14.937 05:12:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.937 05:12:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:14.937 05:12:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.937 05:12:52 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:14.937 05:12:52 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:14.937 05:12:52 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:14.937 05:12:52 -- common/autotest_common.sh@10 -- # set +x 00:17:16.836 05:12:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:16.836 05:12:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:16.836 05:12:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:16.836 05:12:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:16.836 05:12:53 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:16.836 05:12:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:16.836 05:12:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:16.836 05:12:53 -- nvmf/common.sh@295 -- # net_devs=() 00:17:16.836 05:12:53 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:16.836 05:12:53 -- nvmf/common.sh@296 -- # e810=() 00:17:16.836 05:12:53 -- nvmf/common.sh@296 -- # local -ga e810 00:17:16.836 05:12:53 -- nvmf/common.sh@297 -- # x722=() 00:17:16.836 05:12:53 -- nvmf/common.sh@297 -- # local -ga x722 00:17:16.836 05:12:53 -- nvmf/common.sh@298 -- # mlx=() 00:17:16.836 05:12:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:16.836 05:12:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:16.836 05:12:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:16.836 05:12:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:16.836 05:12:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:16.836 05:12:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:16.836 05:12:53 -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:16.836 05:12:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:16.836 05:12:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:16.836 05:12:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:16.836 05:12:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:16.836 05:12:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:16.836 05:12:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:16.836 05:12:53 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:16.836 05:12:53 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:16.836 05:12:53 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:16.836 05:12:53 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:16.837 05:12:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:16.837 05:12:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:16.837 05:12:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:16.837 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:16.837 05:12:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:16.837 05:12:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:16.837 05:12:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:16.837 05:12:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:16.837 05:12:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:16.837 05:12:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:16.837 05:12:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:16.837 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:16.837 05:12:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:16.837 05:12:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:16.837 05:12:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:16.837 05:12:53 -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:16.837 05:12:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:16.837 05:12:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:16.837 05:12:53 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:16.837 05:12:53 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:16.837 05:12:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:16.837 05:12:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.837 05:12:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:16.837 05:12:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.837 05:12:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:16.837 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:16.837 05:12:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.837 05:12:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:16.837 05:12:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.837 05:12:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:16.837 05:12:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.837 05:12:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:16.837 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:16.837 05:12:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.837 05:12:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:16.837 05:12:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:16.837 05:12:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:16.837 05:12:53 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:16.837 05:12:53 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:16.837 05:12:53 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:16.837 05:12:53 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:16.837 05:12:53 -- nvmf/common.sh@231 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:16.837 05:12:53 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:16.837 05:12:53 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:16.837 05:12:53 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:16.837 05:12:53 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:16.837 05:12:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:16.837 05:12:53 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:16.837 05:12:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:16.837 05:12:53 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:16.837 05:12:53 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:16.837 05:12:53 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:16.837 05:12:54 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:16.837 05:12:54 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:16.837 05:12:54 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:16.837 05:12:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:16.837 05:12:54 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:16.837 05:12:54 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:16.837 05:12:54 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:16.837 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:16.837 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:17:16.837 00:17:16.837 --- 10.0.0.2 ping statistics --- 00:17:16.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.837 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:17:16.837 05:12:54 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:16.837 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:16.837 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:17:16.837 00:17:16.837 --- 10.0.0.1 ping statistics --- 00:17:16.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.837 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:17:16.837 05:12:54 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:16.837 05:12:54 -- nvmf/common.sh@411 -- # return 0 00:17:16.837 05:12:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:16.837 05:12:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:16.837 05:12:54 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:16.837 05:12:54 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:16.837 05:12:54 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:16.837 05:12:54 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:16.837 05:12:54 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:17.095 05:12:54 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:17:17.095 05:12:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:17.095 05:12:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:17.095 05:12:54 -- common/autotest_common.sh@10 -- # set +x 00:17:17.095 05:12:54 -- nvmf/common.sh@470 -- # nvmfpid=1871996 00:17:17.095 05:12:54 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:17.095 05:12:54 -- nvmf/common.sh@471 -- # waitforlisten 1871996 00:17:17.095 05:12:54 -- common/autotest_common.sh@817 -- # '[' -z 1871996 ']' 00:17:17.095 05:12:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.095 05:12:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:17.095 05:12:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:17.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.095 05:12:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:17.095 05:12:54 -- common/autotest_common.sh@10 -- # set +x 00:17:17.095 [2024-04-24 05:12:54.157960] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:17:17.095 [2024-04-24 05:12:54.158054] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:17.095 EAL: No free 2048 kB hugepages reported on node 1 00:17:17.095 [2024-04-24 05:12:54.196232] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:17.095 [2024-04-24 05:12:54.225588] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:17.095 [2024-04-24 05:12:54.315084] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:17.095 [2024-04-24 05:12:54.315136] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:17.095 [2024-04-24 05:12:54.315175] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:17.095 [2024-04-24 05:12:54.315190] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:17.095 [2024-04-24 05:12:54.315203] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:17.095 [2024-04-24 05:12:54.315299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:17.095 [2024-04-24 05:12:54.315350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:17.095 [2024-04-24 05:12:54.315367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.353 05:12:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:17.353 05:12:54 -- common/autotest_common.sh@850 -- # return 0 00:17:17.353 05:12:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:17.353 05:12:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:17.353 05:12:54 -- common/autotest_common.sh@10 -- # set +x 00:17:17.353 05:12:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:17.353 05:12:54 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:17.611 [2024-04-24 05:12:54.670835] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:17.611 05:12:54 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:17.868 05:12:54 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:17:17.868 05:12:54 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:18.125 05:12:55 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:17:18.125 05:12:55 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:17:18.381 05:12:55 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:17:18.640 05:12:55 -- target/nvmf_lvol.sh@29 -- # lvs=3f5a6389-b4a8-41a5-9db0-84df2a2bcea9 00:17:18.640 05:12:55 -- target/nvmf_lvol.sh@32 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3f5a6389-b4a8-41a5-9db0-84df2a2bcea9 lvol 20 00:17:18.898 05:12:55 -- target/nvmf_lvol.sh@32 -- # lvol=82ba978c-939b-4f80-a2ed-374df5dbb3b7 00:17:18.898 05:12:55 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:19.156 05:12:56 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 82ba978c-939b-4f80-a2ed-374df5dbb3b7 00:17:19.413 05:12:56 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:19.671 [2024-04-24 05:12:56.684966] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:19.671 05:12:56 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:19.928 05:12:56 -- target/nvmf_lvol.sh@42 -- # perf_pid=1872299 00:17:19.928 05:12:56 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:17:19.928 05:12:56 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:17:19.928 EAL: No free 2048 kB hugepages reported on node 1 00:17:20.861 05:12:57 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 82ba978c-939b-4f80-a2ed-374df5dbb3b7 MY_SNAPSHOT 00:17:21.120 05:12:58 -- target/nvmf_lvol.sh@47 -- # snapshot=95e8277d-52c6-4450-822e-983d4782e534 00:17:21.120 05:12:58 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 
82ba978c-939b-4f80-a2ed-374df5dbb3b7 30
00:17:21.378 05:12:58 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 95e8277d-52c6-4450-822e-983d4782e534 MY_CLONE
00:17:21.636 05:12:58 -- target/nvmf_lvol.sh@49 -- # clone=846f4dde-8336-43e7-8788-0f264db7ee4a
00:17:21.636 05:12:58 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 846f4dde-8336-43e7-8788-0f264db7ee4a
00:17:22.202 05:12:59 -- target/nvmf_lvol.sh@53 -- # wait 1872299
00:17:30.337 Initializing NVMe Controllers
00:17:30.337 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:17:30.337 Controller IO queue size 128, less than required.
00:17:30.337 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:30.337 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:17:30.337 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:17:30.337 Initialization complete. Launching workers.
00:17:30.337 ========================================================
00:17:30.337 Latency(us)
00:17:30.337 Device Information : IOPS MiB/s Average min max
00:17:30.337 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10381.12 40.55 12333.51 538.28 84025.17
00:17:30.337 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10437.92 40.77 12269.56 2215.32 66827.99
00:17:30.337 ========================================================
00:17:30.337 Total : 20819.05 81.32 12301.45 538.28 84025.17
00:17:30.337
00:17:30.337 05:13:07 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:17:30.595 05:13:07 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 82ba978c-939b-4f80-a2ed-374df5dbb3b7
00:17:30.852 05:13:07 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3f5a6389-b4a8-41a5-9db0-84df2a2bcea9
00:17:31.110 05:13:08 -- target/nvmf_lvol.sh@60 -- # rm -f
00:17:31.110 05:13:08 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:17:31.110 05:13:08 -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:17:31.110 05:13:08 -- nvmf/common.sh@477 -- # nvmfcleanup
00:17:31.110 05:13:08 -- nvmf/common.sh@117 -- # sync
00:17:31.110 05:13:08 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:17:31.110 05:13:08 -- nvmf/common.sh@120 -- # set +e
00:17:31.110 05:13:08 -- nvmf/common.sh@121 -- # for i in {1..20}
00:17:31.110 05:13:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
05:13:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:17:31.110 05:13:08 -- nvmf/common.sh@124 -- # set -e
00:17:31.110 05:13:08 -- nvmf/common.sh@125 -- # return 0
00:17:31.110 05:13:08 -- nvmf/common.sh@478 -- # '[' -n
1871996 ']' 00:17:31.110 05:13:08 -- nvmf/common.sh@479 -- # killprocess 1871996 00:17:31.110 05:13:08 -- common/autotest_common.sh@936 -- # '[' -z 1871996 ']' 00:17:31.110 05:13:08 -- common/autotest_common.sh@940 -- # kill -0 1871996 00:17:31.110 05:13:08 -- common/autotest_common.sh@941 -- # uname 00:17:31.110 05:13:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:31.110 05:13:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1871996 00:17:31.110 05:13:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:31.110 05:13:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:31.110 05:13:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1871996' 00:17:31.110 killing process with pid 1871996 00:17:31.110 05:13:08 -- common/autotest_common.sh@955 -- # kill 1871996 00:17:31.110 05:13:08 -- common/autotest_common.sh@960 -- # wait 1871996 00:17:31.368 05:13:08 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:31.368 05:13:08 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:31.368 05:13:08 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:31.368 05:13:08 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:31.368 05:13:08 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:31.368 05:13:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:31.368 05:13:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:31.368 05:13:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.901 05:13:10 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:33.901 00:17:33.901 real 0m18.656s 00:17:33.901 user 1m3.388s 00:17:33.901 sys 0m5.693s 00:17:33.901 05:13:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:33.901 05:13:10 -- common/autotest_common.sh@10 -- # set +x 00:17:33.901 ************************************ 00:17:33.901 END TEST nvmf_lvol 00:17:33.901 ************************************ 
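In the spdk_nvme_perf summary above, the MiB/s column is simply IOPS times the IO size: the run used `-o 4096`, so 10381.12 IOPS on core 3 works out to 40.55 MiB/s and the 20819.05 IOPS total to 81.32 MiB/s. A quick awk check, with the figures copied from the table:

```shell
# MiB/s = IOPS * io_size / 2^20; figures taken from the perf summary above.
awk 'BEGIN {
    io_size = 4096                               # spdk_nvme_perf -o 4096
    printf "core3 %.2f\n", 10381.12 * io_size / 1048576
    printf "total %.2f\n", 20819.05 * io_size / 1048576
}'
```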
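The nvmf_lvol test above walks one lvol lifecycle: two malloc bdevs into a raid0, an lvstore on the raid, a 20 MiB lvol, a snapshot, a resize past the snapshot, a clone of the snapshot, and finally an inflate to decouple the clone. A dry-run sketch of that sequence (here `rpc` only echoes; the `*_UUID` placeholders stand in for the UUIDs each call returns, and you would point `rpc` at `scripts/rpc.py` to run it against a live target):

```shell
# Dry-run sketch of the lvol lifecycle exercised by nvmf_lvol.sh.
rpc() { echo "rpc.py $*"; }

rpc bdev_malloc_create 64 512                        # Malloc0
rpc bdev_malloc_create 64 512                        # Malloc1
rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "Malloc0 Malloc1"
rpc bdev_lvol_create_lvstore raid0 lvs               # prints the lvstore UUID
rpc bdev_lvol_create -u LVS_UUID lvol 20             # 20 MiB thin lvol
rpc bdev_lvol_snapshot LVOL_UUID MY_SNAPSHOT
rpc bdev_lvol_resize LVOL_UUID 30                    # grow beyond the snapshot
rpc bdev_lvol_clone SNAP_UUID MY_CLONE
rpc bdev_lvol_inflate CLONE_UUID                     # detach clone from snapshot
```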
00:17:33.901 05:13:10 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:33.901 05:13:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:33.901 05:13:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:33.901 05:13:10 -- common/autotest_common.sh@10 -- # set +x 00:17:33.901 ************************************ 00:17:33.901 START TEST nvmf_lvs_grow 00:17:33.901 ************************************ 00:17:33.901 05:13:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:33.901 * Looking for test storage... 00:17:33.901 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:33.901 05:13:10 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:33.901 05:13:10 -- nvmf/common.sh@7 -- # uname -s 00:17:33.901 05:13:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:33.901 05:13:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:33.901 05:13:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:33.901 05:13:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:33.901 05:13:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:33.901 05:13:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:33.901 05:13:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:33.901 05:13:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:33.901 05:13:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:33.901 05:13:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:33.901 05:13:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:33.901 05:13:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:33.901 05:13:10 -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:33.901 05:13:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:33.901 05:13:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:33.901 05:13:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:33.901 05:13:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:33.901 05:13:10 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:33.901 05:13:10 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:33.901 05:13:10 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:33.901 05:13:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.901 05:13:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.901 05:13:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.901 05:13:10 -- paths/export.sh@5 -- # export PATH 00:17:33.901 05:13:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.901 05:13:10 -- nvmf/common.sh@47 -- # : 0 00:17:33.901 05:13:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:33.901 05:13:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:33.901 05:13:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:33.901 05:13:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:33.901 05:13:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:33.901 05:13:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:33.901 05:13:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:33.901 05:13:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:33.901 05:13:10 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:33.901 05:13:10 -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:33.901 05:13:10 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:17:33.901 05:13:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:33.901 05:13:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:33.901 05:13:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:33.901 05:13:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:33.901 05:13:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:33.901 05:13:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.901 05:13:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:33.901 05:13:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.901 05:13:10 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:33.901 05:13:10 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:33.901 05:13:10 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:33.901 05:13:10 -- common/autotest_common.sh@10 -- # set +x 00:17:35.822 05:13:12 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:35.822 05:13:12 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:35.822 05:13:12 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:35.822 05:13:12 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:35.822 05:13:12 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:35.822 05:13:12 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:35.822 05:13:12 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:35.822 05:13:12 -- nvmf/common.sh@295 -- # net_devs=() 00:17:35.822 05:13:12 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:35.822 05:13:12 -- nvmf/common.sh@296 -- # e810=() 00:17:35.822 05:13:12 -- nvmf/common.sh@296 -- # local -ga e810 00:17:35.822 05:13:12 -- nvmf/common.sh@297 -- # x722=() 00:17:35.822 05:13:12 -- nvmf/common.sh@297 -- # local -ga x722 00:17:35.822 05:13:12 -- nvmf/common.sh@298 -- # mlx=() 00:17:35.822 05:13:12 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:35.822 05:13:12 -- 
nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:35.822 05:13:12 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:35.822 05:13:12 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:35.822 05:13:12 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:35.822 05:13:12 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:35.822 05:13:12 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:35.822 05:13:12 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:35.822 05:13:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:35.822 05:13:12 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:35.822 05:13:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:35.822 05:13:12 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:35.822 05:13:12 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:35.822 05:13:12 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:35.822 05:13:12 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:35.822 05:13:12 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:35.822 05:13:12 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:35.822 05:13:12 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:35.822 05:13:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:35.822 05:13:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:35.822 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:35.822 05:13:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:35.822 05:13:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:35.822 05:13:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:35.822 05:13:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:35.822 05:13:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:35.822 
05:13:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:35.822 05:13:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:35.822 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:35.822 05:13:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:35.822 05:13:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:35.822 05:13:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:35.822 05:13:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:35.822 05:13:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:35.822 05:13:12 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:35.822 05:13:12 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:35.822 05:13:12 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:35.822 05:13:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:35.822 05:13:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:35.822 05:13:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:35.822 05:13:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:35.822 05:13:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:35.822 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:35.822 05:13:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:35.823 05:13:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:35.823 05:13:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:35.823 05:13:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:35.823 05:13:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:35.823 05:13:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:35.823 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:35.823 05:13:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:35.823 05:13:12 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:35.823 05:13:12 -- 
nvmf/common.sh@403 -- # is_hw=yes 00:17:35.823 05:13:12 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:35.823 05:13:12 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:35.823 05:13:12 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:35.823 05:13:12 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:35.823 05:13:12 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:35.823 05:13:12 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:35.823 05:13:12 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:35.823 05:13:12 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:35.823 05:13:12 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:35.823 05:13:12 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:35.823 05:13:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:35.823 05:13:12 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:35.823 05:13:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:35.823 05:13:12 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:35.823 05:13:12 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:35.823 05:13:12 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:35.823 05:13:12 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:35.823 05:13:12 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:35.823 05:13:12 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:35.823 05:13:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:35.823 05:13:12 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:35.823 05:13:12 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:35.823 05:13:12 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:35.823 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:35.823 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:17:35.823 00:17:35.823 --- 10.0.0.2 ping statistics --- 00:17:35.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.823 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:17:35.823 05:13:12 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:35.823 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:35.823 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:17:35.823 00:17:35.823 --- 10.0.0.1 ping statistics --- 00:17:35.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.823 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:17:35.823 05:13:12 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:35.823 05:13:12 -- nvmf/common.sh@411 -- # return 0 00:17:35.823 05:13:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:35.823 05:13:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:35.823 05:13:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:35.823 05:13:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:35.823 05:13:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:35.823 05:13:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:35.823 05:13:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:35.823 05:13:12 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:17:35.823 05:13:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:35.823 05:13:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:35.823 05:13:12 -- common/autotest_common.sh@10 -- # set +x 00:17:35.823 05:13:12 -- nvmf/common.sh@470 -- # nvmfpid=1875566 00:17:35.823 05:13:12 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:35.823 05:13:12 -- nvmf/common.sh@471 -- # waitforlisten 1875566 00:17:35.823 05:13:12 -- 
common/autotest_common.sh@817 -- # '[' -z 1875566 ']' 00:17:35.823 05:13:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.823 05:13:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:35.823 05:13:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:35.823 05:13:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:35.823 05:13:12 -- common/autotest_common.sh@10 -- # set +x 00:17:35.823 [2024-04-24 05:13:13.005892] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:17:35.823 [2024-04-24 05:13:13.005971] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:35.823 EAL: No free 2048 kB hugepages reported on node 1 00:17:35.823 [2024-04-24 05:13:13.049468] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:35.823 [2024-04-24 05:13:13.081308] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.081 [2024-04-24 05:13:13.175601] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:36.081 [2024-04-24 05:13:13.175689] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:36.081 [2024-04-24 05:13:13.175719] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:36.081 [2024-04-24 05:13:13.175731] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:17:36.081 [2024-04-24 05:13:13.175740] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:36.081 [2024-04-24 05:13:13.175777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.081 05:13:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:36.081 05:13:13 -- common/autotest_common.sh@850 -- # return 0 00:17:36.081 05:13:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:36.081 05:13:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:36.081 05:13:13 -- common/autotest_common.sh@10 -- # set +x 00:17:36.081 05:13:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:36.081 05:13:13 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:36.338 [2024-04-24 05:13:13.552844] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:36.338 05:13:13 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:17:36.338 05:13:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:36.338 05:13:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:36.338 05:13:13 -- common/autotest_common.sh@10 -- # set +x 00:17:36.596 ************************************ 00:17:36.596 START TEST lvs_grow_clean 00:17:36.596 ************************************ 00:17:36.596 05:13:13 -- common/autotest_common.sh@1111 -- # lvs_grow 00:17:36.596 05:13:13 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:36.596 05:13:13 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:36.596 05:13:13 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:36.596 05:13:13 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:36.596 05:13:13 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:36.596 05:13:13 -- 
target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:36.596 05:13:13 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:36.596 05:13:13 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:36.596 05:13:13 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:36.853 05:13:13 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:36.853 05:13:13 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:37.110 05:13:14 -- target/nvmf_lvs_grow.sh@28 -- # lvs=6ddbd448-dde3-466e-a75c-ec3f8262b462 00:17:37.110 05:13:14 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6ddbd448-dde3-466e-a75c-ec3f8262b462 00:17:37.110 05:13:14 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:37.368 05:13:14 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:37.368 05:13:14 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:37.368 05:13:14 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6ddbd448-dde3-466e-a75c-ec3f8262b462 lvol 150 00:17:37.626 05:13:14 -- target/nvmf_lvs_grow.sh@33 -- # lvol=96597b1c-45be-4a2c-9355-b3dd6dd689bb 00:17:37.626 05:13:14 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:37.626 05:13:14 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:37.883 
[2024-04-24 05:13:14.923876] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:37.883 [2024-04-24 05:13:14.923993] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:37.883 true 00:17:37.883 05:13:14 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6ddbd448-dde3-466e-a75c-ec3f8262b462 00:17:37.883 05:13:14 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:38.142 05:13:15 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:38.142 05:13:15 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:38.401 05:13:15 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 96597b1c-45be-4a2c-9355-b3dd6dd689bb 00:17:38.660 05:13:15 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:38.918 [2024-04-24 05:13:15.930983] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:38.918 05:13:15 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:38.918 05:13:16 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1876013 00:17:38.918 05:13:16 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:38.918 05:13:16 -- target/nvmf_lvs_grow.sh@49 -- # trap 
'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:38.918 05:13:16 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1876013 /var/tmp/bdevperf.sock 00:17:38.918 05:13:16 -- common/autotest_common.sh@817 -- # '[' -z 1876013 ']' 00:17:38.918 05:13:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:38.918 05:13:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:38.918 05:13:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:38.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:38.918 05:13:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:38.918 05:13:16 -- common/autotest_common.sh@10 -- # set +x 00:17:39.177 [2024-04-24 05:13:16.229500] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:17:39.177 [2024-04-24 05:13:16.229590] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1876013 ] 00:17:39.177 EAL: No free 2048 kB hugepages reported on node 1 00:17:39.177 [2024-04-24 05:13:16.265793] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:17:39.177 [2024-04-24 05:13:16.295598] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.177 [2024-04-24 05:13:16.384508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:39.436 05:13:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:39.436 05:13:16 -- common/autotest_common.sh@850 -- # return 0 00:17:39.436 05:13:16 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:39.695 Nvme0n1 00:17:39.953 05:13:16 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:40.212 [ 00:17:40.212 { 00:17:40.212 "name": "Nvme0n1", 00:17:40.212 "aliases": [ 00:17:40.212 "96597b1c-45be-4a2c-9355-b3dd6dd689bb" 00:17:40.212 ], 00:17:40.212 "product_name": "NVMe disk", 00:17:40.212 "block_size": 4096, 00:17:40.212 "num_blocks": 38912, 00:17:40.212 "uuid": "96597b1c-45be-4a2c-9355-b3dd6dd689bb", 00:17:40.212 "assigned_rate_limits": { 00:17:40.212 "rw_ios_per_sec": 0, 00:17:40.212 "rw_mbytes_per_sec": 0, 00:17:40.212 "r_mbytes_per_sec": 0, 00:17:40.212 "w_mbytes_per_sec": 0 00:17:40.212 }, 00:17:40.212 "claimed": false, 00:17:40.212 "zoned": false, 00:17:40.212 "supported_io_types": { 00:17:40.212 "read": true, 00:17:40.212 "write": true, 00:17:40.212 "unmap": true, 00:17:40.212 "write_zeroes": true, 00:17:40.212 "flush": true, 00:17:40.212 "reset": true, 00:17:40.212 "compare": true, 00:17:40.212 "compare_and_write": true, 00:17:40.212 "abort": true, 00:17:40.212 "nvme_admin": true, 00:17:40.212 "nvme_io": true 00:17:40.212 }, 00:17:40.212 "memory_domains": [ 00:17:40.212 { 00:17:40.212 "dma_device_id": "system", 00:17:40.212 "dma_device_type": 1 00:17:40.212 } 00:17:40.212 ], 00:17:40.212 "driver_specific": { 00:17:40.212 "nvme": [ 00:17:40.212 { 
00:17:40.212 "trid": { 00:17:40.212 "trtype": "TCP", 00:17:40.212 "adrfam": "IPv4", 00:17:40.212 "traddr": "10.0.0.2", 00:17:40.212 "trsvcid": "4420", 00:17:40.212 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:40.212 }, 00:17:40.212 "ctrlr_data": { 00:17:40.212 "cntlid": 1, 00:17:40.212 "vendor_id": "0x8086", 00:17:40.212 "model_number": "SPDK bdev Controller", 00:17:40.212 "serial_number": "SPDK0", 00:17:40.212 "firmware_revision": "24.05", 00:17:40.212 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:40.212 "oacs": { 00:17:40.212 "security": 0, 00:17:40.212 "format": 0, 00:17:40.212 "firmware": 0, 00:17:40.212 "ns_manage": 0 00:17:40.212 }, 00:17:40.212 "multi_ctrlr": true, 00:17:40.212 "ana_reporting": false 00:17:40.212 }, 00:17:40.212 "vs": { 00:17:40.212 "nvme_version": "1.3" 00:17:40.212 }, 00:17:40.212 "ns_data": { 00:17:40.212 "id": 1, 00:17:40.212 "can_share": true 00:17:40.212 } 00:17:40.212 } 00:17:40.212 ], 00:17:40.212 "mp_policy": "active_passive" 00:17:40.212 } 00:17:40.212 } 00:17:40.212 ] 00:17:40.212 05:13:17 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1876149 00:17:40.212 05:13:17 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:40.212 05:13:17 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:40.212 Running I/O for 10 seconds... 
00:17:41.147 Latency(us) 00:17:41.147 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.147 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:41.147 Nvme0n1 : 1.00 14427.00 56.36 0.00 0.00 0.00 0.00 0.00 00:17:41.147 =================================================================================================================== 00:17:41.147 Total : 14427.00 56.36 0.00 0.00 0.00 0.00 0.00 00:17:41.147 00:17:42.090 05:13:19 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6ddbd448-dde3-466e-a75c-ec3f8262b462 00:17:42.090 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:42.090 Nvme0n1 : 2.00 14557.50 56.87 0.00 0.00 0.00 0.00 0.00 00:17:42.090 =================================================================================================================== 00:17:42.090 Total : 14557.50 56.87 0.00 0.00 0.00 0.00 0.00 00:17:42.090 00:17:42.349 true 00:17:42.349 05:13:19 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6ddbd448-dde3-466e-a75c-ec3f8262b462 00:17:42.349 05:13:19 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:42.607 05:13:19 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:42.607 05:13:19 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:42.607 05:13:19 -- target/nvmf_lvs_grow.sh@65 -- # wait 1876149 00:17:43.181 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:43.181 Nvme0n1 : 3.00 14710.00 57.46 0.00 0.00 0.00 0.00 0.00 00:17:43.181 =================================================================================================================== 00:17:43.181 Total : 14710.00 57.46 0.00 0.00 0.00 0.00 0.00 00:17:43.181 00:17:44.114 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:44.114 
Nvme0n1 : 4.00 14703.50 57.44 0.00 0.00 0.00 0.00 0.00 00:17:44.114 =================================================================================================================== 00:17:44.114 Total : 14703.50 57.44 0.00 0.00 0.00 0.00 0.00 00:17:44.114 00:17:45.486 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:45.486 Nvme0n1 : 5.00 14829.40 57.93 0.00 0.00 0.00 0.00 0.00 00:17:45.486 =================================================================================================================== 00:17:45.486 Total : 14829.40 57.93 0.00 0.00 0.00 0.00 0.00 00:17:45.486 00:17:46.422 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:46.422 Nvme0n1 : 6.00 14855.00 58.03 0.00 0.00 0.00 0.00 0.00 00:17:46.422 =================================================================================================================== 00:17:46.422 Total : 14855.00 58.03 0.00 0.00 0.00 0.00 0.00 00:17:46.422 00:17:47.357 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:47.357 Nvme0n1 : 7.00 14885.57 58.15 0.00 0.00 0.00 0.00 0.00 00:17:47.357 =================================================================================================================== 00:17:47.357 Total : 14885.57 58.15 0.00 0.00 0.00 0.00 0.00 00:17:47.357 00:17:48.292 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:48.292 Nvme0n1 : 8.00 14950.88 58.40 0.00 0.00 0.00 0.00 0.00 00:17:48.292 =================================================================================================================== 00:17:48.292 Total : 14950.88 58.40 0.00 0.00 0.00 0.00 0.00 00:17:48.292 00:17:49.225 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:49.225 Nvme0n1 : 9.00 14942.67 58.37 0.00 0.00 0.00 0.00 0.00 00:17:49.225 =================================================================================================================== 
00:17:49.225 Total : 14942.67 58.37 0.00 0.00 0.00 0.00 0.00 00:17:49.225 00:17:50.157 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:50.158 Nvme0n1 : 10.00 14994.00 58.57 0.00 0.00 0.00 0.00 0.00 00:17:50.158 =================================================================================================================== 00:17:50.158 Total : 14994.00 58.57 0.00 0.00 0.00 0.00 0.00 00:17:50.158 00:17:50.158 00:17:50.158 Latency(us) 00:17:50.158 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.158 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:50.158 Nvme0n1 : 10.01 14996.62 58.58 0.00 0.00 8530.61 2208.81 16990.81 00:17:50.158 =================================================================================================================== 00:17:50.158 Total : 14996.62 58.58 0.00 0.00 8530.61 2208.81 16990.81 00:17:50.158 0 00:17:50.158 05:13:27 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1876013 00:17:50.158 05:13:27 -- common/autotest_common.sh@936 -- # '[' -z 1876013 ']' 00:17:50.158 05:13:27 -- common/autotest_common.sh@940 -- # kill -0 1876013 00:17:50.158 05:13:27 -- common/autotest_common.sh@941 -- # uname 00:17:50.158 05:13:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:50.158 05:13:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1876013 00:17:50.158 05:13:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:50.158 05:13:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:50.158 05:13:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1876013' 00:17:50.158 killing process with pid 1876013 00:17:50.158 05:13:27 -- common/autotest_common.sh@955 -- # kill 1876013 00:17:50.158 Received shutdown signal, test time was about 10.000000 seconds 00:17:50.158 00:17:50.158 Latency(us) 00:17:50.158 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:17:50.158 =================================================================================================================== 00:17:50.158 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:50.158 05:13:27 -- common/autotest_common.sh@960 -- # wait 1876013 00:17:50.415 05:13:27 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:50.672 05:13:27 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6ddbd448-dde3-466e-a75c-ec3f8262b462 00:17:50.672 05:13:27 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:17:50.929 05:13:28 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:17:50.929 05:13:28 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:17:50.929 05:13:28 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:51.187 [2024-04-24 05:13:28.399503] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:51.187 05:13:28 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6ddbd448-dde3-466e-a75c-ec3f8262b462 00:17:51.187 05:13:28 -- common/autotest_common.sh@638 -- # local es=0 00:17:51.187 05:13:28 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6ddbd448-dde3-466e-a75c-ec3f8262b462 00:17:51.187 05:13:28 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:51.187 05:13:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:51.187 05:13:28 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:51.187 05:13:28 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:51.187 05:13:28 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:51.187 05:13:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:51.187 05:13:28 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:51.187 05:13:28 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:51.187 05:13:28 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6ddbd448-dde3-466e-a75c-ec3f8262b462 00:17:51.444 request: 00:17:51.444 { 00:17:51.444 "uuid": "6ddbd448-dde3-466e-a75c-ec3f8262b462", 00:17:51.444 "method": "bdev_lvol_get_lvstores", 00:17:51.444 "req_id": 1 00:17:51.444 } 00:17:51.444 Got JSON-RPC error response 00:17:51.444 response: 00:17:51.444 { 00:17:51.444 "code": -19, 00:17:51.444 "message": "No such device" 00:17:51.444 } 00:17:51.701 05:13:28 -- common/autotest_common.sh@641 -- # es=1 00:17:51.701 05:13:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:51.701 05:13:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:51.701 05:13:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:51.701 05:13:28 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:51.701 aio_bdev 00:17:51.701 05:13:28 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 96597b1c-45be-4a2c-9355-b3dd6dd689bb 00:17:51.701 05:13:28 -- common/autotest_common.sh@885 -- # local bdev_name=96597b1c-45be-4a2c-9355-b3dd6dd689bb 00:17:51.701 05:13:28 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:51.701 05:13:28 -- common/autotest_common.sh@887 -- # local i 00:17:51.701 
05:13:28 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:51.701 05:13:28 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:51.701 05:13:28 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:51.958 05:13:29 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 96597b1c-45be-4a2c-9355-b3dd6dd689bb -t 2000 00:17:52.524 [ 00:17:52.524 { 00:17:52.524 "name": "96597b1c-45be-4a2c-9355-b3dd6dd689bb", 00:17:52.524 "aliases": [ 00:17:52.524 "lvs/lvol" 00:17:52.524 ], 00:17:52.524 "product_name": "Logical Volume", 00:17:52.524 "block_size": 4096, 00:17:52.524 "num_blocks": 38912, 00:17:52.524 "uuid": "96597b1c-45be-4a2c-9355-b3dd6dd689bb", 00:17:52.524 "assigned_rate_limits": { 00:17:52.524 "rw_ios_per_sec": 0, 00:17:52.524 "rw_mbytes_per_sec": 0, 00:17:52.524 "r_mbytes_per_sec": 0, 00:17:52.524 "w_mbytes_per_sec": 0 00:17:52.524 }, 00:17:52.524 "claimed": false, 00:17:52.524 "zoned": false, 00:17:52.524 "supported_io_types": { 00:17:52.524 "read": true, 00:17:52.524 "write": true, 00:17:52.524 "unmap": true, 00:17:52.524 "write_zeroes": true, 00:17:52.524 "flush": false, 00:17:52.524 "reset": true, 00:17:52.524 "compare": false, 00:17:52.524 "compare_and_write": false, 00:17:52.524 "abort": false, 00:17:52.524 "nvme_admin": false, 00:17:52.524 "nvme_io": false 00:17:52.524 }, 00:17:52.524 "driver_specific": { 00:17:52.524 "lvol": { 00:17:52.524 "lvol_store_uuid": "6ddbd448-dde3-466e-a75c-ec3f8262b462", 00:17:52.524 "base_bdev": "aio_bdev", 00:17:52.524 "thin_provision": false, 00:17:52.524 "snapshot": false, 00:17:52.524 "clone": false, 00:17:52.524 "esnap_clone": false 00:17:52.524 } 00:17:52.524 } 00:17:52.524 } 00:17:52.524 ] 00:17:52.524 05:13:29 -- common/autotest_common.sh@893 -- # return 0 00:17:52.524 05:13:29 -- target/nvmf_lvs_grow.sh@87 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6ddbd448-dde3-466e-a75c-ec3f8262b462 00:17:52.524 05:13:29 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:17:52.524 05:13:29 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:17:52.524 05:13:29 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6ddbd448-dde3-466e-a75c-ec3f8262b462 00:17:52.524 05:13:29 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:17:52.782 05:13:29 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:17:52.782 05:13:29 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 96597b1c-45be-4a2c-9355-b3dd6dd689bb 00:17:53.040 05:13:30 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6ddbd448-dde3-466e-a75c-ec3f8262b462 00:17:53.298 05:13:30 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:53.556 05:13:30 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:53.556 00:17:53.556 real 0m17.094s 00:17:53.556 user 0m16.612s 00:17:53.556 sys 0m1.862s 00:17:53.556 05:13:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:53.556 05:13:30 -- common/autotest_common.sh@10 -- # set +x 00:17:53.556 ************************************ 00:17:53.556 END TEST lvs_grow_clean 00:17:53.556 ************************************ 00:17:53.556 05:13:30 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:53.556 05:13:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:53.556 05:13:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:53.556 05:13:30 -- common/autotest_common.sh@10 -- # set +x 
00:17:53.814 ************************************ 00:17:53.814 START TEST lvs_grow_dirty 00:17:53.814 ************************************ 00:17:53.814 05:13:30 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:17:53.814 05:13:30 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:53.814 05:13:30 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:53.814 05:13:30 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:53.814 05:13:30 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:53.814 05:13:30 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:53.814 05:13:30 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:53.814 05:13:30 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:53.814 05:13:30 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:53.814 05:13:30 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:54.073 05:13:31 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:54.073 05:13:31 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:54.331 05:13:31 -- target/nvmf_lvs_grow.sh@28 -- # lvs=6d9eb772-2729-418b-a4a6-32758eb09d37 00:17:54.331 05:13:31 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d9eb772-2729-418b-a4a6-32758eb09d37 00:17:54.331 05:13:31 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:54.607 05:13:31 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:54.607 
05:13:31 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:54.607 05:13:31 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6d9eb772-2729-418b-a4a6-32758eb09d37 lvol 150 00:17:54.881 05:13:31 -- target/nvmf_lvs_grow.sh@33 -- # lvol=12f13521-f49d-4bef-b4e0-fb782542f289 00:17:54.881 05:13:31 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:54.881 05:13:31 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:55.139 [2024-04-24 05:13:32.198821] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:55.139 [2024-04-24 05:13:32.198907] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:55.139 true 00:17:55.139 05:13:32 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d9eb772-2729-418b-a4a6-32758eb09d37 00:17:55.139 05:13:32 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:55.397 05:13:32 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:55.397 05:13:32 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:55.655 05:13:32 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 12f13521-f49d-4bef-b4e0-fb782542f289 00:17:55.913 05:13:33 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
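The `data_clusters == 49` check above follows from the sizes the test chose: a 200M backing file carved into 4 MiB clusters, with (judging by the counts in this log, an assumption on my part) one cluster reserved for lvstore metadata. A minimal sketch of that accounting, including the 400M size the file is truncated to later:

```shell
# Expected data-cluster counts for the aio-backed lvstore, assuming one
# 4 MiB cluster is consumed by lvstore metadata (inferred from the log).
cluster_sz=4194304                                  # --cluster-sz from the log
before=$(( 200 * 1024 * 1024 / cluster_sz - 1 ))    # 200M backing file -> 49
after=$((  400 * 1024 * 1024 / cluster_sz - 1 ))    # after truncate -s 400M -> 99
echo "$before $after"
```

This matches the `data_clusters == 49` assertion here and the `data_clusters == 99` assertion after `bdev_lvol_grow_lvstore` further down.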
00:17:56.170 05:13:33 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:56.428 05:13:33 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1878067 00:17:56.428 05:13:33 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:56.428 05:13:33 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:56.428 05:13:33 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1878067 /var/tmp/bdevperf.sock 00:17:56.428 05:13:33 -- common/autotest_common.sh@817 -- # '[' -z 1878067 ']' 00:17:56.428 05:13:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:56.428 05:13:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:56.428 05:13:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:56.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:56.428 05:13:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:56.428 05:13:33 -- common/autotest_common.sh@10 -- # set +x 00:17:56.428 [2024-04-24 05:13:33.577839] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:17:56.428 [2024-04-24 05:13:33.577927] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1878067 ] 00:17:56.428 EAL: No free 2048 kB hugepages reported on node 1 00:17:56.428 [2024-04-24 05:13:33.609216] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:17:56.428 [2024-04-24 05:13:33.639438] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.685 [2024-04-24 05:13:33.729896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:56.685 05:13:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:56.685 05:13:33 -- common/autotest_common.sh@850 -- # return 0 00:17:56.685 05:13:33 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:57.250 Nvme0n1 00:17:57.250 05:13:34 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:57.250 [ 00:17:57.250 { 00:17:57.250 "name": "Nvme0n1", 00:17:57.250 "aliases": [ 00:17:57.250 "12f13521-f49d-4bef-b4e0-fb782542f289" 00:17:57.250 ], 00:17:57.250 "product_name": "NVMe disk", 00:17:57.250 "block_size": 4096, 00:17:57.250 "num_blocks": 38912, 00:17:57.250 "uuid": "12f13521-f49d-4bef-b4e0-fb782542f289", 00:17:57.250 "assigned_rate_limits": { 00:17:57.250 "rw_ios_per_sec": 0, 00:17:57.250 "rw_mbytes_per_sec": 0, 00:17:57.250 "r_mbytes_per_sec": 0, 00:17:57.250 "w_mbytes_per_sec": 0 00:17:57.250 }, 00:17:57.250 "claimed": false, 00:17:57.250 "zoned": false, 00:17:57.250 "supported_io_types": { 00:17:57.250 "read": true, 00:17:57.250 "write": true, 00:17:57.250 "unmap": true, 00:17:57.250 "write_zeroes": true, 00:17:57.250 "flush": true, 00:17:57.250 "reset": true, 00:17:57.250 "compare": true, 00:17:57.250 "compare_and_write": true, 00:17:57.250 "abort": true, 00:17:57.251 "nvme_admin": true, 00:17:57.251 "nvme_io": true 00:17:57.251 }, 00:17:57.251 "memory_domains": [ 00:17:57.251 { 00:17:57.251 "dma_device_id": "system", 00:17:57.251 "dma_device_type": 1 00:17:57.251 } 00:17:57.251 ], 00:17:57.251 "driver_specific": { 
00:17:57.251 "nvme": [ 00:17:57.251 { 00:17:57.251 "trid": { 00:17:57.251 "trtype": "TCP", 00:17:57.251 "adrfam": "IPv4", 00:17:57.251 "traddr": "10.0.0.2", 00:17:57.251 "trsvcid": "4420", 00:17:57.251 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:57.251 }, 00:17:57.251 "ctrlr_data": { 00:17:57.251 "cntlid": 1, 00:17:57.251 "vendor_id": "0x8086", 00:17:57.251 "model_number": "SPDK bdev Controller", 00:17:57.251 "serial_number": "SPDK0", 00:17:57.251 "firmware_revision": "24.05", 00:17:57.251 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:57.251 "oacs": { 00:17:57.251 "security": 0, 00:17:57.251 "format": 0, 00:17:57.251 "firmware": 0, 00:17:57.251 "ns_manage": 0 00:17:57.251 }, 00:17:57.251 "multi_ctrlr": true, 00:17:57.251 "ana_reporting": false 00:17:57.251 }, 00:17:57.251 "vs": { 00:17:57.251 "nvme_version": "1.3" 00:17:57.251 }, 00:17:57.251 "ns_data": { 00:17:57.251 "id": 1, 00:17:57.251 "can_share": true 00:17:57.251 } 00:17:57.251 } 00:17:57.251 ], 00:17:57.251 "mp_policy": "active_passive" 00:17:57.251 } 00:17:57.251 } 00:17:57.251 ] 00:17:57.509 05:13:34 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1878201 00:17:57.509 05:13:34 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:57.509 05:13:34 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:57.509 Running I/O for 10 seconds... 
00:17:58.444 Latency(us) 00:17:58.444 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:58.444 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:58.444 Nvme0n1 : 1.00 14021.00 54.77 0.00 0.00 0.00 0.00 0.00 00:17:58.444 =================================================================================================================== 00:17:58.444 Total : 14021.00 54.77 0.00 0.00 0.00 0.00 0.00 00:17:58.444 00:17:59.378 05:13:36 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6d9eb772-2729-418b-a4a6-32758eb09d37 00:17:59.378 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:59.378 Nvme0n1 : 2.00 14143.00 55.25 0.00 0.00 0.00 0.00 0.00 00:17:59.378 =================================================================================================================== 00:17:59.378 Total : 14143.00 55.25 0.00 0.00 0.00 0.00 0.00 00:17:59.378 00:17:59.637 true 00:17:59.637 05:13:36 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d9eb772-2729-418b-a4a6-32758eb09d37 00:17:59.637 05:13:36 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:59.895 05:13:37 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:59.895 05:13:37 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:59.895 05:13:37 -- target/nvmf_lvs_grow.sh@65 -- # wait 1878201 00:18:00.461 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:00.461 Nvme0n1 : 3.00 14245.33 55.65 0.00 0.00 0.00 0.00 0.00 00:18:00.461 =================================================================================================================== 00:18:00.461 Total : 14245.33 55.65 0.00 0.00 0.00 0.00 0.00 00:18:00.461 00:18:01.395 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:01.395 
Nvme0n1 : 4.00 14329.00 55.97 0.00 0.00 0.00 0.00 0.00 00:18:01.395 =================================================================================================================== 00:18:01.395 Total : 14329.00 55.97 0.00 0.00 0.00 0.00 0.00 00:18:01.395 00:18:02.768 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:02.768 Nvme0n1 : 5.00 14394.60 56.23 0.00 0.00 0.00 0.00 0.00 00:18:02.768 =================================================================================================================== 00:18:02.768 Total : 14394.60 56.23 0.00 0.00 0.00 0.00 0.00 00:18:02.768 00:18:03.700 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:03.700 Nvme0n1 : 6.00 14434.00 56.38 0.00 0.00 0.00 0.00 0.00 00:18:03.700 =================================================================================================================== 00:18:03.700 Total : 14434.00 56.38 0.00 0.00 0.00 0.00 0.00 00:18:03.700 00:18:04.633 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:04.633 Nvme0n1 : 7.00 14546.14 56.82 0.00 0.00 0.00 0.00 0.00 00:18:04.633 =================================================================================================================== 00:18:04.633 Total : 14546.14 56.82 0.00 0.00 0.00 0.00 0.00 00:18:04.633 00:18:05.566 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:05.566 Nvme0n1 : 8.00 14607.62 57.06 0.00 0.00 0.00 0.00 0.00 00:18:05.566 =================================================================================================================== 00:18:05.566 Total : 14607.62 57.06 0.00 0.00 0.00 0.00 0.00 00:18:05.566 00:18:06.499 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:06.499 Nvme0n1 : 9.00 14655.00 57.25 0.00 0.00 0.00 0.00 0.00 00:18:06.499 =================================================================================================================== 
00:18:06.499 Total : 14655.00 57.25 0.00 0.00 0.00 0.00 0.00 00:18:06.499 00:18:07.433 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:07.433 Nvme0n1 : 10.00 14669.50 57.30 0.00 0.00 0.00 0.00 0.00 00:18:07.433 =================================================================================================================== 00:18:07.433 Total : 14669.50 57.30 0.00 0.00 0.00 0.00 0.00 00:18:07.433 00:18:07.433 00:18:07.433 Latency(us) 00:18:07.433 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:07.433 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:07.433 Nvme0n1 : 10.00 14673.57 57.32 0.00 0.00 8717.86 4805.97 16505.36 00:18:07.433 =================================================================================================================== 00:18:07.433 Total : 14673.57 57.32 0.00 0.00 8717.86 4805.97 16505.36 00:18:07.433 0 00:18:07.433 05:13:44 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1878067 00:18:07.433 05:13:44 -- common/autotest_common.sh@936 -- # '[' -z 1878067 ']' 00:18:07.433 05:13:44 -- common/autotest_common.sh@940 -- # kill -0 1878067 00:18:07.433 05:13:44 -- common/autotest_common.sh@941 -- # uname 00:18:07.433 05:13:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:07.433 05:13:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1878067 00:18:07.433 05:13:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:07.433 05:13:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:07.433 05:13:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1878067' 00:18:07.433 killing process with pid 1878067 00:18:07.433 05:13:44 -- common/autotest_common.sh@955 -- # kill 1878067 00:18:07.433 Received shutdown signal, test time was about 10.000000 seconds 00:18:07.433 00:18:07.433 Latency(us) 00:18:07.433 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:18:07.433 =================================================================================================================== 00:18:07.433 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:07.433 05:13:44 -- common/autotest_common.sh@960 -- # wait 1878067 00:18:07.705 05:13:44 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:08.003 05:13:45 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d9eb772-2729-418b-a4a6-32758eb09d37 00:18:08.003 05:13:45 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:18:08.261 05:13:45 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:18:08.261 05:13:45 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:18:08.261 05:13:45 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 1875566 00:18:08.262 05:13:45 -- target/nvmf_lvs_grow.sh@74 -- # wait 1875566 00:18:08.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 1875566 Killed "${NVMF_APP[@]}" "$@" 00:18:08.262 05:13:45 -- target/nvmf_lvs_grow.sh@74 -- # true 00:18:08.262 05:13:45 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:18:08.262 05:13:45 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:08.262 05:13:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:08.262 05:13:45 -- common/autotest_common.sh@10 -- # set +x 00:18:08.262 05:13:45 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:08.262 05:13:45 -- nvmf/common.sh@470 -- # nvmfpid=1879513 00:18:08.262 05:13:45 -- nvmf/common.sh@471 -- # waitforlisten 1879513 00:18:08.262 05:13:45 -- common/autotest_common.sh@817 -- # '[' -z 1879513 ']' 00:18:08.262 05:13:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.262 
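The MiB/s column in the bdevperf tables above is derived directly from IOPS: with 4096-byte I/Os, MiB/s = IOPS × 4096 / 1048576 = IOPS / 256. A quick check against the final 10-second average reported in the log:

```shell
# Convert the logged IOPS figure to MiB/s for 4 KiB I/Os.
iops=14673.57                                       # final average from the log
mibs=$(awk -v i="$iops" 'BEGIN { printf "%.2f", i / 256 }')
echo "$mibs"                                        # agrees with the 57.32 MiB/s column
```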
05:13:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:08.262 05:13:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:08.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:08.262 05:13:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:08.262 05:13:45 -- common/autotest_common.sh@10 -- # set +x 00:18:08.262 [2024-04-24 05:13:45.503799] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:18:08.262 [2024-04-24 05:13:45.503880] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:08.520 EAL: No free 2048 kB hugepages reported on node 1 00:18:08.520 [2024-04-24 05:13:45.544911] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:08.520 [2024-04-24 05:13:45.571172] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.520 [2024-04-24 05:13:45.658068] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:08.520 [2024-04-24 05:13:45.658128] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:08.520 [2024-04-24 05:13:45.658142] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:08.520 [2024-04-24 05:13:45.658154] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:08.520 [2024-04-24 05:13:45.658164] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:08.520 [2024-04-24 05:13:45.658221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.520 05:13:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:08.520 05:13:45 -- common/autotest_common.sh@850 -- # return 0 00:18:08.520 05:13:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:08.521 05:13:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:08.521 05:13:45 -- common/autotest_common.sh@10 -- # set +x 00:18:08.779 05:13:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:08.779 05:13:45 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:09.037 [2024-04-24 05:13:46.057260] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:18:09.037 [2024-04-24 05:13:46.057385] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:18:09.037 [2024-04-24 05:13:46.057431] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:18:09.037 05:13:46 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:18:09.037 05:13:46 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 12f13521-f49d-4bef-b4e0-fb782542f289 00:18:09.037 05:13:46 -- common/autotest_common.sh@885 -- # local bdev_name=12f13521-f49d-4bef-b4e0-fb782542f289 00:18:09.037 05:13:46 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:09.037 05:13:46 -- common/autotest_common.sh@887 -- # local i 00:18:09.037 05:13:46 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:09.037 05:13:46 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:09.037 05:13:46 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:09.296 05:13:46 -- common/autotest_common.sh@892 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 12f13521-f49d-4bef-b4e0-fb782542f289 -t 2000 00:18:09.296 [ 00:18:09.296 { 00:18:09.296 "name": "12f13521-f49d-4bef-b4e0-fb782542f289", 00:18:09.296 "aliases": [ 00:18:09.296 "lvs/lvol" 00:18:09.296 ], 00:18:09.296 "product_name": "Logical Volume", 00:18:09.296 "block_size": 4096, 00:18:09.296 "num_blocks": 38912, 00:18:09.296 "uuid": "12f13521-f49d-4bef-b4e0-fb782542f289", 00:18:09.296 "assigned_rate_limits": { 00:18:09.296 "rw_ios_per_sec": 0, 00:18:09.296 "rw_mbytes_per_sec": 0, 00:18:09.296 "r_mbytes_per_sec": 0, 00:18:09.296 "w_mbytes_per_sec": 0 00:18:09.296 }, 00:18:09.296 "claimed": false, 00:18:09.296 "zoned": false, 00:18:09.296 "supported_io_types": { 00:18:09.296 "read": true, 00:18:09.296 "write": true, 00:18:09.296 "unmap": true, 00:18:09.296 "write_zeroes": true, 00:18:09.296 "flush": false, 00:18:09.296 "reset": true, 00:18:09.296 "compare": false, 00:18:09.296 "compare_and_write": false, 00:18:09.296 "abort": false, 00:18:09.296 "nvme_admin": false, 00:18:09.296 "nvme_io": false 00:18:09.296 }, 00:18:09.296 "driver_specific": { 00:18:09.296 "lvol": { 00:18:09.296 "lvol_store_uuid": "6d9eb772-2729-418b-a4a6-32758eb09d37", 00:18:09.296 "base_bdev": "aio_bdev", 00:18:09.296 "thin_provision": false, 00:18:09.296 "snapshot": false, 00:18:09.296 "clone": false, 00:18:09.296 "esnap_clone": false 00:18:09.296 } 00:18:09.296 } 00:18:09.296 } 00:18:09.296 ] 00:18:09.296 05:13:46 -- common/autotest_common.sh@893 -- # return 0 00:18:09.555 05:13:46 -- target/nvmf_lvs_grow.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d9eb772-2729-418b-a4a6-32758eb09d37 00:18:09.555 05:13:46 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:18:09.555 05:13:46 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:18:09.555 05:13:46 -- target/nvmf_lvs_grow.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d9eb772-2729-418b-a4a6-32758eb09d37 00:18:09.555 05:13:46 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:18:09.813 05:13:47 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:18:09.813 05:13:47 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:10.071 [2024-04-24 05:13:47.270360] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:10.071 05:13:47 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d9eb772-2729-418b-a4a6-32758eb09d37 00:18:10.071 05:13:47 -- common/autotest_common.sh@638 -- # local es=0 00:18:10.071 05:13:47 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d9eb772-2729-418b-a4a6-32758eb09d37 00:18:10.071 05:13:47 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:10.071 05:13:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:10.071 05:13:47 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:10.071 05:13:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:10.071 05:13:47 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:10.071 05:13:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:10.071 05:13:47 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:10.071 05:13:47 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:10.071 
05:13:47 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d9eb772-2729-418b-a4a6-32758eb09d37 00:18:10.329 request: 00:18:10.329 { 00:18:10.329 "uuid": "6d9eb772-2729-418b-a4a6-32758eb09d37", 00:18:10.329 "method": "bdev_lvol_get_lvstores", 00:18:10.329 "req_id": 1 00:18:10.329 } 00:18:10.329 Got JSON-RPC error response 00:18:10.329 response: 00:18:10.329 { 00:18:10.329 "code": -19, 00:18:10.329 "message": "No such device" 00:18:10.329 } 00:18:10.329 05:13:47 -- common/autotest_common.sh@641 -- # es=1 00:18:10.329 05:13:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:10.329 05:13:47 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:10.329 05:13:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:10.329 05:13:47 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:10.587 aio_bdev 00:18:10.587 05:13:47 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 12f13521-f49d-4bef-b4e0-fb782542f289 00:18:10.587 05:13:47 -- common/autotest_common.sh@885 -- # local bdev_name=12f13521-f49d-4bef-b4e0-fb782542f289 00:18:10.587 05:13:47 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:10.587 05:13:47 -- common/autotest_common.sh@887 -- # local i 00:18:10.587 05:13:47 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:10.587 05:13:47 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:10.587 05:13:47 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:10.845 05:13:48 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 12f13521-f49d-4bef-b4e0-fb782542f289 -t 2000 00:18:11.103 [ 00:18:11.103 { 00:18:11.103 "name": 
"12f13521-f49d-4bef-b4e0-fb782542f289", 00:18:11.103 "aliases": [ 00:18:11.103 "lvs/lvol" 00:18:11.103 ], 00:18:11.103 "product_name": "Logical Volume", 00:18:11.103 "block_size": 4096, 00:18:11.103 "num_blocks": 38912, 00:18:11.103 "uuid": "12f13521-f49d-4bef-b4e0-fb782542f289", 00:18:11.103 "assigned_rate_limits": { 00:18:11.103 "rw_ios_per_sec": 0, 00:18:11.103 "rw_mbytes_per_sec": 0, 00:18:11.103 "r_mbytes_per_sec": 0, 00:18:11.103 "w_mbytes_per_sec": 0 00:18:11.103 }, 00:18:11.103 "claimed": false, 00:18:11.103 "zoned": false, 00:18:11.103 "supported_io_types": { 00:18:11.103 "read": true, 00:18:11.103 "write": true, 00:18:11.103 "unmap": true, 00:18:11.103 "write_zeroes": true, 00:18:11.103 "flush": false, 00:18:11.103 "reset": true, 00:18:11.103 "compare": false, 00:18:11.103 "compare_and_write": false, 00:18:11.103 "abort": false, 00:18:11.103 "nvme_admin": false, 00:18:11.103 "nvme_io": false 00:18:11.103 }, 00:18:11.103 "driver_specific": { 00:18:11.103 "lvol": { 00:18:11.103 "lvol_store_uuid": "6d9eb772-2729-418b-a4a6-32758eb09d37", 00:18:11.103 "base_bdev": "aio_bdev", 00:18:11.103 "thin_provision": false, 00:18:11.103 "snapshot": false, 00:18:11.103 "clone": false, 00:18:11.103 "esnap_clone": false 00:18:11.103 } 00:18:11.103 } 00:18:11.103 } 00:18:11.103 ] 00:18:11.103 05:13:48 -- common/autotest_common.sh@893 -- # return 0 00:18:11.103 05:13:48 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d9eb772-2729-418b-a4a6-32758eb09d37 00:18:11.103 05:13:48 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:18:11.361 05:13:48 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:18:11.361 05:13:48 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d9eb772-2729-418b-a4a6-32758eb09d37 00:18:11.361 05:13:48 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:18:11.618 
05:13:48 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:18:11.618 05:13:48 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 12f13521-f49d-4bef-b4e0-fb782542f289 00:18:11.876 05:13:49 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6d9eb772-2729-418b-a4a6-32758eb09d37 00:18:12.134 05:13:49 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:12.392 05:13:49 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:12.653 00:18:12.653 real 0m18.799s 00:18:12.653 user 0m47.576s 00:18:12.653 sys 0m4.674s 00:18:12.653 05:13:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:12.653 05:13:49 -- common/autotest_common.sh@10 -- # set +x 00:18:12.653 ************************************ 00:18:12.653 END TEST lvs_grow_dirty 00:18:12.653 ************************************ 00:18:12.653 05:13:49 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:18:12.653 05:13:49 -- common/autotest_common.sh@794 -- # type=--id 00:18:12.653 05:13:49 -- common/autotest_common.sh@795 -- # id=0 00:18:12.653 05:13:49 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:18:12.653 05:13:49 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:12.653 05:13:49 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:18:12.653 05:13:49 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:18:12.653 05:13:49 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:18:12.653 05:13:49 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:12.653 nvmf_trace.0 00:18:12.653 05:13:49 -- common/autotest_common.sh@809 -- # 
return 0 00:18:12.653 05:13:49 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:18:12.653 05:13:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:12.653 05:13:49 -- nvmf/common.sh@117 -- # sync 00:18:12.653 05:13:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:12.653 05:13:49 -- nvmf/common.sh@120 -- # set +e 00:18:12.653 05:13:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:12.653 05:13:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:12.653 rmmod nvme_tcp 00:18:12.653 rmmod nvme_fabrics 00:18:12.653 rmmod nvme_keyring 00:18:12.653 05:13:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:12.653 05:13:49 -- nvmf/common.sh@124 -- # set -e 00:18:12.653 05:13:49 -- nvmf/common.sh@125 -- # return 0 00:18:12.653 05:13:49 -- nvmf/common.sh@478 -- # '[' -n 1879513 ']' 00:18:12.653 05:13:49 -- nvmf/common.sh@479 -- # killprocess 1879513 00:18:12.653 05:13:49 -- common/autotest_common.sh@936 -- # '[' -z 1879513 ']' 00:18:12.653 05:13:49 -- common/autotest_common.sh@940 -- # kill -0 1879513 00:18:12.653 05:13:49 -- common/autotest_common.sh@941 -- # uname 00:18:12.653 05:13:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:12.653 05:13:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1879513 00:18:12.653 05:13:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:12.653 05:13:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:12.653 05:13:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1879513' 00:18:12.653 killing process with pid 1879513 00:18:12.653 05:13:49 -- common/autotest_common.sh@955 -- # kill 1879513 00:18:12.653 05:13:49 -- common/autotest_common.sh@960 -- # wait 1879513 00:18:12.912 05:13:50 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:12.912 05:13:50 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:12.912 05:13:50 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:12.912 05:13:50 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:12.912 05:13:50 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:12.912 05:13:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:12.912 05:13:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:12.912 05:13:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.446 05:13:52 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:15.446 00:18:15.446 real 0m41.336s 00:18:15.446 user 1m9.928s 00:18:15.446 sys 0m8.451s 00:18:15.446 05:13:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:15.446 05:13:52 -- common/autotest_common.sh@10 -- # set +x 00:18:15.446 ************************************ 00:18:15.446 END TEST nvmf_lvs_grow 00:18:15.446 ************************************ 00:18:15.446 05:13:52 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:15.446 05:13:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:15.446 05:13:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:15.446 05:13:52 -- common/autotest_common.sh@10 -- # set +x 00:18:15.446 ************************************ 00:18:15.446 START TEST nvmf_bdev_io_wait 00:18:15.446 ************************************ 00:18:15.446 05:13:52 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:15.446 * Looking for test storage... 
00:18:15.446 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:15.446 05:13:52 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:15.446 05:13:52 -- nvmf/common.sh@7 -- # uname -s 00:18:15.446 05:13:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:15.446 05:13:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:15.446 05:13:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:15.446 05:13:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:15.446 05:13:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:15.446 05:13:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:15.446 05:13:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:15.446 05:13:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:15.446 05:13:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:15.446 05:13:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:15.446 05:13:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:15.446 05:13:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:15.446 05:13:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:15.446 05:13:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:15.446 05:13:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:15.446 05:13:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:15.446 05:13:52 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:15.446 05:13:52 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:15.446 05:13:52 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:15.446 05:13:52 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:15.446 05:13:52 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.446 05:13:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.446 05:13:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.446 05:13:52 -- paths/export.sh@5 -- # export PATH 00:18:15.446 05:13:52 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.446 05:13:52 -- nvmf/common.sh@47 -- # : 0 00:18:15.446 05:13:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:15.446 05:13:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:15.446 05:13:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:15.446 05:13:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:15.446 05:13:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:15.446 05:13:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:15.446 05:13:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:15.446 05:13:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:15.446 05:13:52 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:15.446 05:13:52 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:15.446 05:13:52 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:18:15.446 05:13:52 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:15.446 05:13:52 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:15.446 05:13:52 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:15.446 05:13:52 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:15.446 05:13:52 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:15.446 05:13:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.446 05:13:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:15.446 05:13:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.446 
05:13:52 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:15.446 05:13:52 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:15.446 05:13:52 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:15.446 05:13:52 -- common/autotest_common.sh@10 -- # set +x 00:18:17.351 05:13:54 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:17.351 05:13:54 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:17.351 05:13:54 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:17.351 05:13:54 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:17.351 05:13:54 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:17.351 05:13:54 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:17.351 05:13:54 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:17.351 05:13:54 -- nvmf/common.sh@295 -- # net_devs=() 00:18:17.351 05:13:54 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:17.351 05:13:54 -- nvmf/common.sh@296 -- # e810=() 00:18:17.351 05:13:54 -- nvmf/common.sh@296 -- # local -ga e810 00:18:17.351 05:13:54 -- nvmf/common.sh@297 -- # x722=() 00:18:17.351 05:13:54 -- nvmf/common.sh@297 -- # local -ga x722 00:18:17.351 05:13:54 -- nvmf/common.sh@298 -- # mlx=() 00:18:17.351 05:13:54 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:17.351 05:13:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:17.351 05:13:54 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:17.351 05:13:54 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:17.351 05:13:54 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:17.351 05:13:54 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:17.351 05:13:54 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:17.351 05:13:54 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:17.351 05:13:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:17.351 05:13:54 
-- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:17.351 05:13:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:17.351 05:13:54 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:17.351 05:13:54 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:17.351 05:13:54 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:17.351 05:13:54 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:17.351 05:13:54 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:17.351 05:13:54 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:17.351 05:13:54 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:17.351 05:13:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:17.351 05:13:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:17.351 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:17.351 05:13:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:17.351 05:13:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:17.351 05:13:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:17.351 05:13:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:17.351 05:13:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:17.351 05:13:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:17.351 05:13:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:17.351 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:17.351 05:13:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:17.351 05:13:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:17.351 05:13:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:17.351 05:13:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:17.351 05:13:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:17.351 05:13:54 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:17.351 05:13:54 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:17.351 05:13:54 -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:17.351 05:13:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:17.351 05:13:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:17.351 05:13:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:17.351 05:13:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:17.351 05:13:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:17.351 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:17.351 05:13:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:17.351 05:13:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:17.351 05:13:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:17.352 05:13:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:17.352 05:13:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:17.352 05:13:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:17.352 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:17.352 05:13:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:17.352 05:13:54 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:17.352 05:13:54 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:17.352 05:13:54 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:17.352 05:13:54 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:17.352 05:13:54 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:17.352 05:13:54 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:17.352 05:13:54 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:17.352 05:13:54 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:17.352 05:13:54 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:17.352 05:13:54 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:17.352 05:13:54 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:17.352 05:13:54 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:17.352 05:13:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:17.352 05:13:54 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:17.352 05:13:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:17.352 05:13:54 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:17.352 05:13:54 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:17.352 05:13:54 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:17.352 05:13:54 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:17.352 05:13:54 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:17.352 05:13:54 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:17.352 05:13:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:17.352 05:13:54 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:17.352 05:13:54 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:17.352 05:13:54 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:17.352 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:17.352 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:18:17.352 00:18:17.352 --- 10.0.0.2 ping statistics --- 00:18:17.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.352 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:18:17.352 05:13:54 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:17.352 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:17.352 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:18:17.352 00:18:17.352 --- 10.0.0.1 ping statistics --- 00:18:17.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.352 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:18:17.352 05:13:54 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:17.352 05:13:54 -- nvmf/common.sh@411 -- # return 0 00:18:17.352 05:13:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:17.352 05:13:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:17.352 05:13:54 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:17.352 05:13:54 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:17.352 05:13:54 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:17.352 05:13:54 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:17.352 05:13:54 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:17.352 05:13:54 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:17.352 05:13:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:17.352 05:13:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:17.352 05:13:54 -- common/autotest_common.sh@10 -- # set +x 00:18:17.352 05:13:54 -- nvmf/common.sh@470 -- # nvmfpid=1882040 00:18:17.352 05:13:54 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:17.352 05:13:54 -- nvmf/common.sh@471 -- # waitforlisten 1882040 00:18:17.352 05:13:54 -- common/autotest_common.sh@817 -- # '[' -z 1882040 ']' 00:18:17.352 05:13:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.352 05:13:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:17.352 05:13:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:17.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:17.352 05:13:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:17.352 05:13:54 -- common/autotest_common.sh@10 -- # set +x 00:18:17.352 [2024-04-24 05:13:54.324042] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:18:17.352 [2024-04-24 05:13:54.324135] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:17.352 EAL: No free 2048 kB hugepages reported on node 1 00:18:17.352 [2024-04-24 05:13:54.365738] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:17.352 [2024-04-24 05:13:54.396860] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:17.352 [2024-04-24 05:13:54.491597] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:17.352 [2024-04-24 05:13:54.491674] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:17.352 [2024-04-24 05:13:54.491691] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:17.352 [2024-04-24 05:13:54.491705] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:17.352 [2024-04-24 05:13:54.491717] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:17.352 [2024-04-24 05:13:54.491781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:17.352 [2024-04-24 05:13:54.491862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:17.352 [2024-04-24 05:13:54.495649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:17.352 [2024-04-24 05:13:54.495661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.352 05:13:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:17.352 05:13:54 -- common/autotest_common.sh@850 -- # return 0 00:18:17.352 05:13:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:17.352 05:13:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:17.352 05:13:54 -- common/autotest_common.sh@10 -- # set +x 00:18:17.352 05:13:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:17.352 05:13:54 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:18:17.352 05:13:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:17.352 05:13:54 -- common/autotest_common.sh@10 -- # set +x 00:18:17.352 05:13:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:17.352 05:13:54 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:18:17.352 05:13:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:17.352 05:13:54 -- common/autotest_common.sh@10 -- # set +x 00:18:17.611 05:13:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:17.611 05:13:54 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:17.611 05:13:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:17.611 05:13:54 -- common/autotest_common.sh@10 -- # set +x 00:18:17.611 [2024-04-24 05:13:54.663641] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:17.611 05:13:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:17.611 05:13:54 -- target/bdev_io_wait.sh@22 -- # 
rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:17.611 05:13:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:17.611 05:13:54 -- common/autotest_common.sh@10 -- # set +x 00:18:17.611 Malloc0 00:18:17.611 05:13:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:17.611 05:13:54 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:17.611 05:13:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:17.611 05:13:54 -- common/autotest_common.sh@10 -- # set +x 00:18:17.611 05:13:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:17.611 05:13:54 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:17.611 05:13:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:17.611 05:13:54 -- common/autotest_common.sh@10 -- # set +x 00:18:17.611 05:13:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:17.611 05:13:54 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:17.611 05:13:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:17.611 05:13:54 -- common/autotest_common.sh@10 -- # set +x 00:18:17.611 [2024-04-24 05:13:54.726187] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:17.611 05:13:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:17.611 05:13:54 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1882069 00:18:17.611 05:13:54 -- target/bdev_io_wait.sh@30 -- # READ_PID=1882071 00:18:17.611 05:13:54 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:18:17.611 05:13:54 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:18:17.611 05:13:54 -- nvmf/common.sh@521 -- # config=() 00:18:17.611 05:13:54 -- nvmf/common.sh@521 -- # local 
subsystem config 00:18:17.611 05:13:54 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:17.611 05:13:54 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:17.611 { 00:18:17.611 "params": { 00:18:17.611 "name": "Nvme$subsystem", 00:18:17.611 "trtype": "$TEST_TRANSPORT", 00:18:17.611 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:17.611 "adrfam": "ipv4", 00:18:17.611 "trsvcid": "$NVMF_PORT", 00:18:17.611 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:17.611 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:17.611 "hdgst": ${hdgst:-false}, 00:18:17.611 "ddgst": ${ddgst:-false} 00:18:17.611 }, 00:18:17.611 "method": "bdev_nvme_attach_controller" 00:18:17.611 } 00:18:17.611 EOF 00:18:17.611 )") 00:18:17.611 05:13:54 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1882073 00:18:17.611 05:13:54 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:18:17.611 05:13:54 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:18:17.611 05:13:54 -- nvmf/common.sh@521 -- # config=() 00:18:17.611 05:13:54 -- nvmf/common.sh@521 -- # local subsystem config 00:18:17.611 05:13:54 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:17.611 05:13:54 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:17.611 { 00:18:17.611 "params": { 00:18:17.611 "name": "Nvme$subsystem", 00:18:17.611 "trtype": "$TEST_TRANSPORT", 00:18:17.611 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:17.612 "adrfam": "ipv4", 00:18:17.612 "trsvcid": "$NVMF_PORT", 00:18:17.612 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:17.612 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:17.612 "hdgst": ${hdgst:-false}, 00:18:17.612 "ddgst": ${ddgst:-false} 00:18:17.612 }, 00:18:17.612 "method": "bdev_nvme_attach_controller" 00:18:17.612 } 00:18:17.612 EOF 00:18:17.612 )") 00:18:17.612 05:13:54 -- target/bdev_io_wait.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:18:17.612 05:13:54 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1882076 00:18:17.612 05:13:54 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:18:17.612 05:13:54 -- target/bdev_io_wait.sh@35 -- # sync 00:18:17.612 05:13:54 -- nvmf/common.sh@543 -- # cat 00:18:17.612 05:13:54 -- nvmf/common.sh@521 -- # config=() 00:18:17.612 05:13:54 -- nvmf/common.sh@521 -- # local subsystem config 00:18:17.612 05:13:54 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:17.612 05:13:54 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:17.612 { 00:18:17.612 "params": { 00:18:17.612 "name": "Nvme$subsystem", 00:18:17.612 "trtype": "$TEST_TRANSPORT", 00:18:17.612 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:17.612 "adrfam": "ipv4", 00:18:17.612 "trsvcid": "$NVMF_PORT", 00:18:17.612 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:17.612 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:17.612 "hdgst": ${hdgst:-false}, 00:18:17.612 "ddgst": ${ddgst:-false} 00:18:17.612 }, 00:18:17.612 "method": "bdev_nvme_attach_controller" 00:18:17.612 } 00:18:17.612 EOF 00:18:17.612 )") 00:18:17.612 05:13:54 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:18:17.612 05:13:54 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:18:17.612 05:13:54 -- nvmf/common.sh@521 -- # config=() 00:18:17.612 05:13:54 -- nvmf/common.sh@521 -- # local subsystem config 00:18:17.612 05:13:54 -- nvmf/common.sh@543 -- # cat 00:18:17.612 05:13:54 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:17.612 05:13:54 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:17.612 { 00:18:17.612 "params": { 00:18:17.612 "name": "Nvme$subsystem", 00:18:17.612 "trtype": "$TEST_TRANSPORT", 00:18:17.612 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:18:17.612 "adrfam": "ipv4", 00:18:17.612 "trsvcid": "$NVMF_PORT", 00:18:17.612 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:17.612 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:17.612 "hdgst": ${hdgst:-false}, 00:18:17.612 "ddgst": ${ddgst:-false} 00:18:17.612 }, 00:18:17.612 "method": "bdev_nvme_attach_controller" 00:18:17.612 } 00:18:17.612 EOF 00:18:17.612 )") 00:18:17.612 05:13:54 -- nvmf/common.sh@543 -- # cat 00:18:17.612 05:13:54 -- target/bdev_io_wait.sh@37 -- # wait 1882069 00:18:17.612 05:13:54 -- nvmf/common.sh@543 -- # cat 00:18:17.612 05:13:54 -- nvmf/common.sh@545 -- # jq . 00:18:17.612 05:13:54 -- nvmf/common.sh@545 -- # jq . 00:18:17.612 05:13:54 -- nvmf/common.sh@545 -- # jq . 00:18:17.612 05:13:54 -- nvmf/common.sh@546 -- # IFS=, 00:18:17.612 05:13:54 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:17.612 "params": { 00:18:17.612 "name": "Nvme1", 00:18:17.612 "trtype": "tcp", 00:18:17.612 "traddr": "10.0.0.2", 00:18:17.612 "adrfam": "ipv4", 00:18:17.612 "trsvcid": "4420", 00:18:17.612 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.612 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:17.612 "hdgst": false, 00:18:17.612 "ddgst": false 00:18:17.612 }, 00:18:17.612 "method": "bdev_nvme_attach_controller" 00:18:17.612 }' 00:18:17.612 05:13:54 -- nvmf/common.sh@545 -- # jq . 
00:18:17.612 05:13:54 -- nvmf/common.sh@546 -- # IFS=, 00:18:17.612 05:13:54 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:17.612 "params": { 00:18:17.612 "name": "Nvme1", 00:18:17.612 "trtype": "tcp", 00:18:17.612 "traddr": "10.0.0.2", 00:18:17.612 "adrfam": "ipv4", 00:18:17.612 "trsvcid": "4420", 00:18:17.612 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.612 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:17.612 "hdgst": false, 00:18:17.612 "ddgst": false 00:18:17.612 }, 00:18:17.612 "method": "bdev_nvme_attach_controller" 00:18:17.612 }' 00:18:17.612 05:13:54 -- nvmf/common.sh@546 -- # IFS=, 00:18:17.612 05:13:54 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:17.612 "params": { 00:18:17.612 "name": "Nvme1", 00:18:17.612 "trtype": "tcp", 00:18:17.612 "traddr": "10.0.0.2", 00:18:17.612 "adrfam": "ipv4", 00:18:17.612 "trsvcid": "4420", 00:18:17.612 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.612 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:17.612 "hdgst": false, 00:18:17.612 "ddgst": false 00:18:17.612 }, 00:18:17.612 "method": "bdev_nvme_attach_controller" 00:18:17.612 }' 00:18:17.612 05:13:54 -- nvmf/common.sh@546 -- # IFS=, 00:18:17.612 05:13:54 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:17.612 "params": { 00:18:17.612 "name": "Nvme1", 00:18:17.612 "trtype": "tcp", 00:18:17.612 "traddr": "10.0.0.2", 00:18:17.612 "adrfam": "ipv4", 00:18:17.612 "trsvcid": "4420", 00:18:17.612 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.612 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:17.612 "hdgst": false, 00:18:17.612 "ddgst": false 00:18:17.612 }, 00:18:17.612 "method": "bdev_nvme_attach_controller" 00:18:17.612 }' 00:18:17.612 [2024-04-24 05:13:54.772615] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:18:17.612 [2024-04-24 05:13:54.772615] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 
00:18:17.612 [2024-04-24 05:13:54.772616] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:18:17.612 [2024-04-24 05:13:54.772615] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 
00:18:17.612 [2024-04-24 05:13:54.772712] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 
00:18:17.612 [2024-04-24 05:13:54.772713] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 
00:18:17.612 [2024-04-24 05:13:54.772713] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 
00:18:17.612 [2024-04-24 05:13:54.772714] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 
00:18:17.612 EAL: No free 2048 kB hugepages reported on node 1 00:18:17.871 [2024-04-24 05:13:54.910127] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:17.871 EAL: No free 2048 kB hugepages reported on node 1 00:18:17.871 [2024-04-24 05:13:54.939963] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.871 [2024-04-24 05:13:55.007551] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:18:17.871 EAL: No free 2048 kB hugepages reported on node 1 00:18:17.871 [2024-04-24 05:13:55.012851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:17.871 [2024-04-24 05:13:55.038718] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.871 [2024-04-24 05:13:55.106730] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:17.871 EAL: No free 2048 kB hugepages reported on node 1 00:18:17.871 [2024-04-24 05:13:55.111700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:18:17.871 [2024-04-24 05:13:55.137114] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.129 [2024-04-24 05:13:55.213266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:18.129 [2024-04-24 05:13:55.214295] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:18.129 [2024-04-24 05:13:55.244747] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.129 [2024-04-24 05:13:55.315082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:18.387 Running I/O for 1 seconds... 00:18:18.387 Running I/O for 1 seconds... 00:18:18.387 Running I/O for 1 seconds... 00:18:18.387 Running I/O for 1 seconds... 
00:18:19.324 00:18:19.324 Latency(us) 00:18:19.324 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.324 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:18:19.324 Nvme1n1 : 1.02 7156.77 27.96 0.00 0.00 17670.02 8058.50 32039.82 00:18:19.324 =================================================================================================================== 00:18:19.324 Total : 7156.77 27.96 0.00 0.00 17670.02 8058.50 32039.82 00:18:19.324 00:18:19.324 Latency(us) 00:18:19.324 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.324 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:18:19.324 Nvme1n1 : 1.01 6746.65 26.35 0.00 0.00 18904.42 6407.96 36117.62 00:18:19.324 =================================================================================================================== 00:18:19.324 Total : 6746.65 26.35 0.00 0.00 18904.42 6407.96 36117.62 00:18:19.583 00:18:19.583 Latency(us) 00:18:19.583 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.583 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:18:19.583 Nvme1n1 : 1.01 8269.05 32.30 0.00 0.00 15410.30 8204.14 27767.85 00:18:19.583 =================================================================================================================== 00:18:19.583 Total : 8269.05 32.30 0.00 0.00 15410.30 8204.14 27767.85 00:18:19.583 00:18:19.583 Latency(us) 00:18:19.583 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.583 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:18:19.583 Nvme1n1 : 1.00 177271.10 692.47 0.00 0.00 719.31 245.76 910.22 00:18:19.583 =================================================================================================================== 00:18:19.583 Total : 177271.10 692.47 0.00 0.00 719.31 245.76 910.22 00:18:19.841 05:13:56 -- target/bdev_io_wait.sh@38 -- # 
wait 1882071 00:18:19.841 05:13:56 -- target/bdev_io_wait.sh@39 -- # wait 1882073 00:18:19.841 05:13:56 -- target/bdev_io_wait.sh@40 -- # wait 1882076 00:18:19.841 05:13:56 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:19.841 05:13:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:19.841 05:13:56 -- common/autotest_common.sh@10 -- # set +x 00:18:19.841 05:13:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:19.841 05:13:56 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:18:19.841 05:13:56 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:18:19.841 05:13:56 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:19.841 05:13:56 -- nvmf/common.sh@117 -- # sync 00:18:19.841 05:13:56 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:19.841 05:13:56 -- nvmf/common.sh@120 -- # set +e 00:18:19.841 05:13:56 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:19.841 05:13:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:19.841 rmmod nvme_tcp 00:18:19.841 rmmod nvme_fabrics 00:18:19.841 rmmod nvme_keyring 00:18:19.841 05:13:56 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:19.841 05:13:56 -- nvmf/common.sh@124 -- # set -e 00:18:19.841 05:13:56 -- nvmf/common.sh@125 -- # return 0 00:18:19.841 05:13:56 -- nvmf/common.sh@478 -- # '[' -n 1882040 ']' 00:18:19.841 05:13:56 -- nvmf/common.sh@479 -- # killprocess 1882040 00:18:19.841 05:13:56 -- common/autotest_common.sh@936 -- # '[' -z 1882040 ']' 00:18:19.841 05:13:56 -- common/autotest_common.sh@940 -- # kill -0 1882040 00:18:19.841 05:13:56 -- common/autotest_common.sh@941 -- # uname 00:18:19.841 05:13:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:19.841 05:13:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1882040 00:18:19.841 05:13:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:19.841 05:13:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 
00:18:19.841 05:13:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1882040' 00:18:19.841 killing process with pid 1882040 00:18:19.841 05:13:57 -- common/autotest_common.sh@955 -- # kill 1882040 00:18:19.841 05:13:57 -- common/autotest_common.sh@960 -- # wait 1882040 00:18:20.099 05:13:57 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:20.099 05:13:57 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:20.099 05:13:57 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:20.100 05:13:57 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:20.100 05:13:57 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:20.100 05:13:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:20.100 05:13:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:20.100 05:13:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.056 05:13:59 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:22.056 00:18:22.056 real 0m7.048s 00:18:22.056 user 0m16.883s 00:18:22.056 sys 0m3.278s 00:18:22.056 05:13:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:22.056 05:13:59 -- common/autotest_common.sh@10 -- # set +x 00:18:22.056 ************************************ 00:18:22.056 END TEST nvmf_bdev_io_wait 00:18:22.056 ************************************ 00:18:22.056 05:13:59 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:22.056 05:13:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:22.056 05:13:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:22.056 05:13:59 -- common/autotest_common.sh@10 -- # set +x 00:18:22.314 ************************************ 00:18:22.314 START TEST nvmf_queue_depth 00:18:22.314 ************************************ 00:18:22.314 05:13:59 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:22.314 * Looking for test storage... 00:18:22.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:22.314 05:13:59 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:22.314 05:13:59 -- nvmf/common.sh@7 -- # uname -s 00:18:22.314 05:13:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:22.314 05:13:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:22.315 05:13:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:22.315 05:13:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:22.315 05:13:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:22.315 05:13:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:22.315 05:13:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:22.315 05:13:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:22.315 05:13:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:22.315 05:13:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:22.315 05:13:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:22.315 05:13:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:22.315 05:13:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:22.315 05:13:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:22.315 05:13:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:22.315 05:13:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:22.315 05:13:59 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:22.315 05:13:59 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:22.315 05:13:59 -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:22.315 05:13:59 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:22.315 05:13:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.315 05:13:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.315 05:13:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.315 05:13:59 -- paths/export.sh@5 -- # export PATH 00:18:22.315 05:13:59 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.315 05:13:59 -- nvmf/common.sh@47 -- # : 0 00:18:22.315 05:13:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:22.315 05:13:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:22.315 05:13:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:22.315 05:13:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:22.315 05:13:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:22.315 05:13:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:22.315 05:13:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:22.315 05:13:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:22.315 05:13:59 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:18:22.315 05:13:59 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:18:22.315 05:13:59 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:22.315 05:13:59 -- target/queue_depth.sh@19 -- # nvmftestinit 00:18:22.315 05:13:59 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:22.315 05:13:59 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:22.315 05:13:59 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:22.315 05:13:59 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:22.315 05:13:59 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:22.315 05:13:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.315 05:13:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:18:22.315 05:13:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.315 05:13:59 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:22.315 05:13:59 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:22.315 05:13:59 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:22.315 05:13:59 -- common/autotest_common.sh@10 -- # set +x 00:18:24.219 05:14:01 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:24.219 05:14:01 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:24.219 05:14:01 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:24.219 05:14:01 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:24.219 05:14:01 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:24.219 05:14:01 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:24.219 05:14:01 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:24.219 05:14:01 -- nvmf/common.sh@295 -- # net_devs=() 00:18:24.219 05:14:01 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:24.219 05:14:01 -- nvmf/common.sh@296 -- # e810=() 00:18:24.219 05:14:01 -- nvmf/common.sh@296 -- # local -ga e810 00:18:24.219 05:14:01 -- nvmf/common.sh@297 -- # x722=() 00:18:24.219 05:14:01 -- nvmf/common.sh@297 -- # local -ga x722 00:18:24.219 05:14:01 -- nvmf/common.sh@298 -- # mlx=() 00:18:24.219 05:14:01 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:24.219 05:14:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:24.219 05:14:01 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:24.219 05:14:01 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:24.219 05:14:01 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:24.219 05:14:01 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:24.219 05:14:01 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:24.219 05:14:01 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:24.219 05:14:01 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:24.219 05:14:01 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:24.219 05:14:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:24.219 05:14:01 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:24.219 05:14:01 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:24.219 05:14:01 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:24.219 05:14:01 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:24.219 05:14:01 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:24.219 05:14:01 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:24.219 05:14:01 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:24.219 05:14:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:24.219 05:14:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:24.219 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:24.219 05:14:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:24.219 05:14:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:24.219 05:14:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:24.219 05:14:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:24.219 05:14:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:24.219 05:14:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:24.219 05:14:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:24.219 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:24.219 05:14:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:24.219 05:14:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:24.219 05:14:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:24.219 05:14:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:24.219 05:14:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:24.219 05:14:01 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:24.219 
05:14:01 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:24.219 05:14:01 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:24.219 05:14:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:24.219 05:14:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:24.219 05:14:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:24.219 05:14:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:24.219 05:14:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:24.219 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:24.219 05:14:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:24.219 05:14:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:24.219 05:14:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:24.219 05:14:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:24.219 05:14:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:24.219 05:14:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:24.219 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:24.219 05:14:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:24.219 05:14:01 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:24.219 05:14:01 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:24.219 05:14:01 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:24.219 05:14:01 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:24.219 05:14:01 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:24.219 05:14:01 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:24.219 05:14:01 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:24.219 05:14:01 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:24.219 05:14:01 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:24.219 05:14:01 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:24.219 05:14:01 -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:24.219 05:14:01 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:24.219 05:14:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:24.219 05:14:01 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:24.219 05:14:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:24.219 05:14:01 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:24.219 05:14:01 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:24.219 05:14:01 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:24.219 05:14:01 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:24.219 05:14:01 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:24.219 05:14:01 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:24.219 05:14:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:24.219 05:14:01 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:24.219 05:14:01 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:24.219 05:14:01 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:24.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:24.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:18:24.219 00:18:24.219 --- 10.0.0.2 ping statistics --- 00:18:24.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.219 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:18:24.219 05:14:01 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:24.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:24.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:18:24.219 00:18:24.219 --- 10.0.0.1 ping statistics --- 00:18:24.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.219 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:18:24.219 05:14:01 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:24.219 05:14:01 -- nvmf/common.sh@411 -- # return 0 00:18:24.219 05:14:01 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:24.219 05:14:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:24.219 05:14:01 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:24.219 05:14:01 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:24.220 05:14:01 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:24.220 05:14:01 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:24.220 05:14:01 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:24.477 05:14:01 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:18:24.477 05:14:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:24.477 05:14:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:24.477 05:14:01 -- common/autotest_common.sh@10 -- # set +x 00:18:24.477 05:14:01 -- nvmf/common.sh@470 -- # nvmfpid=1884295 00:18:24.477 05:14:01 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:24.477 05:14:01 -- nvmf/common.sh@471 -- # waitforlisten 1884295 00:18:24.477 05:14:01 -- common/autotest_common.sh@817 -- # '[' -z 1884295 ']' 00:18:24.477 05:14:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.477 05:14:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:24.477 05:14:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:24.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:24.477 05:14:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:24.477 05:14:01 -- common/autotest_common.sh@10 -- # set +x 00:18:24.477 [2024-04-24 05:14:01.544001] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:18:24.477 [2024-04-24 05:14:01.544069] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:24.477 EAL: No free 2048 kB hugepages reported on node 1 00:18:24.477 [2024-04-24 05:14:01.580481] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:24.477 [2024-04-24 05:14:01.609066] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.477 [2024-04-24 05:14:01.692259] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:24.477 [2024-04-24 05:14:01.692308] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:24.477 [2024-04-24 05:14:01.692338] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:24.477 [2024-04-24 05:14:01.692351] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:24.477 [2024-04-24 05:14:01.692362] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:24.477 [2024-04-24 05:14:01.692409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:24.735 05:14:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:24.735 05:14:01 -- common/autotest_common.sh@850 -- # return 0 00:18:24.735 05:14:01 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:24.735 05:14:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:24.735 05:14:01 -- common/autotest_common.sh@10 -- # set +x 00:18:24.735 05:14:01 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:24.735 05:14:01 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:24.735 05:14:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:24.735 05:14:01 -- common/autotest_common.sh@10 -- # set +x 00:18:24.735 [2024-04-24 05:14:01.824057] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:24.735 05:14:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:24.735 05:14:01 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:24.735 05:14:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:24.735 05:14:01 -- common/autotest_common.sh@10 -- # set +x 00:18:24.735 Malloc0 00:18:24.735 05:14:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:24.735 05:14:01 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:24.735 05:14:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:24.735 05:14:01 -- common/autotest_common.sh@10 -- # set +x 00:18:24.735 05:14:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:24.735 05:14:01 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:24.735 05:14:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:24.735 05:14:01 -- common/autotest_common.sh@10 -- # set +x 00:18:24.735 05:14:01 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:24.735 05:14:01 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:24.735 05:14:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:24.735 05:14:01 -- common/autotest_common.sh@10 -- # set +x 00:18:24.735 [2024-04-24 05:14:01.892408] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:24.735 05:14:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:24.735 05:14:01 -- target/queue_depth.sh@30 -- # bdevperf_pid=1884337 00:18:24.735 05:14:01 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:18:24.735 05:14:01 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:24.735 05:14:01 -- target/queue_depth.sh@33 -- # waitforlisten 1884337 /var/tmp/bdevperf.sock 00:18:24.735 05:14:01 -- common/autotest_common.sh@817 -- # '[' -z 1884337 ']' 00:18:24.735 05:14:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:24.735 05:14:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:24.735 05:14:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:24.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:24.735 05:14:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:24.735 05:14:01 -- common/autotest_common.sh@10 -- # set +x 00:18:24.735 [2024-04-24 05:14:01.939124] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 
00:18:24.735 [2024-04-24 05:14:01.939207] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1884337 ] 00:18:24.735 EAL: No free 2048 kB hugepages reported on node 1 00:18:24.735 [2024-04-24 05:14:01.974866] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:24.995 [2024-04-24 05:14:02.007215] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.995 [2024-04-24 05:14:02.096505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:24.995 05:14:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:24.995 05:14:02 -- common/autotest_common.sh@850 -- # return 0 00:18:24.995 05:14:02 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:24.995 05:14:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:24.995 05:14:02 -- common/autotest_common.sh@10 -- # set +x 00:18:25.259 NVMe0n1 00:18:25.260 05:14:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.260 05:14:02 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:25.260 Running I/O for 10 seconds... 
00:18:35.247 00:18:35.247 Latency(us) 00:18:35.247 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.247 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:35.247 Verification LBA range: start 0x0 length 0x4000 00:18:35.247 NVMe0n1 : 10.10 8485.06 33.14 0.00 0.00 120118.76 23981.32 76895.57 00:18:35.247 =================================================================================================================== 00:18:35.247 Total : 8485.06 33.14 0.00 0.00 120118.76 23981.32 76895.57 00:18:35.247 0 00:18:35.247 05:14:12 -- target/queue_depth.sh@39 -- # killprocess 1884337 00:18:35.247 05:14:12 -- common/autotest_common.sh@936 -- # '[' -z 1884337 ']' 00:18:35.247 05:14:12 -- common/autotest_common.sh@940 -- # kill -0 1884337 00:18:35.247 05:14:12 -- common/autotest_common.sh@941 -- # uname 00:18:35.247 05:14:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:35.247 05:14:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1884337 00:18:35.505 05:14:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:35.505 05:14:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:35.505 05:14:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1884337' 00:18:35.505 killing process with pid 1884337 00:18:35.505 05:14:12 -- common/autotest_common.sh@955 -- # kill 1884337 00:18:35.505 Received shutdown signal, test time was about 10.000000 seconds 00:18:35.505 00:18:35.505 Latency(us) 00:18:35.505 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.505 =================================================================================================================== 00:18:35.505 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:35.505 05:14:12 -- common/autotest_common.sh@960 -- # wait 1884337 00:18:35.505 05:14:12 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:35.505 05:14:12 -- 
target/queue_depth.sh@43 -- # nvmftestfini 00:18:35.505 05:14:12 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:35.505 05:14:12 -- nvmf/common.sh@117 -- # sync 00:18:35.505 05:14:12 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:35.505 05:14:12 -- nvmf/common.sh@120 -- # set +e 00:18:35.505 05:14:12 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:35.505 05:14:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:35.505 rmmod nvme_tcp 00:18:35.764 rmmod nvme_fabrics 00:18:35.764 rmmod nvme_keyring 00:18:35.764 05:14:12 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:35.764 05:14:12 -- nvmf/common.sh@124 -- # set -e 00:18:35.764 05:14:12 -- nvmf/common.sh@125 -- # return 0 00:18:35.764 05:14:12 -- nvmf/common.sh@478 -- # '[' -n 1884295 ']' 00:18:35.764 05:14:12 -- nvmf/common.sh@479 -- # killprocess 1884295 00:18:35.764 05:14:12 -- common/autotest_common.sh@936 -- # '[' -z 1884295 ']' 00:18:35.764 05:14:12 -- common/autotest_common.sh@940 -- # kill -0 1884295 00:18:35.764 05:14:12 -- common/autotest_common.sh@941 -- # uname 00:18:35.764 05:14:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:35.764 05:14:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1884295 00:18:35.764 05:14:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:35.764 05:14:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:35.764 05:14:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1884295' 00:18:35.764 killing process with pid 1884295 00:18:35.764 05:14:12 -- common/autotest_common.sh@955 -- # kill 1884295 00:18:35.764 05:14:12 -- common/autotest_common.sh@960 -- # wait 1884295 00:18:36.023 05:14:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:36.024 05:14:13 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:36.024 05:14:13 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:36.024 05:14:13 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:18:36.024 05:14:13 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:36.024 05:14:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:36.024 05:14:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:36.024 05:14:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.927 05:14:15 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:37.927 00:18:37.927 real 0m15.753s 00:18:37.927 user 0m22.232s 00:18:37.927 sys 0m2.934s 00:18:37.927 05:14:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:37.927 05:14:15 -- common/autotest_common.sh@10 -- # set +x 00:18:37.927 ************************************ 00:18:37.927 END TEST nvmf_queue_depth 00:18:37.927 ************************************ 00:18:37.927 05:14:15 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:37.927 05:14:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:37.927 05:14:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:37.927 05:14:15 -- common/autotest_common.sh@10 -- # set +x 00:18:38.186 ************************************ 00:18:38.186 START TEST nvmf_multipath 00:18:38.186 ************************************ 00:18:38.186 05:14:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:38.186 * Looking for test storage... 
00:18:38.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:38.186 05:14:15 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:38.186 05:14:15 -- nvmf/common.sh@7 -- # uname -s 00:18:38.186 05:14:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:38.186 05:14:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:38.186 05:14:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:38.186 05:14:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:38.186 05:14:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:38.186 05:14:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:38.186 05:14:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:38.186 05:14:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:38.186 05:14:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:38.186 05:14:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:38.186 05:14:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:38.186 05:14:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:38.186 05:14:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:38.186 05:14:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:38.186 05:14:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:38.186 05:14:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:38.186 05:14:15 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:38.186 05:14:15 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:38.186 05:14:15 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:38.186 05:14:15 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:38.186 05:14:15 -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.186 05:14:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.186 05:14:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.186 05:14:15 -- paths/export.sh@5 -- # export PATH 00:18:38.186 05:14:15 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.186 05:14:15 -- nvmf/common.sh@47 -- # : 0 00:18:38.186 05:14:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:38.186 05:14:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:38.186 05:14:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:38.186 05:14:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:38.186 05:14:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:38.186 05:14:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:38.186 05:14:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:38.186 05:14:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:38.186 05:14:15 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:38.186 05:14:15 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:38.186 05:14:15 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:38.186 05:14:15 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:38.186 05:14:15 -- target/multipath.sh@43 -- # nvmftestinit 00:18:38.187 05:14:15 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:38.187 05:14:15 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:38.187 05:14:15 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:38.187 05:14:15 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:38.187 05:14:15 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:38.187 05:14:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:18:38.187 05:14:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:38.187 05:14:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:38.187 05:14:15 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:38.187 05:14:15 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:38.187 05:14:15 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:38.187 05:14:15 -- common/autotest_common.sh@10 -- # set +x 00:18:40.722 05:14:17 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:40.722 05:14:17 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:40.722 05:14:17 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:40.722 05:14:17 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:40.722 05:14:17 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:40.722 05:14:17 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:40.722 05:14:17 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:40.722 05:14:17 -- nvmf/common.sh@295 -- # net_devs=() 00:18:40.722 05:14:17 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:40.722 05:14:17 -- nvmf/common.sh@296 -- # e810=() 00:18:40.722 05:14:17 -- nvmf/common.sh@296 -- # local -ga e810 00:18:40.722 05:14:17 -- nvmf/common.sh@297 -- # x722=() 00:18:40.722 05:14:17 -- nvmf/common.sh@297 -- # local -ga x722 00:18:40.722 05:14:17 -- nvmf/common.sh@298 -- # mlx=() 00:18:40.722 05:14:17 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:40.722 05:14:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:40.722 05:14:17 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:40.722 05:14:17 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:40.722 05:14:17 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:40.722 05:14:17 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:40.722 05:14:17 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:18:40.723 05:14:17 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:40.723 05:14:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:40.723 05:14:17 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:40.723 05:14:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:40.723 05:14:17 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:40.723 05:14:17 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:40.723 05:14:17 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:40.723 05:14:17 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:40.723 05:14:17 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:40.723 05:14:17 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:40.723 05:14:17 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:40.723 05:14:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:40.723 05:14:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:40.723 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:40.723 05:14:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:40.723 05:14:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:40.723 05:14:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:40.723 05:14:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:40.723 05:14:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:40.723 05:14:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:40.723 05:14:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:40.723 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:40.723 05:14:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:40.723 05:14:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:40.723 05:14:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:40.723 05:14:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:40.723 05:14:17 
-- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:40.723 05:14:17 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:40.723 05:14:17 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:40.723 05:14:17 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:40.723 05:14:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:40.723 05:14:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:40.723 05:14:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:40.723 05:14:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:40.723 05:14:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:40.723 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:40.723 05:14:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:40.723 05:14:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:40.723 05:14:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:40.723 05:14:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:40.723 05:14:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:40.723 05:14:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:40.723 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:40.723 05:14:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:40.723 05:14:17 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:40.723 05:14:17 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:40.723 05:14:17 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:40.723 05:14:17 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:40.723 05:14:17 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:40.723 05:14:17 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:40.723 05:14:17 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:40.723 05:14:17 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:40.723 05:14:17 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 
00:18:40.723 05:14:17 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:40.723 05:14:17 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:40.723 05:14:17 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:40.723 05:14:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:40.723 05:14:17 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:40.723 05:14:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:40.723 05:14:17 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:40.723 05:14:17 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:40.723 05:14:17 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:40.723 05:14:17 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:40.723 05:14:17 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:40.723 05:14:17 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:40.723 05:14:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:40.723 05:14:17 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:40.723 05:14:17 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:40.723 05:14:17 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:40.723 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:40.723 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:18:40.723 00:18:40.723 --- 10.0.0.2 ping statistics --- 00:18:40.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.723 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:18:40.723 05:14:17 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:40.723 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:40.723 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:18:40.723 00:18:40.723 --- 10.0.0.1 ping statistics --- 00:18:40.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.723 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:18:40.723 05:14:17 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:40.723 05:14:17 -- nvmf/common.sh@411 -- # return 0 00:18:40.723 05:14:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:40.723 05:14:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:40.723 05:14:17 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:40.723 05:14:17 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:40.723 05:14:17 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:40.723 05:14:17 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:40.723 05:14:17 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:40.723 05:14:17 -- target/multipath.sh@45 -- # '[' -z ']' 00:18:40.723 05:14:17 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:18:40.723 only one NIC for nvmf test 00:18:40.723 05:14:17 -- target/multipath.sh@47 -- # nvmftestfini 00:18:40.723 05:14:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:40.723 05:14:17 -- nvmf/common.sh@117 -- # sync 00:18:40.723 05:14:17 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:40.723 05:14:17 -- nvmf/common.sh@120 -- # set +e 00:18:40.723 05:14:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:40.723 05:14:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:40.723 rmmod nvme_tcp 00:18:40.723 rmmod nvme_fabrics 00:18:40.723 rmmod nvme_keyring 00:18:40.723 05:14:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:40.723 05:14:17 -- nvmf/common.sh@124 -- # set -e 00:18:40.723 05:14:17 -- nvmf/common.sh@125 -- # return 0 00:18:40.723 05:14:17 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:18:40.723 05:14:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:40.723 05:14:17 -- 
nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:40.723 05:14:17 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:40.723 05:14:17 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:40.723 05:14:17 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:40.723 05:14:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.723 05:14:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:40.723 05:14:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.625 05:14:19 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:42.625 05:14:19 -- target/multipath.sh@48 -- # exit 0 00:18:42.625 05:14:19 -- target/multipath.sh@1 -- # nvmftestfini 00:18:42.625 05:14:19 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:42.625 05:14:19 -- nvmf/common.sh@117 -- # sync 00:18:42.625 05:14:19 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:42.625 05:14:19 -- nvmf/common.sh@120 -- # set +e 00:18:42.625 05:14:19 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:42.626 05:14:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:42.626 05:14:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:42.626 05:14:19 -- nvmf/common.sh@124 -- # set -e 00:18:42.626 05:14:19 -- nvmf/common.sh@125 -- # return 0 00:18:42.626 05:14:19 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:18:42.626 05:14:19 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:42.626 05:14:19 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:42.626 05:14:19 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:42.626 05:14:19 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:42.626 05:14:19 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:42.626 05:14:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.626 05:14:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:42.626 05:14:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.626 05:14:19 
-- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:42.626 00:18:42.626 real 0m4.414s 00:18:42.626 user 0m0.855s 00:18:42.626 sys 0m1.544s 00:18:42.626 05:14:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:42.626 05:14:19 -- common/autotest_common.sh@10 -- # set +x 00:18:42.626 ************************************ 00:18:42.626 END TEST nvmf_multipath 00:18:42.626 ************************************ 00:18:42.626 05:14:19 -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:42.626 05:14:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:42.626 05:14:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:42.626 05:14:19 -- common/autotest_common.sh@10 -- # set +x 00:18:42.626 ************************************ 00:18:42.626 START TEST nvmf_zcopy 00:18:42.626 ************************************ 00:18:42.626 05:14:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:42.626 * Looking for test storage... 
00:18:42.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:42.626 05:14:19 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:42.626 05:14:19 -- nvmf/common.sh@7 -- # uname -s 00:18:42.626 05:14:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:42.626 05:14:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:42.626 05:14:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:42.626 05:14:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:42.626 05:14:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:42.626 05:14:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:42.626 05:14:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:42.626 05:14:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:42.626 05:14:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:42.626 05:14:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:42.626 05:14:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:42.626 05:14:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:42.626 05:14:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:42.626 05:14:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:42.626 05:14:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:42.626 05:14:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:42.626 05:14:19 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:42.626 05:14:19 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:42.626 05:14:19 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:42.626 05:14:19 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:42.626 05:14:19 -- paths/export.sh@2 -- 
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.626 05:14:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.626 05:14:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.626 05:14:19 -- paths/export.sh@5 -- # export PATH 00:18:42.626 05:14:19 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.626 05:14:19 -- nvmf/common.sh@47 -- # : 0 00:18:42.626 05:14:19 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:42.626 05:14:19 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:42.626 05:14:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:42.626 05:14:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:42.626 05:14:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:42.626 05:14:19 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:42.626 05:14:19 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:42.626 05:14:19 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:42.626 05:14:19 -- target/zcopy.sh@12 -- # nvmftestinit 00:18:42.626 05:14:19 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:42.626 05:14:19 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:42.626 05:14:19 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:42.626 05:14:19 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:42.626 05:14:19 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:42.626 05:14:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.626 05:14:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:42.626 05:14:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.626 05:14:19 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:42.626 05:14:19 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:42.626 05:14:19 -- 
nvmf/common.sh@285 -- # xtrace_disable 00:18:42.626 05:14:19 -- common/autotest_common.sh@10 -- # set +x 00:18:44.540 05:14:21 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:44.540 05:14:21 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:44.540 05:14:21 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:44.541 05:14:21 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:44.541 05:14:21 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:44.541 05:14:21 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:44.541 05:14:21 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:44.541 05:14:21 -- nvmf/common.sh@295 -- # net_devs=() 00:18:44.541 05:14:21 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:44.541 05:14:21 -- nvmf/common.sh@296 -- # e810=() 00:18:44.541 05:14:21 -- nvmf/common.sh@296 -- # local -ga e810 00:18:44.541 05:14:21 -- nvmf/common.sh@297 -- # x722=() 00:18:44.541 05:14:21 -- nvmf/common.sh@297 -- # local -ga x722 00:18:44.541 05:14:21 -- nvmf/common.sh@298 -- # mlx=() 00:18:44.541 05:14:21 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:44.541 05:14:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:44.541 05:14:21 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:44.541 05:14:21 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:44.541 05:14:21 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:44.541 05:14:21 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:44.541 05:14:21 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:44.541 05:14:21 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:44.541 05:14:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:44.541 05:14:21 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:44.541 05:14:21 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:44.541 05:14:21 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:44.541 05:14:21 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:44.541 05:14:21 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:44.541 05:14:21 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:44.541 05:14:21 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:44.541 05:14:21 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:44.541 05:14:21 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:44.541 05:14:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:44.541 05:14:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:44.541 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:44.541 05:14:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:44.541 05:14:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:44.541 05:14:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:44.541 05:14:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:44.541 05:14:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:44.541 05:14:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:44.541 05:14:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:44.541 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:44.541 05:14:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:44.541 05:14:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:44.541 05:14:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:44.541 05:14:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:44.541 05:14:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:44.541 05:14:21 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:44.541 05:14:21 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:44.541 05:14:21 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:44.541 05:14:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:18:44.541 05:14:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:44.541 05:14:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:44.541 05:14:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:44.541 05:14:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:44.541 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:44.541 05:14:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:44.541 05:14:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:44.541 05:14:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:44.541 05:14:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:44.541 05:14:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:44.541 05:14:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:44.541 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:44.541 05:14:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:44.541 05:14:21 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:44.541 05:14:21 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:44.541 05:14:21 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:44.541 05:14:21 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:44.541 05:14:21 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:44.541 05:14:21 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:44.541 05:14:21 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:44.541 05:14:21 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:44.541 05:14:21 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:44.541 05:14:21 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:44.541 05:14:21 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:44.541 05:14:21 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:44.541 05:14:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:18:44.541 05:14:21 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:44.541 05:14:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:44.541 05:14:21 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:44.541 05:14:21 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:44.541 05:14:21 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:44.799 05:14:21 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:44.799 05:14:21 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:44.799 05:14:21 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:44.799 05:14:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:44.799 05:14:21 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:44.799 05:14:21 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:44.799 05:14:21 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:44.799 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:44.799 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:18:44.799 00:18:44.799 --- 10.0.0.2 ping statistics --- 00:18:44.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.799 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:18:44.799 05:14:21 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:44.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:44.799 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:18:44.799 00:18:44.799 --- 10.0.0.1 ping statistics --- 00:18:44.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.799 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:18:44.799 05:14:21 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:44.799 05:14:21 -- nvmf/common.sh@411 -- # return 0 00:18:44.799 05:14:21 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:44.799 05:14:21 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:44.799 05:14:21 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:44.799 05:14:21 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:44.799 05:14:21 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:44.799 05:14:21 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:44.799 05:14:21 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:44.799 05:14:21 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:44.799 05:14:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:44.799 05:14:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:44.799 05:14:21 -- common/autotest_common.sh@10 -- # set +x 00:18:44.799 05:14:21 -- nvmf/common.sh@470 -- # nvmfpid=1889507 00:18:44.799 05:14:21 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:44.799 05:14:21 -- nvmf/common.sh@471 -- # waitforlisten 1889507 00:18:44.799 05:14:21 -- common/autotest_common.sh@817 -- # '[' -z 1889507 ']' 00:18:44.799 05:14:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.799 05:14:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:44.799 05:14:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:44.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:44.799 05:14:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:44.799 05:14:21 -- common/autotest_common.sh@10 -- # set +x 00:18:44.799 [2024-04-24 05:14:21.995388] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:18:44.799 [2024-04-24 05:14:21.995469] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:44.799 EAL: No free 2048 kB hugepages reported on node 1 00:18:44.799 [2024-04-24 05:14:22.032673] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:44.799 [2024-04-24 05:14:22.064338] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.059 [2024-04-24 05:14:22.152344] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:45.059 [2024-04-24 05:14:22.152414] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:45.059 [2024-04-24 05:14:22.152431] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:45.059 [2024-04-24 05:14:22.152444] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:45.059 [2024-04-24 05:14:22.152456] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
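For readers skimming the trace: the `nvmftestinit` sequence above (flush addresses, create a network namespace, move the target port into it, assign 10.0.0.1/10.0.0.2, bring links up, open TCP port 4420, ping both ways) can be condensed into a sketch like the following. The function is hypothetical, not part of the harness; the `cvl_0_0`/`cvl_0_1` names come from this run's ice NICs and must be adapted to other hardware. It is only defined here, not invoked, since running it needs root and real interfaces.

```shell
# Hypothetical condensation of the nvmftestinit steps traced above.
# Interface/namespace names and IPs mirror this run; needs root to invoke.
setup_tcp_test_ns() {
    local target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk

    ip -4 addr flush "$target_if"
    ip -4 addr flush "$initiator_if"

    # The target side lives in its own namespace so initiator and target
    # traffic actually traverses the link between the two ports.
    ip netns add "$ns"
    ip link set "$target_if" netns "$ns"

    ip addr add 10.0.0.1/24 dev "$initiator_if"                    # initiator IP
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"   # target IP

    ip link set "$initiator_if" up
    ip netns exec "$ns" ip link set "$target_if" up
    ip netns exec "$ns" ip link set lo up

    # Let NVMe/TCP (port 4420) through on the initiator side.
    iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT

    # Sanity check: both directions must answer.
    ping -c 1 10.0.0.2
    ip netns exec "$ns" ping -c 1 10.0.0.1
}
```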
00:18:45.059 [2024-04-24 05:14:22.152491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:45.059 05:14:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:45.059 05:14:22 -- common/autotest_common.sh@850 -- # return 0 00:18:45.059 05:14:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:45.059 05:14:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:45.059 05:14:22 -- common/autotest_common.sh@10 -- # set +x 00:18:45.059 05:14:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:45.059 05:14:22 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:45.059 05:14:22 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:45.059 05:14:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:45.059 05:14:22 -- common/autotest_common.sh@10 -- # set +x 00:18:45.059 [2024-04-24 05:14:22.301306] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:45.059 05:14:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:45.059 05:14:22 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:45.059 05:14:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:45.059 05:14:22 -- common/autotest_common.sh@10 -- # set +x 00:18:45.059 05:14:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:45.059 05:14:22 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:45.059 05:14:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:45.059 05:14:22 -- common/autotest_common.sh@10 -- # set +x 00:18:45.059 [2024-04-24 05:14:22.317536] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:45.059 05:14:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:45.059 05:14:22 -- target/zcopy.sh@27 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:45.059 05:14:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:45.059 05:14:22 -- common/autotest_common.sh@10 -- # set +x 00:18:45.059 05:14:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:45.059 05:14:22 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:45.318 05:14:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:45.318 05:14:22 -- common/autotest_common.sh@10 -- # set +x 00:18:45.318 malloc0 00:18:45.318 05:14:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:45.318 05:14:22 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:45.318 05:14:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:45.318 05:14:22 -- common/autotest_common.sh@10 -- # set +x 00:18:45.318 05:14:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:45.318 05:14:22 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:45.318 05:14:22 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:45.318 05:14:22 -- nvmf/common.sh@521 -- # config=() 00:18:45.318 05:14:22 -- nvmf/common.sh@521 -- # local subsystem config 00:18:45.318 05:14:22 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:45.318 05:14:22 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:45.318 { 00:18:45.318 "params": { 00:18:45.318 "name": "Nvme$subsystem", 00:18:45.318 "trtype": "$TEST_TRANSPORT", 00:18:45.318 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:45.318 "adrfam": "ipv4", 00:18:45.318 "trsvcid": "$NVMF_PORT", 00:18:45.318 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:45.318 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:45.318 "hdgst": ${hdgst:-false}, 00:18:45.318 "ddgst": ${ddgst:-false} 00:18:45.318 }, 00:18:45.318 "method": "bdev_nvme_attach_controller" 00:18:45.318 } 00:18:45.318 
EOF 00:18:45.318 )") 00:18:45.318 05:14:22 -- nvmf/common.sh@543 -- # cat 00:18:45.318 05:14:22 -- nvmf/common.sh@545 -- # jq . 00:18:45.318 05:14:22 -- nvmf/common.sh@546 -- # IFS=, 00:18:45.318 05:14:22 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:45.318 "params": { 00:18:45.318 "name": "Nvme1", 00:18:45.318 "trtype": "tcp", 00:18:45.318 "traddr": "10.0.0.2", 00:18:45.318 "adrfam": "ipv4", 00:18:45.318 "trsvcid": "4420", 00:18:45.318 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:45.318 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:45.318 "hdgst": false, 00:18:45.318 "ddgst": false 00:18:45.318 }, 00:18:45.318 "method": "bdev_nvme_attach_controller" 00:18:45.318 }' 00:18:45.318 [2024-04-24 05:14:22.400722] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:18:45.318 [2024-04-24 05:14:22.400801] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1889527 ] 00:18:45.318 EAL: No free 2048 kB hugepages reported on node 1 00:18:45.318 [2024-04-24 05:14:22.438812] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:45.318 [2024-04-24 05:14:22.471121] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.318 [2024-04-24 05:14:22.563820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.886 Running I/O for 10 seconds... 
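The `gen_nvmf_target_json` expansion visible above builds the controller description fed to bdevperf on `/dev/fd/62` by interpolating per-subsystem variables into a heredoc and assembling the result with `jq`. A reduced, runnable sketch of the same idea follows; the function name and the bare `params` object are illustrative only, and the real helper wraps this fragment in a larger bdev configuration:

```shell
# Reduced sketch of the gen_nvmf_target_json pattern traced above:
# interpolate connection parameters into a heredoc and emit JSON.
gen_attach_params() {
    local subsystem=${1:-1} traddr=${2:-10.0.0.2}
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "$traddr",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

# Check that the emitted text is well-formed JSON.
gen_attach_params 1 10.0.0.2 | python3 -m json.tool > /dev/null && echo valid
```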
00:18:55.890 00:18:55.890 Latency(us) 00:18:55.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.890 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:18:55.890 Verification LBA range: start 0x0 length 0x1000 00:18:55.890 Nvme1n1 : 10.01 5564.50 43.47 0.00 0.00 22940.76 4053.52 33010.73 00:18:55.890 =================================================================================================================== 00:18:55.890 Total : 5564.50 43.47 0.00 0.00 22940.76 4053.52 33010.73 00:18:55.890 05:14:33 -- target/zcopy.sh@39 -- # perfpid=1890836 00:18:55.890 05:14:33 -- target/zcopy.sh@41 -- # xtrace_disable 00:18:55.890 05:14:33 -- common/autotest_common.sh@10 -- # set +x 00:18:55.890 05:14:33 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:18:55.890 05:14:33 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:18:55.890 05:14:33 -- nvmf/common.sh@521 -- # config=() 00:18:55.890 05:14:33 -- nvmf/common.sh@521 -- # local subsystem config 00:18:55.890 05:14:33 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:55.890 05:14:33 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:55.890 { 00:18:55.890 "params": { 00:18:55.890 "name": "Nvme$subsystem", 00:18:55.890 "trtype": "$TEST_TRANSPORT", 00:18:55.890 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:55.890 "adrfam": "ipv4", 00:18:55.890 "trsvcid": "$NVMF_PORT", 00:18:55.890 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:55.890 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:55.890 "hdgst": ${hdgst:-false}, 00:18:55.890 "ddgst": ${ddgst:-false} 00:18:55.890 }, 00:18:55.890 "method": "bdev_nvme_attach_controller" 00:18:55.890 } 00:18:55.890 EOF 00:18:55.890 )") 00:18:55.890 05:14:33 -- nvmf/common.sh@543 -- # cat 00:18:55.890 [2024-04-24 05:14:33.131023] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:18:55.890 [2024-04-24 05:14:33.131087] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.890 05:14:33 -- nvmf/common.sh@545 -- # jq . 00:18:55.890 05:14:33 -- nvmf/common.sh@546 -- # IFS=, 00:18:55.890 05:14:33 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:55.890 "params": { 00:18:55.890 "name": "Nvme1", 00:18:55.890 "trtype": "tcp", 00:18:55.890 "traddr": "10.0.0.2", 00:18:55.890 "adrfam": "ipv4", 00:18:55.890 "trsvcid": "4420", 00:18:55.890 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:55.890 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:55.890 "hdgst": false, 00:18:55.890 "ddgst": false 00:18:55.890 }, 00:18:55.890 "method": "bdev_nvme_attach_controller" 00:18:55.890 }' 00:18:55.890 [2024-04-24 05:14:33.139000] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.890 [2024-04-24 05:14:33.139023] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.890 [2024-04-24 05:14:33.147016] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.890 [2024-04-24 05:14:33.147036] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.890 [2024-04-24 05:14:33.155022] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.890 [2024-04-24 05:14:33.155042] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.150 [2024-04-24 05:14:33.163044] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.150 [2024-04-24 05:14:33.163065] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.150 [2024-04-24 05:14:33.171015] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 
00:18:56.150 [2024-04-24 05:14:33.171073] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.150 [2024-04-24 05:14:33.171093] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.150 [2024-04-24 05:14:33.171109] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1890836 ] 00:18:56.150 [2024-04-24 05:14:33.179085] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.150 [2024-04-24 05:14:33.179104] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.150 [2024-04-24 05:14:33.187108] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.150 [2024-04-24 05:14:33.187127] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.150 [2024-04-24 05:14:33.195149] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.150 [2024-04-24 05:14:33.195173] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.150 EAL: No free 2048 kB hugepages reported on node 1 00:18:56.150 [2024-04-24 05:14:33.202938] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:18:56.150 [2024-04-24 05:14:33.203173] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.150 [2024-04-24 05:14:33.203197] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.150 [2024-04-24 05:14:33.211194] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.150 [2024-04-24 05:14:33.211218] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.150 [2024-04-24 05:14:33.219216] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.150 [2024-04-24 05:14:33.219240] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.150 [2024-04-24 05:14:33.227238] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.150 [2024-04-24 05:14:33.227262] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.150 [2024-04-24 05:14:33.234418] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.150 [2024-04-24 05:14:33.235258] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.151 [2024-04-24 05:14:33.235282] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.151 [2024-04-24 05:14:33.243318] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.151 [2024-04-24 05:14:33.243361] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.151 [2024-04-24 05:14:33.251327] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.151 [2024-04-24 05:14:33.251363] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.151 [2024-04-24 05:14:33.259328] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.151 [2024-04-24 05:14:33.259354] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.151 [2024-04-24 05:14:33.267349] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.151 [2024-04-24 05:14:33.267373] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.151 [2024-04-24 05:14:33.275371] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.151 [2024-04-24 05:14:33.275396] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.151 [2024-04-24 05:14:33.283395] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.151 [2024-04-24 05:14:33.283420] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.151 [2024-04-24 05:14:33.291449] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.151 [2024-04-24 05:14:33.291489] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.151 [2024-04-24 05:14:33.299441] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.151 [2024-04-24 05:14:33.299466] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.151 [2024-04-24 05:14:33.307464] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.151 [2024-04-24 05:14:33.307489] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.151 [2024-04-24 05:14:33.315483] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.151 [2024-04-24 05:14:33.315508] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.151 [2024-04-24 05:14:33.323504] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.151 [2024-04-24 05:14:33.323527] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:18:56.151 [2024-04-24 05:14:33.325744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.151 [2024-04-24 05:14:33.331528] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.151 [2024-04-24 05:14:33.331551] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.151 [2024-04-24 05:14:33.339556] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.151 [2024-04-24 05:14:33.339583] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.151 [2024-04-24 05:14:33.347606] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.151 [2024-04-24 05:14:33.347677] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.151 [2024-04-24 05:14:33.355643] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.151 [2024-04-24 05:14:33.355696] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.151 [2024-04-24 05:14:33.363677] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.151 [2024-04-24 05:14:33.363717] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.151 [2024-04-24 05:14:33.371699] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.151 [2024-04-24 05:14:33.371738] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.151 [2024-04-24 05:14:33.379719] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.151 [2024-04-24 05:14:33.379760] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.151 [2024-04-24 05:14:33.387747] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.151 [2024-04-24 05:14:33.387797] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.151 [2024-04-24 05:14:33.395726] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.151 [2024-04-24 05:14:33.395759] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.151 [2024-04-24 05:14:33.403773] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.151 [2024-04-24 05:14:33.403820] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.151 [2024-04-24 05:14:33.411776] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.151 [2024-04-24 05:14:33.411812] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.151 [2024-04-24 05:14:33.419773] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.151 [2024-04-24 05:14:33.419796] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.410 [2024-04-24 05:14:33.427804] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.410 [2024-04-24 05:14:33.427827] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.410 [2024-04-24 05:14:33.435814] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.410 [2024-04-24 05:14:33.435836] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.410 [2024-04-24 05:14:33.443874] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.410 [2024-04-24 05:14:33.443900] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.410 [2024-04-24 05:14:33.451866] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.410 [2024-04-24 05:14:33.451890] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:18:56.410 [2024-04-24 05:14:33.459890] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.410 [2024-04-24 05:14:33.459932] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.410 [2024-04-24 05:14:33.467930] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.410 [2024-04-24 05:14:33.467958] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.410 [2024-04-24 05:14:33.475951] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.410 [2024-04-24 05:14:33.475978] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.410 [2024-04-24 05:14:33.483970] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.410 [2024-04-24 05:14:33.484010] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.410 [2024-04-24 05:14:33.492004] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.410 [2024-04-24 05:14:33.492029] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.410 [2024-04-24 05:14:33.500025] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.410 [2024-04-24 05:14:33.500049] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.410 [2024-04-24 05:14:33.508043] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.410 [2024-04-24 05:14:33.508066] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.410 [2024-04-24 05:14:33.516066] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.410 [2024-04-24 05:14:33.516089] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.410 [2024-04-24 05:14:33.524092] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.410 [2024-04-24 05:14:33.524119] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.410 [2024-04-24 05:14:33.532109] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.410 [2024-04-24 05:14:33.532134] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.410 [2024-04-24 05:14:33.540131] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.410 [2024-04-24 05:14:33.540156] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.410 [2024-04-24 05:14:33.548167] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.410 [2024-04-24 05:14:33.548192] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.410 [2024-04-24 05:14:33.556176] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.410 [2024-04-24 05:14:33.556208] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.410 [2024-04-24 05:14:33.564202] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.410 [2024-04-24 05:14:33.564227] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.410 [2024-04-24 05:14:33.572222] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.410 [2024-04-24 05:14:33.572248] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.410 [2024-04-24 05:14:33.580239] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.410 [2024-04-24 05:14:33.580263] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.410 [2024-04-24 05:14:33.588263] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:18:56.410 [2024-04-24 05:14:33.588287] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.410 [2024-04-24 05:14:33.596285] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.410 [2024-04-24 05:14:33.596310] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.410 [2024-04-24 05:14:33.604310] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.410 [2024-04-24 05:14:33.604334] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.410 [2024-04-24 05:14:33.612334] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.410 [2024-04-24 05:14:33.612359] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.410 [2024-04-24 05:14:33.620618] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.410 [2024-04-24 05:14:33.620658] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.410 [2024-04-24 05:14:33.628385] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.410 [2024-04-24 05:14:33.628412] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.410 Running I/O for 5 seconds... 
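As a consistency check on the 10-second verify run reported earlier: the MiB/s column in that latency table is simply IOPS multiplied by the 8192-byte I/O size. The figures from this run reproduce exactly:

```shell
# The 10 s verify run reported 5564.50 IOPS at an 8192-byte I/O size;
# throughput in MiB/s is IOPS * io_size / 2^20.
awk 'BEGIN { printf "%.2f MiB/s\n", 5564.50 * 8192 / (1024 * 1024) }'
# prints 43.47 MiB/s, matching the table's MiB/s column
```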
00:18:56.410 [2024-04-24 05:14:33.636408] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.410 [2024-04-24 05:14:33.636433] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.410 [2024-04-24 05:14:33.651963] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.410 [2024-04-24 05:14:33.651997] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.410 [2024-04-24 05:14:33.663888] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.410 [2024-04-24 05:14:33.663933] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.410 [2024-04-24 05:14:33.675191] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.410 [2024-04-24 05:14:33.675221] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.668 [2024-04-24 05:14:33.686883] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.668 [2024-04-24 05:14:33.686929] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.668 [2024-04-24 05:14:33.698397] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.668 [2024-04-24 05:14:33.698427] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.669 [2024-04-24 05:14:33.710422] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.669 [2024-04-24 05:14:33.710452] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.669 [2024-04-24 05:14:33.721876] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.669 [2024-04-24 05:14:33.721919] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.669 [2024-04-24 05:14:33.733380] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.669 [2024-04-24 05:14:33.733410] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.669 [2024-04-24 05:14:33.745093] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.669 [2024-04-24 05:14:33.745130] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.669 [2024-04-24 05:14:33.756562] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.669 [2024-04-24 05:14:33.756592] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.669 [2024-04-24 05:14:33.768227] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.669 [2024-04-24 05:14:33.768257] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.669 [2024-04-24 05:14:33.779685] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.669 [2024-04-24 05:14:33.779712] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.669 [2024-04-24 05:14:33.791617] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.669 [2024-04-24 05:14:33.791655] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.669 [2024-04-24 05:14:33.803530] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.669 [2024-04-24 05:14:33.803559] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.669 [2024-04-24 05:14:33.815416] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.669 [2024-04-24 05:14:33.815445] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.669 [2024-04-24 05:14:33.827223] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:18:56.669 [2024-04-24 05:14:33.827253] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.669 [2024-04-24 05:14:33.840417] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.669 [2024-04-24 05:14:33.840448] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.669 [2024-04-24 05:14:33.851267] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.669 [2024-04-24 05:14:33.851296] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.669 [2024-04-24 05:14:33.862981] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.669 [2024-04-24 05:14:33.863011] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.669 [2024-04-24 05:14:33.874617] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.669 [2024-04-24 05:14:33.874672] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.669 [2024-04-24 05:14:33.888058] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.669 [2024-04-24 05:14:33.888088] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.669 [2024-04-24 05:14:33.899099] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.669 [2024-04-24 05:14:33.899129] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.669 [2024-04-24 05:14:33.910787] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.669 [2024-04-24 05:14:33.910814] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.669 [2024-04-24 05:14:33.922652] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.669 
[2024-04-24 05:14:33.922695] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.669 [2024-04-24 05:14:33.933984] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.669 [2024-04-24 05:14:33.934014] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.927 [2024-04-24 05:14:33.945607] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.927 [2024-04-24 05:14:33.945647] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.927 [2024-04-24 05:14:33.957582] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.927 [2024-04-24 05:14:33.957612] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.927 [2024-04-24 05:14:33.969048] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.927 [2024-04-24 05:14:33.969078] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.927 [2024-04-24 05:14:33.980506] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.927 [2024-04-24 05:14:33.980536] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.927 [2024-04-24 05:14:33.992445] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.927 [2024-04-24 05:14:33.992474] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.927 [2024-04-24 05:14:34.006396] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.927 [2024-04-24 05:14:34.006426] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.927 [2024-04-24 05:14:34.017511] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.927 [2024-04-24 05:14:34.017541] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.927 [2024-04-24 05:14:34.029254] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.927 [2024-04-24 05:14:34.029283] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.927 [2024-04-24 05:14:34.040722] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.927 [2024-04-24 05:14:34.040750] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.927 [2024-04-24 05:14:34.052526] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.927 [2024-04-24 05:14:34.052556] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.927 [2024-04-24 05:14:34.064488] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.927 [2024-04-24 05:14:34.064518] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.928 [2024-04-24 05:14:34.076201] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.928 [2024-04-24 05:14:34.076231] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.928 [2024-04-24 05:14:34.087872] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.928 [2024-04-24 05:14:34.087900] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.928 [2024-04-24 05:14:34.099517] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.928 [2024-04-24 05:14:34.099547] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.928 [2024-04-24 05:14:34.112623] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.928 [2024-04-24 05:14:34.112679] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:18:56.928 [2024-04-24 05:14:34.123151] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.928 [2024-04-24 05:14:34.123180] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.928 [2024-04-24 05:14:34.135728] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.928 [2024-04-24 05:14:34.135755] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.928 [2024-04-24 05:14:34.147608] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.928 [2024-04-24 05:14:34.147649] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.928 [2024-04-24 05:14:34.159606] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.928 [2024-04-24 05:14:34.159645] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.928 [2024-04-24 05:14:34.171009] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.928 [2024-04-24 05:14:34.171039] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.928 [2024-04-24 05:14:34.182624] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.928 [2024-04-24 05:14:34.182678] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.928 [2024-04-24 05:14:34.194252] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.928 [2024-04-24 05:14:34.194282] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.186 [2024-04-24 05:14:34.205584] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.186 [2024-04-24 05:14:34.205614] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.186 [2024-04-24 05:14:34.217465] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.186 [2024-04-24 05:14:34.217496] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.186 [2024-04-24 05:14:34.229105] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.186 [2024-04-24 05:14:34.229135] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.186 [2024-04-24 05:14:34.240837] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.186 [2024-04-24 05:14:34.240864] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.186 [2024-04-24 05:14:34.252841] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.186 [2024-04-24 05:14:34.252868] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.186 [2024-04-24 05:14:34.264522] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.186 [2024-04-24 05:14:34.264551] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.186 [2024-04-24 05:14:34.275212] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.186 [2024-04-24 05:14:34.275239] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.186 [2024-04-24 05:14:34.286183] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.186 [2024-04-24 05:14:34.286209] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.186 [2024-04-24 05:14:34.299121] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.186 [2024-04-24 05:14:34.299148] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.186 [2024-04-24 05:14:34.309640] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:18:57.186 [2024-04-24 05:14:34.309666] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.186 [2024-04-24 05:14:34.320177] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.186 [2024-04-24 05:14:34.320204] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.186 [2024-04-24 05:14:34.330747] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.186 [2024-04-24 05:14:34.330774] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.186 [2024-04-24 05:14:34.341428] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.186 [2024-04-24 05:14:34.341455] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.186 [2024-04-24 05:14:34.353896] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.186 [2024-04-24 05:14:34.353922] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.186 [2024-04-24 05:14:34.363826] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.186 [2024-04-24 05:14:34.363853] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.186 [2024-04-24 05:14:34.374720] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.186 [2024-04-24 05:14:34.374746] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.186 [2024-04-24 05:14:34.385757] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.186 [2024-04-24 05:14:34.385783] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.186 [2024-04-24 05:14:34.396374] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.186 
[2024-04-24 05:14:34.396400] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.186 [2024-04-24 05:14:34.407165] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.186 [2024-04-24 05:14:34.407191] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.186 [2024-04-24 05:14:34.419891] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.186 [2024-04-24 05:14:34.419917] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.186 [2024-04-24 05:14:34.430315] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.186 [2024-04-24 05:14:34.430342] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.186 [2024-04-24 05:14:34.440745] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.186 [2024-04-24 05:14:34.440772] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.186 [2024-04-24 05:14:34.451256] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.186 [2024-04-24 05:14:34.451282] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.445 [2024-04-24 05:14:34.462263] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.445 [2024-04-24 05:14:34.462290] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.445 [2024-04-24 05:14:34.473233] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.445 [2024-04-24 05:14:34.473260] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.445 [2024-04-24 05:14:34.484012] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.446 [2024-04-24 05:14:34.484040] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.446 [2024-04-24 05:14:34.495409] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.446 [2024-04-24 05:14:34.495436] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.446 [2024-04-24 05:14:34.506564] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.446 [2024-04-24 05:14:34.506591] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.446 [2024-04-24 05:14:34.517173] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.446 [2024-04-24 05:14:34.517200] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.446 [2024-04-24 05:14:34.527396] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.446 [2024-04-24 05:14:34.527423] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.446 [2024-04-24 05:14:34.537822] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.446 [2024-04-24 05:14:34.537849] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.446 [2024-04-24 05:14:34.548767] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.446 [2024-04-24 05:14:34.548794] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.446 [2024-04-24 05:14:34.561519] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.446 [2024-04-24 05:14:34.561546] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.446 [2024-04-24 05:14:34.572191] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.446 [2024-04-24 05:14:34.572218] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:18:57.446 [2024-04-24 05:14:34.582985] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.446 [2024-04-24 05:14:34.583012] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.446 [2024-04-24 05:14:34.593775] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.446 [2024-04-24 05:14:34.593802] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.446 [2024-04-24 05:14:34.604425] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.446 [2024-04-24 05:14:34.604458] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.446 [2024-04-24 05:14:34.617084] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.446 [2024-04-24 05:14:34.617110] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.446 [2024-04-24 05:14:34.627045] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.446 [2024-04-24 05:14:34.627072] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.446 [2024-04-24 05:14:34.637450] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.446 [2024-04-24 05:14:34.637477] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.446 [2024-04-24 05:14:34.648157] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.446 [2024-04-24 05:14:34.648184] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.446 [2024-04-24 05:14:34.659239] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.446 [2024-04-24 05:14:34.659265] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.446 [2024-04-24 05:14:34.671939] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.446 [2024-04-24 05:14:34.671965] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.446 [2024-04-24 05:14:34.682049] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.446 [2024-04-24 05:14:34.682076] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.446 [2024-04-24 05:14:34.692950] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.446 [2024-04-24 05:14:34.692977] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.446 [2024-04-24 05:14:34.703940] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.446 [2024-04-24 05:14:34.703967] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.446 [2024-04-24 05:14:34.714336] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.446 [2024-04-24 05:14:34.714363] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.705 [2024-04-24 05:14:34.725342] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.705 [2024-04-24 05:14:34.725369] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.705 [2024-04-24 05:14:34.735982] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.705 [2024-04-24 05:14:34.736009] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.705 [2024-04-24 05:14:34.747019] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.705 [2024-04-24 05:14:34.747045] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.705 [2024-04-24 05:14:34.757703] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:18:57.705 [2024-04-24 05:14:34.757729] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.705 [2024-04-24 05:14:34.768437] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.705 [2024-04-24 05:14:34.768463] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.705 [2024-04-24 05:14:34.779354] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.705 [2024-04-24 05:14:34.779380] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.705 [2024-04-24 05:14:34.792404] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.705 [2024-04-24 05:14:34.792431] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.705 [2024-04-24 05:14:34.802575] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.705 [2024-04-24 05:14:34.802601] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.705 [2024-04-24 05:14:34.813309] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.705 [2024-04-24 05:14:34.813345] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.705 [2024-04-24 05:14:34.823721] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.705 [2024-04-24 05:14:34.823748] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.705 [2024-04-24 05:14:34.834377] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.705 [2024-04-24 05:14:34.834404] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.705 [2024-04-24 05:14:34.845194] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.705 
[2024-04-24 05:14:34.845221] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.705 [2024-04-24 05:14:34.855677] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.705 [2024-04-24 05:14:34.855704] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.705 [2024-04-24 05:14:34.866020] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.705 [2024-04-24 05:14:34.866050] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.705 [2024-04-24 05:14:34.877524] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.705 [2024-04-24 05:14:34.877555] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.705 [2024-04-24 05:14:34.889593] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.705 [2024-04-24 05:14:34.889623] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.705 [2024-04-24 05:14:34.901198] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.705 [2024-04-24 05:14:34.901228] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.705 [2024-04-24 05:14:34.912695] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.705 [2024-04-24 05:14:34.912722] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.705 [2024-04-24 05:14:34.924622] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.705 [2024-04-24 05:14:34.924677] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.705 [2024-04-24 05:14:34.936165] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.705 [2024-04-24 05:14:34.936194] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.705 [2024-04-24 05:14:34.947515] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.705 [2024-04-24 05:14:34.947546] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.705 [2024-04-24 05:14:34.959044] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.705 [2024-04-24 05:14:34.959074] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.705 [2024-04-24 05:14:34.971136] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.705 [2024-04-24 05:14:34.971166] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.965 [2024-04-24 05:14:34.982681] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.965 [2024-04-24 05:14:34.982709] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.965 [2024-04-24 05:14:34.994240] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.965 [2024-04-24 05:14:34.994269] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.965 [2024-04-24 05:14:35.005795] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.965 [2024-04-24 05:14:35.005823] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.965 [2024-04-24 05:14:35.017593] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.965 [2024-04-24 05:14:35.017622] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.965 [2024-04-24 05:14:35.029568] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.965 [2024-04-24 05:14:35.029607] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:18:57.965 [2024-04-24 05:14:35.041355] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.965 [2024-04-24 05:14:35.041385] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.965 [2024-04-24 05:14:35.052746] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.965 [2024-04-24 05:14:35.052774] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.965 [2024-04-24 05:14:35.066488] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.965 [2024-04-24 05:14:35.066516] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.965 [2024-04-24 05:14:35.077298] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.965 [2024-04-24 05:14:35.077328] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.965 [2024-04-24 05:14:35.089236] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.965 [2024-04-24 05:14:35.089265] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.965 [2024-04-24 05:14:35.100985] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.965 [2024-04-24 05:14:35.101015] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.965 [2024-04-24 05:14:35.112380] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.965 [2024-04-24 05:14:35.112409] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.965 [2024-04-24 05:14:35.123964] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.965 [2024-04-24 05:14:35.123994] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.965 [2024-04-24 05:14:35.135824] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.966 [2024-04-24 05:14:35.135851] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.966 [2024-04-24 05:14:35.147583] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.966 [2024-04-24 05:14:35.147612] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.966 [2024-04-24 05:14:35.159019] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.966 [2024-04-24 05:14:35.159048] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.966 [2024-04-24 05:14:35.170991] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.966 [2024-04-24 05:14:35.171021] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.966 [2024-04-24 05:14:35.184717] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.966 [2024-04-24 05:14:35.184744] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.966 [2024-04-24 05:14:35.195636] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.966 [2024-04-24 05:14:35.195680] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.966 [2024-04-24 05:14:35.207294] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.966 [2024-04-24 05:14:35.207324] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.966 [2024-04-24 05:14:35.219185] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.966 [2024-04-24 05:14:35.219214] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.966 [2024-04-24 05:14:35.230894] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:18:57.966 [2024-04-24 05:14:35.230937] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.225 [2024-04-24 05:14:35.242388] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.225 [2024-04-24 05:14:35.242418] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.225 [2024-04-24 05:14:35.254286] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.225 [2024-04-24 05:14:35.254323] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.225 [2024-04-24 05:14:35.266441] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.225 [2024-04-24 05:14:35.266470] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.225 [2024-04-24 05:14:35.277923] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.225 [2024-04-24 05:14:35.277967] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.225 [2024-04-24 05:14:35.289643] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.226 [2024-04-24 05:14:35.289688] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.226 [2024-04-24 05:14:35.300738] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.226 [2024-04-24 05:14:35.300765] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.226 [2024-04-24 05:14:35.312352] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.226 [2024-04-24 05:14:35.312381] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.226 [2024-04-24 05:14:35.324048] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.226 
[2024-04-24 05:14:35.324078] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.226 [2024-04-24 05:14:35.335583] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.226 [2024-04-24 05:14:35.335612] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.226 [2024-04-24 05:14:35.349045] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.226 [2024-04-24 05:14:35.349074] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.226 [2024-04-24 05:14:35.360143] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.226 [2024-04-24 05:14:35.360172] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.226 [2024-04-24 05:14:35.371778] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.226 [2024-04-24 05:14:35.371804] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.226 [2024-04-24 05:14:35.383260] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.226 [2024-04-24 05:14:35.383290] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.226 [2024-04-24 05:14:35.394778] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.226 [2024-04-24 05:14:35.394805] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.226 [2024-04-24 05:14:35.406535] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.226 [2024-04-24 05:14:35.406564] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.226 [2024-04-24 05:14:35.418222] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.226 [2024-04-24 05:14:35.418251] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.226 [2024-04-24 05:14:35.429943] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.226 [2024-04-24 05:14:35.429972] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.226 [2024-04-24 05:14:35.441815] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.226 [2024-04-24 05:14:35.441842] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.226 [2024-04-24 05:14:35.453807] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.226 [2024-04-24 05:14:35.453835] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.226 [2024-04-24 05:14:35.465169] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.226 [2024-04-24 05:14:35.465199] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.226 [2024-04-24 05:14:35.476679] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.226 [2024-04-24 05:14:35.476713] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.226 [2024-04-24 05:14:35.488527] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.226 [2024-04-24 05:14:35.488556] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.485 [2024-04-24 05:14:35.500920] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.485 [2024-04-24 05:14:35.500964] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.485 [2024-04-24 05:14:35.513182] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.485 [2024-04-24 05:14:35.513211] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:18:58.485 [2024-04-24 05:14:35.525068] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.485 [2024-04-24 05:14:35.525098] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.485 [2024-04-24 05:14:35.536642] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.485 [2024-04-24 05:14:35.536685] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.485 [2024-04-24 05:14:35.548724] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.485 [2024-04-24 05:14:35.548751] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.485 [2024-04-24 05:14:35.560230] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.486 [2024-04-24 05:14:35.560260] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.486 [2024-04-24 05:14:35.571940] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.486 [2024-04-24 05:14:35.571970] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.486 [2024-04-24 05:14:35.583885] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.486 [2024-04-24 05:14:35.583927] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.486 [2024-04-24 05:14:35.595292] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.486 [2024-04-24 05:14:35.595322] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.486 [2024-04-24 05:14:35.607128] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.486 [2024-04-24 05:14:35.607158] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.486 [2024-04-24 05:14:35.620948] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.486 [2024-04-24 05:14:35.620978] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.486 [2024-04-24 05:14:35.631705] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.486 [2024-04-24 05:14:35.631732] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.486 [2024-04-24 05:14:35.643591] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.486 [2024-04-24 05:14:35.643621] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.486 [2024-04-24 05:14:35.654895] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.486 [2024-04-24 05:14:35.654939] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.486 [2024-04-24 05:14:35.666046] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.486 [2024-04-24 05:14:35.666076] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.486 [2024-04-24 05:14:35.677212] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.486 [2024-04-24 05:14:35.677242] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.486 [2024-04-24 05:14:35.688641] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.486 [2024-04-24 05:14:35.688684] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.486 [2024-04-24 05:14:35.700348] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.486 [2024-04-24 05:14:35.700378] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.486 [2024-04-24 05:14:35.711897] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:18:58.486 [2024-04-24 05:14:35.711925] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.486 [2024-04-24 05:14:35.722813] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.486 [2024-04-24 05:14:35.722840] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.486 [2024-04-24 05:14:35.734414] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.486 [2024-04-24 05:14:35.734444] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.486 [2024-04-24 05:14:35.745676] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.486 [2024-04-24 05:14:35.745702] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.744 [2024-04-24 05:14:35.756860] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.744 [2024-04-24 05:14:35.756892] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.744 [2024-04-24 05:14:35.768299] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.744 [2024-04-24 05:14:35.768329] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.744 [2024-04-24 05:14:35.779504] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.744 [2024-04-24 05:14:35.779533] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.744 [2024-04-24 05:14:35.790774] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.744 [2024-04-24 05:14:35.790801] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.744 [2024-04-24 05:14:35.804481] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.744 
[2024-04-24 05:14:35.804510] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.744 [2024-04-24 05:14:35.815863] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.744 [2024-04-24 05:14:35.815891] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.744 [2024-04-24 05:14:35.827593] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.744 [2024-04-24 05:14:35.827622] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.744 [2024-04-24 05:14:35.839091] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.744 [2024-04-24 05:14:35.839121] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.744 [2024-04-24 05:14:35.850470] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.744 [2024-04-24 05:14:35.850499] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.744 [2024-04-24 05:14:35.862035] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.744 [2024-04-24 05:14:35.862065] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.744 [2024-04-24 05:14:35.873163] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.744 [2024-04-24 05:14:35.873193] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.744 [2024-04-24 05:14:35.884160] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.744 [2024-04-24 05:14:35.884191] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.744 [2024-04-24 05:14:35.895773] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.744 [2024-04-24 05:14:35.895800] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.744 [2024-04-24 05:14:35.907074] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.744 [2024-04-24 05:14:35.907103] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.744 [2024-04-24 05:14:35.918360] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.744 [2024-04-24 05:14:35.918390] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.744 [2024-04-24 05:14:35.929761] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.744 [2024-04-24 05:14:35.929788] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.744 [2024-04-24 05:14:35.943201] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.744 [2024-04-24 05:14:35.943230] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.744 [2024-04-24 05:14:35.953932] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.744 [2024-04-24 05:14:35.953962] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.744 [2024-04-24 05:14:35.965433] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.744 [2024-04-24 05:14:35.965463] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.744 [2024-04-24 05:14:35.977028] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.744 [2024-04-24 05:14:35.977057] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.744 [2024-04-24 05:14:35.988841] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.744 [2024-04-24 05:14:35.988868] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:18:58.744 [2024-04-24 05:14:36.000701] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.744 [2024-04-24 05:14:36.000728] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.744 [2024-04-24 05:14:36.012123] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.744 [2024-04-24 05:14:36.012154] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.002 [2024-04-24 05:14:36.023761] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.002 [2024-04-24 05:14:36.023788] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.002 [2024-04-24 05:14:36.036970] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.002 [2024-04-24 05:14:36.037000] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.002 [2024-04-24 05:14:36.047079] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.002 [2024-04-24 05:14:36.047109] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.002 [2024-04-24 05:14:36.059258] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.002 [2024-04-24 05:14:36.059288] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.002 [2024-04-24 05:14:36.070930] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.002 [2024-04-24 05:14:36.070972] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.002 [2024-04-24 05:14:36.082431] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.002 [2024-04-24 05:14:36.082458] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.002 [2024-04-24 05:14:36.093583] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.002 [2024-04-24 05:14:36.093611] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.002 [2024-04-24 05:14:36.105079] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.002 [2024-04-24 05:14:36.105106] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.002 [2024-04-24 05:14:36.115746] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.002 [2024-04-24 05:14:36.115772] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.002 [2024-04-24 05:14:36.126701] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.002 [2024-04-24 05:14:36.126727] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.003 [2024-04-24 05:14:36.138278] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.003 [2024-04-24 05:14:36.138307] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.003 [2024-04-24 05:14:36.149909] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.003 [2024-04-24 05:14:36.149936] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.003 [2024-04-24 05:14:36.161339] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.003 [2024-04-24 05:14:36.161369] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.003 [2024-04-24 05:14:36.173475] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.003 [2024-04-24 05:14:36.173505] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.003 [2024-04-24 05:14:36.185151] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:18:59.003 [2024-04-24 05:14:36.185181] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.003 [2024-04-24 05:14:36.196822] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.003 [2024-04-24 05:14:36.196850] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.003 [2024-04-24 05:14:36.208326] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.003 [2024-04-24 05:14:36.208356] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.003 [2024-04-24 05:14:36.220144] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.003 [2024-04-24 05:14:36.220174] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.003 [2024-04-24 05:14:36.231005] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.003 [2024-04-24 05:14:36.231031] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.003 [2024-04-24 05:14:36.242921] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.003 [2024-04-24 05:14:36.242964] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.003 [2024-04-24 05:14:36.254796] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.003 [2024-04-24 05:14:36.254823] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.003 [2024-04-24 05:14:36.266467] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.003 [2024-04-24 05:14:36.266497] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.260 [2024-04-24 05:14:36.277980] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.260 
[2024-04-24 05:14:36.278011] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.260 [2024-04-24 05:14:36.289805] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.260 [2024-04-24 05:14:36.289832] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.260 [2024-04-24 05:14:36.301296] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.260 [2024-04-24 05:14:36.301326] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.260 [2024-04-24 05:14:36.313313] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.260 [2024-04-24 05:14:36.313344] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.260 [2024-04-24 05:14:36.325175] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.260 [2024-04-24 05:14:36.325205] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.260 [2024-04-24 05:14:36.337132] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.260 [2024-04-24 05:14:36.337162] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.260 [2024-04-24 05:14:36.348800] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.260 [2024-04-24 05:14:36.348835] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.260 [2024-04-24 05:14:36.362313] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.260 [2024-04-24 05:14:36.362343] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.260 [2024-04-24 05:14:36.373560] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.260 [2024-04-24 05:14:36.373589] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.260 [2024-04-24 05:14:36.385491] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.260 [2024-04-24 05:14:36.385521] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.260 [2024-04-24 05:14:36.396917] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.260 [2024-04-24 05:14:36.396944] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.260 [2024-04-24 05:14:36.408512] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.260 [2024-04-24 05:14:36.408542] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.260 [2024-04-24 05:14:36.422099] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.260 [2024-04-24 05:14:36.422129] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.260 [2024-04-24 05:14:36.433310] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.260 [2024-04-24 05:14:36.433340] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.260 [2024-04-24 05:14:36.444969] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.260 [2024-04-24 05:14:36.444999] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.260 [2024-04-24 05:14:36.456278] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.260 [2024-04-24 05:14:36.456308] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.260 [2024-04-24 05:14:36.467318] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.260 [2024-04-24 05:14:36.467347] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:18:59.260 [2024-04-24 05:14:36.478920] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.260 [2024-04-24 05:14:36.478965] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.260 [2024-04-24 05:14:36.490655] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.260 [2024-04-24 05:14:36.490697] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.260 [2024-04-24 05:14:36.501882] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.260 [2024-04-24 05:14:36.501908] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.260 [2024-04-24 05:14:36.513545] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.260 [2024-04-24 05:14:36.513574] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.260 [2024-04-24 05:14:36.525343] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.260 [2024-04-24 05:14:36.525372] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.518 [2024-04-24 05:14:36.536858] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.518 [2024-04-24 05:14:36.536885] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.518 [2024-04-24 05:14:36.548341] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.518 [2024-04-24 05:14:36.548371] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.518 [2024-04-24 05:14:36.559795] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.518 [2024-04-24 05:14:36.559822] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.518 [2024-04-24 05:14:36.571209] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.518 [2024-04-24 05:14:36.571247] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.518 [2024-04-24 05:14:36.582743] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.518 [2024-04-24 05:14:36.582770] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.518 [2024-04-24 05:14:36.594099] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.518 [2024-04-24 05:14:36.594128] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.518 [2024-04-24 05:14:36.605846] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.518 [2024-04-24 05:14:36.605873] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.518 [2024-04-24 05:14:36.617373] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.518 [2024-04-24 05:14:36.617403] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.518 [2024-04-24 05:14:36.630791] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.518 [2024-04-24 05:14:36.630818] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.518 [2024-04-24 05:14:36.641558] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.518 [2024-04-24 05:14:36.641587] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.518 [2024-04-24 05:14:36.652826] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.518 [2024-04-24 05:14:36.652853] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.518 [2024-04-24 05:14:36.666303] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use
00:18:59.518 [2024-04-24 05:14:36.666332] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:59.518 [2024-04-24 05:14:36.677078] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:59.518 [2024-04-24 05:14:36.677108] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the same two-line error pair repeats for every subsequent add-namespace attempt, from 05:14:36.688 through 05:14:38.652 (elapsed 00:18:59.518 to 00:19:01.606)]
[2024-04-24 05:14:38.652068] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.606 [2024-04-24 05:14:38.659450] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.606 [2024-04-24 05:14:38.659478] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.606 00:19:01.606 Latency(us) 00:19:01.606 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.606 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:19:01.606 Nvme1n1 : 5.01 11174.72 87.30 0.00 0.00 11438.37 4320.52 23690.05 00:19:01.606 =================================================================================================================== 00:19:01.606 Total : 11174.72 87.30 0.00 0.00 11438.37 4320.52 23690.05 00:19:01.606 [2024-04-24 05:14:38.667467] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.606 [2024-04-24 05:14:38.667495] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.606 [2024-04-24 05:14:38.675477] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.606 [2024-04-24 05:14:38.675503] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.606 [2024-04-24 05:14:38.683566] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.606 [2024-04-24 05:14:38.683620] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.606 [2024-04-24 05:14:38.691582] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.606 [2024-04-24 05:14:38.691642] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.606 [2024-04-24 05:14:38.699598] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.606 [2024-04-24 05:14:38.699667] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.606 [2024-04-24 05:14:38.707619] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.606 [2024-04-24 05:14:38.707675] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.606 [2024-04-24 05:14:38.715648] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.606 [2024-04-24 05:14:38.715699] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.606 [2024-04-24 05:14:38.723684] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.606 [2024-04-24 05:14:38.723737] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.606 [2024-04-24 05:14:38.731698] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.606 [2024-04-24 05:14:38.731749] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.606 [2024-04-24 05:14:38.739735] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.606 [2024-04-24 05:14:38.739789] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.606 [2024-04-24 05:14:38.747756] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.607 [2024-04-24 05:14:38.747831] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.607 [2024-04-24 05:14:38.755780] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.607 [2024-04-24 05:14:38.755833] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.607 [2024-04-24 05:14:38.763815] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.607 [2024-04-24 05:14:38.763872] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:19:01.607 [2024-04-24 05:14:38.771801] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.607 [2024-04-24 05:14:38.771854] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.607 [2024-04-24 05:14:38.779817] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.607 [2024-04-24 05:14:38.779866] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.607 [2024-04-24 05:14:38.787843] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.607 [2024-04-24 05:14:38.787893] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.607 [2024-04-24 05:14:38.795878] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.607 [2024-04-24 05:14:38.795928] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.607 [2024-04-24 05:14:38.803847] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.607 [2024-04-24 05:14:38.803874] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.607 [2024-04-24 05:14:38.811867] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.607 [2024-04-24 05:14:38.811896] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.607 [2024-04-24 05:14:38.819933] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.607 [2024-04-24 05:14:38.819984] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.607 [2024-04-24 05:14:38.827953] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.607 [2024-04-24 05:14:38.828004] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.607 [2024-04-24 05:14:38.835982] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.607 [2024-04-24 05:14:38.836031] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.607 [2024-04-24 05:14:38.843963] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.607 [2024-04-24 05:14:38.843990] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.607 [2024-04-24 05:14:38.852042] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.607 [2024-04-24 05:14:38.852090] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.607 [2024-04-24 05:14:38.860040] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.607 [2024-04-24 05:14:38.860091] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.607 [2024-04-24 05:14:38.868078] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.607 [2024-04-24 05:14:38.868141] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.607 [2024-04-24 05:14:38.876042] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.607 [2024-04-24 05:14:38.876067] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.865 [2024-04-24 05:14:38.884066] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.865 [2024-04-24 05:14:38.884093] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.865 [2024-04-24 05:14:38.892082] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.865 [2024-04-24 05:14:38.892106] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.865 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 
42: kill: (1890836) - No such process 00:19:01.865 05:14:38 -- target/zcopy.sh@49 -- # wait 1890836 00:19:01.865 05:14:38 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:01.865 05:14:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:01.865 05:14:38 -- common/autotest_common.sh@10 -- # set +x 00:19:01.865 05:14:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:01.865 05:14:38 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:19:01.865 05:14:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:01.865 05:14:38 -- common/autotest_common.sh@10 -- # set +x 00:19:01.865 delay0 00:19:01.865 05:14:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:01.865 05:14:38 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:19:01.865 05:14:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:01.865 05:14:38 -- common/autotest_common.sh@10 -- # set +x 00:19:01.865 05:14:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:01.865 05:14:38 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:19:01.865 EAL: No free 2048 kB hugepages reported on node 1 00:19:01.865 [2024-04-24 05:14:38.971336] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:19:08.440 Initializing NVMe Controllers 00:19:08.440 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:08.440 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:08.440 Initialization complete. Launching workers. 
00:19:08.440 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 197 00:19:08.440 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 484, failed to submit 33 00:19:08.440 success 263, unsuccess 221, failed 0 00:19:08.440 05:14:45 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:19:08.440 05:14:45 -- target/zcopy.sh@60 -- # nvmftestfini 00:19:08.440 05:14:45 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:08.441 05:14:45 -- nvmf/common.sh@117 -- # sync 00:19:08.441 05:14:45 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:08.441 05:14:45 -- nvmf/common.sh@120 -- # set +e 00:19:08.441 05:14:45 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:08.441 05:14:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:08.441 rmmod nvme_tcp 00:19:08.441 rmmod nvme_fabrics 00:19:08.441 rmmod nvme_keyring 00:19:08.441 05:14:45 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:08.441 05:14:45 -- nvmf/common.sh@124 -- # set -e 00:19:08.441 05:14:45 -- nvmf/common.sh@125 -- # return 0 00:19:08.441 05:14:45 -- nvmf/common.sh@478 -- # '[' -n 1889507 ']' 00:19:08.441 05:14:45 -- nvmf/common.sh@479 -- # killprocess 1889507 00:19:08.441 05:14:45 -- common/autotest_common.sh@936 -- # '[' -z 1889507 ']' 00:19:08.441 05:14:45 -- common/autotest_common.sh@940 -- # kill -0 1889507 00:19:08.441 05:14:45 -- common/autotest_common.sh@941 -- # uname 00:19:08.441 05:14:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:08.441 05:14:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1889507 00:19:08.441 05:14:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:08.441 05:14:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:08.441 05:14:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1889507' 00:19:08.441 killing process with pid 1889507 00:19:08.441 05:14:45 -- common/autotest_common.sh@955 -- # kill 1889507 
00:19:08.441 05:14:45 -- common/autotest_common.sh@960 -- # wait 1889507 00:19:08.441 05:14:45 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:08.441 05:14:45 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:08.441 05:14:45 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:08.441 05:14:45 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:08.441 05:14:45 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:08.441 05:14:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:08.441 05:14:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:08.441 05:14:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:10.348 05:14:47 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:10.348 00:19:10.348 real 0m27.643s 00:19:10.348 user 0m40.993s 00:19:10.348 sys 0m8.181s 00:19:10.348 05:14:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:10.348 05:14:47 -- common/autotest_common.sh@10 -- # set +x 00:19:10.348 ************************************ 00:19:10.348 END TEST nvmf_zcopy 00:19:10.348 ************************************ 00:19:10.348 05:14:47 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:19:10.348 05:14:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:10.348 05:14:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:10.348 05:14:47 -- common/autotest_common.sh@10 -- # set +x 00:19:10.348 ************************************ 00:19:10.348 START TEST nvmf_nmic 00:19:10.348 ************************************ 00:19:10.348 05:14:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:19:10.607 * Looking for test storage... 
00:19:10.607 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:10.607 05:14:47 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:10.607 05:14:47 -- nvmf/common.sh@7 -- # uname -s 00:19:10.607 05:14:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:10.607 05:14:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:10.607 05:14:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:10.607 05:14:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:10.607 05:14:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:10.607 05:14:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:10.607 05:14:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:10.607 05:14:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:10.607 05:14:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:10.607 05:14:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:10.607 05:14:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:10.607 05:14:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:10.607 05:14:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:10.607 05:14:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:10.607 05:14:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:10.607 05:14:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:10.607 05:14:47 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:10.607 05:14:47 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:10.607 05:14:47 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:10.607 05:14:47 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:10.607 05:14:47 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.607 05:14:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.607 05:14:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.607 05:14:47 -- paths/export.sh@5 -- # export PATH 00:19:10.607 05:14:47 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.607 05:14:47 -- nvmf/common.sh@47 -- # : 0 00:19:10.607 05:14:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:10.607 05:14:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:10.607 05:14:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:10.607 05:14:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:10.607 05:14:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:10.607 05:14:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:10.607 05:14:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:10.607 05:14:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:10.607 05:14:47 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:10.607 05:14:47 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:10.607 05:14:47 -- target/nmic.sh@14 -- # nvmftestinit 00:19:10.607 05:14:47 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:10.607 05:14:47 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:10.607 05:14:47 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:10.607 05:14:47 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:10.608 05:14:47 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:10.608 05:14:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:10.608 05:14:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:10.608 05:14:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:10.608 05:14:47 -- nvmf/common.sh@403 
-- # [[ phy != virt ]] 00:19:10.608 05:14:47 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:10.608 05:14:47 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:10.608 05:14:47 -- common/autotest_common.sh@10 -- # set +x 00:19:12.514 05:14:49 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:12.514 05:14:49 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:12.514 05:14:49 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:12.514 05:14:49 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:12.514 05:14:49 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:12.514 05:14:49 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:12.514 05:14:49 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:12.514 05:14:49 -- nvmf/common.sh@295 -- # net_devs=() 00:19:12.514 05:14:49 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:12.514 05:14:49 -- nvmf/common.sh@296 -- # e810=() 00:19:12.514 05:14:49 -- nvmf/common.sh@296 -- # local -ga e810 00:19:12.514 05:14:49 -- nvmf/common.sh@297 -- # x722=() 00:19:12.514 05:14:49 -- nvmf/common.sh@297 -- # local -ga x722 00:19:12.514 05:14:49 -- nvmf/common.sh@298 -- # mlx=() 00:19:12.514 05:14:49 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:12.514 05:14:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:12.514 05:14:49 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:12.514 05:14:49 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:12.514 05:14:49 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:12.514 05:14:49 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:12.514 05:14:49 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:12.514 05:14:49 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:12.514 05:14:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:12.514 05:14:49 -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:12.514 05:14:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:12.514 05:14:49 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:12.514 05:14:49 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:12.514 05:14:49 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:12.514 05:14:49 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:12.514 05:14:49 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:12.514 05:14:49 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:12.514 05:14:49 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:12.514 05:14:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:12.514 05:14:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:12.514 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:12.514 05:14:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:12.514 05:14:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:12.514 05:14:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:12.514 05:14:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:12.514 05:14:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:12.514 05:14:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:12.514 05:14:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:12.514 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:12.514 05:14:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:12.514 05:14:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:12.514 05:14:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:12.514 05:14:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:12.514 05:14:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:12.514 05:14:49 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:12.514 05:14:49 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:12.514 05:14:49 -- nvmf/common.sh@372 -- # [[ tcp == 
rdma ]] 00:19:12.514 05:14:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:12.514 05:14:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:12.514 05:14:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:12.514 05:14:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:12.514 05:14:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:12.514 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:12.514 05:14:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:12.515 05:14:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:12.515 05:14:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:12.515 05:14:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:12.515 05:14:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:12.515 05:14:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:12.515 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:12.515 05:14:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:12.515 05:14:49 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:12.515 05:14:49 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:12.515 05:14:49 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:12.515 05:14:49 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:12.515 05:14:49 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:12.515 05:14:49 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:12.515 05:14:49 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:12.515 05:14:49 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:12.515 05:14:49 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:12.515 05:14:49 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:12.515 05:14:49 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:12.515 05:14:49 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
00:19:12.515 05:14:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:12.515 05:14:49 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:12.515 05:14:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:12.515 05:14:49 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:12.515 05:14:49 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:12.515 05:14:49 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:12.515 05:14:49 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:12.515 05:14:49 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:12.515 05:14:49 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:12.515 05:14:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:12.515 05:14:49 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:12.515 05:14:49 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:12.515 05:14:49 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:12.515 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:12.515 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:19:12.515 00:19:12.515 --- 10.0.0.2 ping statistics --- 00:19:12.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:12.515 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:19:12.515 05:14:49 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:12.515 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:12.515 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:19:12.515 00:19:12.515 --- 10.0.0.1 ping statistics --- 00:19:12.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:12.515 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:19:12.515 05:14:49 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:12.515 05:14:49 -- nvmf/common.sh@411 -- # return 0 00:19:12.515 05:14:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:12.515 05:14:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:12.515 05:14:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:12.515 05:14:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:12.515 05:14:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:12.515 05:14:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:12.515 05:14:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:12.515 05:14:49 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:19:12.515 05:14:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:12.515 05:14:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:12.515 05:14:49 -- common/autotest_common.sh@10 -- # set +x 00:19:12.515 05:14:49 -- nvmf/common.sh@470 -- # nvmfpid=1894098 00:19:12.515 05:14:49 -- nvmf/common.sh@471 -- # waitforlisten 1894098 00:19:12.515 05:14:49 -- common/autotest_common.sh@817 -- # '[' -z 1894098 ']' 00:19:12.515 05:14:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:12.515 05:14:49 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:12.515 05:14:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:12.515 05:14:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:12.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:12.515 05:14:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:12.515 05:14:49 -- common/autotest_common.sh@10 -- # set +x 00:19:12.774 [2024-04-24 05:14:49.827720] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:19:12.774 [2024-04-24 05:14:49.827802] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:12.774 EAL: No free 2048 kB hugepages reported on node 1 00:19:12.774 [2024-04-24 05:14:49.872129] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:12.774 [2024-04-24 05:14:49.901156] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:12.774 [2024-04-24 05:14:49.987948] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:12.774 [2024-04-24 05:14:49.988011] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:12.774 [2024-04-24 05:14:49.988044] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:12.774 [2024-04-24 05:14:49.988056] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:12.774 [2024-04-24 05:14:49.988066] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:12.774 [2024-04-24 05:14:49.988135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:12.774 [2024-04-24 05:14:49.988229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:12.774 [2024-04-24 05:14:49.988282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.774 [2024-04-24 05:14:49.988280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:13.034 05:14:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:13.034 05:14:50 -- common/autotest_common.sh@850 -- # return 0 00:19:13.034 05:14:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:13.034 05:14:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:13.034 05:14:50 -- common/autotest_common.sh@10 -- # set +x 00:19:13.034 05:14:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:13.034 05:14:50 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:13.034 05:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.034 05:14:50 -- common/autotest_common.sh@10 -- # set +x 00:19:13.034 [2024-04-24 05:14:50.148700] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:13.034 05:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.034 05:14:50 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:13.034 05:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.034 05:14:50 -- common/autotest_common.sh@10 -- # set +x 00:19:13.034 Malloc0 00:19:13.034 05:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.034 05:14:50 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:13.034 05:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.034 05:14:50 -- common/autotest_common.sh@10 -- # set +x 00:19:13.034 05:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 
]] 00:19:13.034 05:14:50 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:13.034 05:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.034 05:14:50 -- common/autotest_common.sh@10 -- # set +x 00:19:13.034 05:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.034 05:14:50 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:13.034 05:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.034 05:14:50 -- common/autotest_common.sh@10 -- # set +x 00:19:13.034 [2024-04-24 05:14:50.202845] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:13.034 05:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.034 05:14:50 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:19:13.034 test case1: single bdev can't be used in multiple subsystems 00:19:13.034 05:14:50 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:19:13.034 05:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.034 05:14:50 -- common/autotest_common.sh@10 -- # set +x 00:19:13.034 05:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.034 05:14:50 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:13.034 05:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.034 05:14:50 -- common/autotest_common.sh@10 -- # set +x 00:19:13.034 05:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.034 05:14:50 -- target/nmic.sh@28 -- # nmic_status=0 00:19:13.034 05:14:50 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:19:13.034 05:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.034 05:14:50 -- common/autotest_common.sh@10 
-- # set +x 00:19:13.034 [2024-04-24 05:14:50.226695] bdev.c:7988:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:19:13.034 [2024-04-24 05:14:50.226726] subsystem.c:1934:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:19:13.034 [2024-04-24 05:14:50.226748] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.034 request: 00:19:13.034 { 00:19:13.034 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:19:13.034 "namespace": { 00:19:13.034 "bdev_name": "Malloc0", 00:19:13.034 "no_auto_visible": false 00:19:13.034 }, 00:19:13.034 "method": "nvmf_subsystem_add_ns", 00:19:13.034 "req_id": 1 00:19:13.034 } 00:19:13.034 Got JSON-RPC error response 00:19:13.034 response: 00:19:13.034 { 00:19:13.034 "code": -32602, 00:19:13.034 "message": "Invalid parameters" 00:19:13.034 } 00:19:13.034 05:14:50 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:19:13.034 05:14:50 -- target/nmic.sh@29 -- # nmic_status=1 00:19:13.034 05:14:50 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:19:13.034 05:14:50 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:19:13.034 Adding namespace failed - expected result. 
00:19:13.034 05:14:50 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:19:13.034 test case2: host connect to nvmf target in multiple paths 00:19:13.035 05:14:50 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:13.035 05:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.035 05:14:50 -- common/autotest_common.sh@10 -- # set +x 00:19:13.035 [2024-04-24 05:14:50.238833] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:13.035 05:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.035 05:14:50 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:13.972 05:14:50 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:19:14.541 05:14:51 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:19:14.541 05:14:51 -- common/autotest_common.sh@1184 -- # local i=0 00:19:14.541 05:14:51 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:19:14.541 05:14:51 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:19:14.541 05:14:51 -- common/autotest_common.sh@1191 -- # sleep 2 00:19:16.449 05:14:53 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:19:16.449 05:14:53 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:19:16.449 05:14:53 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:19:16.449 05:14:53 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:19:16.449 05:14:53 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 
00:19:16.449 05:14:53 -- common/autotest_common.sh@1194 -- # return 0 00:19:16.449 05:14:53 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:16.449 [global] 00:19:16.449 thread=1 00:19:16.449 invalidate=1 00:19:16.449 rw=write 00:19:16.449 time_based=1 00:19:16.449 runtime=1 00:19:16.449 ioengine=libaio 00:19:16.449 direct=1 00:19:16.449 bs=4096 00:19:16.449 iodepth=1 00:19:16.449 norandommap=0 00:19:16.449 numjobs=1 00:19:16.449 00:19:16.449 verify_dump=1 00:19:16.449 verify_backlog=512 00:19:16.449 verify_state_save=0 00:19:16.449 do_verify=1 00:19:16.449 verify=crc32c-intel 00:19:16.449 [job0] 00:19:16.449 filename=/dev/nvme0n1 00:19:16.449 Could not set queue depth (nvme0n1) 00:19:16.708 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:16.708 fio-3.35 00:19:16.708 Starting 1 thread 00:19:17.641 00:19:17.641 job0: (groupid=0, jobs=1): err= 0: pid=1894728: Wed Apr 24 05:14:54 2024 00:19:17.641 read: IOPS=21, BW=86.7KiB/s (88.8kB/s)(88.0KiB/1015msec) 00:19:17.641 slat (nsec): min=8782, max=34224, avg=18840.45, stdev=8614.03 00:19:17.641 clat (usec): min=40735, max=41068, avg=40966.23, stdev=69.74 00:19:17.641 lat (usec): min=40744, max=41086, avg=40985.07, stdev=70.31 00:19:17.641 clat percentiles (usec): 00:19:17.641 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:19:17.641 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:17.641 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:17.641 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:17.641 | 99.99th=[41157] 00:19:17.641 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:19:17.641 slat (nsec): min=7690, max=55792, avg=16524.87, stdev=8437.04 00:19:17.641 clat (usec): min=166, max=316, avg=201.40, stdev=26.65 00:19:17.641 lat (usec): 
min=175, max=347, avg=217.93, stdev=30.64 00:19:17.641 clat percentiles (usec): 00:19:17.641 | 1.00th=[ 169], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 182], 00:19:17.641 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 198], 00:19:17.641 | 70.00th=[ 208], 80.00th=[ 223], 90.00th=[ 241], 95.00th=[ 253], 00:19:17.641 | 99.00th=[ 289], 99.50th=[ 293], 99.90th=[ 318], 99.95th=[ 318], 00:19:17.641 | 99.99th=[ 318] 00:19:17.641 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:19:17.641 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:17.641 lat (usec) : 250=90.45%, 500=5.43% 00:19:17.641 lat (msec) : 50=4.12% 00:19:17.641 cpu : usr=0.49%, sys=0.69%, ctx=535, majf=0, minf=2 00:19:17.641 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:17.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.641 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.641 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.641 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:17.641 00:19:17.641 Run status group 0 (all jobs): 00:19:17.641 READ: bw=86.7KiB/s (88.8kB/s), 86.7KiB/s-86.7KiB/s (88.8kB/s-88.8kB/s), io=88.0KiB (90.1kB), run=1015-1015msec 00:19:17.641 WRITE: bw=2018KiB/s (2066kB/s), 2018KiB/s-2018KiB/s (2066kB/s-2066kB/s), io=2048KiB (2097kB), run=1015-1015msec 00:19:17.641 00:19:17.641 Disk stats (read/write): 00:19:17.641 nvme0n1: ios=71/512, merge=0/0, ticks=952/101, in_queue=1053, util=98.70% 00:19:17.641 05:14:54 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:17.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:19:17.899 05:14:55 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:17.899 05:14:55 -- common/autotest_common.sh@1205 -- # local i=0 00:19:17.899 05:14:55 -- common/autotest_common.sh@1206 -- # lsblk -o 
NAME,SERIAL 00:19:17.899 05:14:55 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:17.899 05:14:55 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:19:17.899 05:14:55 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:17.899 05:14:55 -- common/autotest_common.sh@1217 -- # return 0 00:19:17.899 05:14:55 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:17.899 05:14:55 -- target/nmic.sh@53 -- # nvmftestfini 00:19:17.899 05:14:55 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:17.899 05:14:55 -- nvmf/common.sh@117 -- # sync 00:19:17.899 05:14:55 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:17.899 05:14:55 -- nvmf/common.sh@120 -- # set +e 00:19:17.899 05:14:55 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:17.899 05:14:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:17.899 rmmod nvme_tcp 00:19:17.899 rmmod nvme_fabrics 00:19:17.899 rmmod nvme_keyring 00:19:17.899 05:14:55 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:17.899 05:14:55 -- nvmf/common.sh@124 -- # set -e 00:19:17.899 05:14:55 -- nvmf/common.sh@125 -- # return 0 00:19:17.899 05:14:55 -- nvmf/common.sh@478 -- # '[' -n 1894098 ']' 00:19:17.899 05:14:55 -- nvmf/common.sh@479 -- # killprocess 1894098 00:19:17.899 05:14:55 -- common/autotest_common.sh@936 -- # '[' -z 1894098 ']' 00:19:17.899 05:14:55 -- common/autotest_common.sh@940 -- # kill -0 1894098 00:19:17.899 05:14:55 -- common/autotest_common.sh@941 -- # uname 00:19:17.899 05:14:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:17.899 05:14:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1894098 00:19:17.899 05:14:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:17.899 05:14:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:17.899 05:14:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1894098' 00:19:17.899 killing process with pid 1894098 
00:19:17.899 05:14:55 -- common/autotest_common.sh@955 -- # kill 1894098 00:19:17.899 05:14:55 -- common/autotest_common.sh@960 -- # wait 1894098 00:19:18.159 05:14:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:18.159 05:14:55 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:18.159 05:14:55 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:18.159 05:14:55 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:18.159 05:14:55 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:18.159 05:14:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:18.159 05:14:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:18.159 05:14:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:20.708 05:14:57 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:20.708 00:19:20.708 real 0m9.827s 00:19:20.708 user 0m22.353s 00:19:20.708 sys 0m2.295s 00:19:20.708 05:14:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:20.708 05:14:57 -- common/autotest_common.sh@10 -- # set +x 00:19:20.708 ************************************ 00:19:20.708 END TEST nvmf_nmic 00:19:20.708 ************************************ 00:19:20.708 05:14:57 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:20.708 05:14:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:20.708 05:14:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:20.708 05:14:57 -- common/autotest_common.sh@10 -- # set +x 00:19:20.708 ************************************ 00:19:20.708 START TEST nvmf_fio_target 00:19:20.708 ************************************ 00:19:20.708 05:14:57 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:20.708 * Looking for test storage... 
00:19:20.708 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:20.708 05:14:57 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:20.708 05:14:57 -- nvmf/common.sh@7 -- # uname -s 00:19:20.708 05:14:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:20.708 05:14:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:20.708 05:14:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:20.708 05:14:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:20.708 05:14:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:20.708 05:14:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:20.708 05:14:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:20.708 05:14:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:20.708 05:14:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:20.708 05:14:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:20.708 05:14:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:20.708 05:14:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:20.708 05:14:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:20.708 05:14:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:20.708 05:14:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:20.708 05:14:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:20.708 05:14:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:20.708 05:14:57 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:20.708 05:14:57 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:20.708 05:14:57 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:20.708 05:14:57 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.708 05:14:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.708 05:14:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.708 05:14:57 -- paths/export.sh@5 -- # export PATH 00:19:20.708 05:14:57 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.708 05:14:57 -- nvmf/common.sh@47 -- # : 0 00:19:20.708 05:14:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:20.708 05:14:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:20.709 05:14:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:20.709 05:14:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:20.709 05:14:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:20.709 05:14:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:20.709 05:14:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:20.709 05:14:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:20.709 05:14:57 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:20.709 05:14:57 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:20.709 05:14:57 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:20.709 05:14:57 -- target/fio.sh@16 -- # nvmftestinit 00:19:20.709 05:14:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:20.709 05:14:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:20.709 05:14:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:20.709 05:14:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:20.709 05:14:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:20.709 05:14:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:20.709 05:14:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:19:20.709 05:14:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:20.709 05:14:57 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:20.709 05:14:57 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:20.709 05:14:57 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:20.709 05:14:57 -- common/autotest_common.sh@10 -- # set +x 00:19:22.610 05:14:59 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:22.610 05:14:59 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:22.610 05:14:59 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:22.610 05:14:59 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:22.610 05:14:59 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:22.610 05:14:59 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:22.610 05:14:59 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:22.610 05:14:59 -- nvmf/common.sh@295 -- # net_devs=() 00:19:22.610 05:14:59 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:22.610 05:14:59 -- nvmf/common.sh@296 -- # e810=() 00:19:22.610 05:14:59 -- nvmf/common.sh@296 -- # local -ga e810 00:19:22.610 05:14:59 -- nvmf/common.sh@297 -- # x722=() 00:19:22.610 05:14:59 -- nvmf/common.sh@297 -- # local -ga x722 00:19:22.610 05:14:59 -- nvmf/common.sh@298 -- # mlx=() 00:19:22.610 05:14:59 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:22.610 05:14:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:22.610 05:14:59 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:22.610 05:14:59 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:22.610 05:14:59 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:22.610 05:14:59 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:22.610 05:14:59 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:22.610 05:14:59 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:22.610 05:14:59 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:22.610 05:14:59 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:22.610 05:14:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:22.610 05:14:59 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:22.610 05:14:59 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:22.610 05:14:59 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:22.610 05:14:59 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:22.610 05:14:59 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:22.610 05:14:59 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:22.610 05:14:59 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:22.610 05:14:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:22.610 05:14:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:22.610 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:22.610 05:14:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:22.610 05:14:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:22.610 05:14:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:22.610 05:14:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:22.610 05:14:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:22.610 05:14:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:22.610 05:14:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:22.610 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:22.610 05:14:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:22.610 05:14:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:22.610 05:14:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:22.610 05:14:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:22.610 05:14:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:22.610 05:14:59 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:22.610 
05:14:59 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:22.610 05:14:59 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:22.610 05:14:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:22.611 05:14:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:22.611 05:14:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:22.611 05:14:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:22.611 05:14:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:22.611 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:22.611 05:14:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:22.611 05:14:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:22.611 05:14:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:22.611 05:14:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:22.611 05:14:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:22.611 05:14:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:22.611 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:22.611 05:14:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:22.611 05:14:59 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:22.611 05:14:59 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:22.611 05:14:59 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:22.611 05:14:59 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:22.611 05:14:59 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:22.611 05:14:59 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:22.611 05:14:59 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:22.611 05:14:59 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:22.611 05:14:59 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:22.611 05:14:59 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:22.611 05:14:59 -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:22.611 05:14:59 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:22.611 05:14:59 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:22.611 05:14:59 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:22.611 05:14:59 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:22.611 05:14:59 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:22.611 05:14:59 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:22.611 05:14:59 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:22.611 05:14:59 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:22.611 05:14:59 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:22.611 05:14:59 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:22.611 05:14:59 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:22.611 05:14:59 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:22.611 05:14:59 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:22.611 05:14:59 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:22.611 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:22.611 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:19:22.611 00:19:22.611 --- 10.0.0.2 ping statistics --- 00:19:22.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.611 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:19:22.611 05:14:59 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:22.611 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:22.611 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:19:22.611 00:19:22.611 --- 10.0.0.1 ping statistics --- 00:19:22.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.611 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:19:22.611 05:14:59 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:22.611 05:14:59 -- nvmf/common.sh@411 -- # return 0 00:19:22.611 05:14:59 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:22.611 05:14:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:22.611 05:14:59 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:22.611 05:14:59 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:22.611 05:14:59 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:22.611 05:14:59 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:22.611 05:14:59 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:22.611 05:14:59 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:19:22.611 05:14:59 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:22.611 05:14:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:22.611 05:14:59 -- common/autotest_common.sh@10 -- # set +x 00:19:22.611 05:14:59 -- nvmf/common.sh@470 -- # nvmfpid=1896814 00:19:22.611 05:14:59 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:22.611 05:14:59 -- nvmf/common.sh@471 -- # waitforlisten 1896814 00:19:22.611 05:14:59 -- common/autotest_common.sh@817 -- # '[' -z 1896814 ']' 00:19:22.611 05:14:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.611 05:14:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:22.611 05:14:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:22.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:22.611 05:14:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:22.611 05:14:59 -- common/autotest_common.sh@10 -- # set +x 00:19:22.611 [2024-04-24 05:14:59.851860] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:19:22.611 [2024-04-24 05:14:59.851952] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:22.869 EAL: No free 2048 kB hugepages reported on node 1 00:19:22.869 [2024-04-24 05:14:59.893434] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:22.869 [2024-04-24 05:14:59.925102] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:22.869 [2024-04-24 05:15:00.019696] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:22.869 [2024-04-24 05:15:00.019755] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:22.869 [2024-04-24 05:15:00.019774] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:22.869 [2024-04-24 05:15:00.019792] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:22.869 [2024-04-24 05:15:00.019806] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
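The nvmf/common.sh steps logged above (flushing the two interfaces, creating the cvl_0_0_ns_spdk namespace, assigning 10.0.0.1/10.0.0.2, opening TCP port 4420, and cross-pinging) can be summarized as a dry-run sketch. The `run` echo wrapper is an illustrative addition so the sequence reads and executes without root privileges; interface names, the namespace name, and addresses are taken directly from the log.

```shell
#!/bin/sh
# Dry-run sketch of the target/initiator network setup shown in the log above.
# "run" only echoes each command, so no root access or cvl_* hardware is needed.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
run ip -4 addr flush cvl_0_0
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                            # target-side interface moves into the netns
run ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator address (host side)
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target address (inside netns)
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP discovery/IO port
run ping -c 1 10.0.0.2                                         # initiator -> target reachability check
run ip netns exec "$NS" ping -c 1 10.0.0.1                     # target -> initiator reachability check
```

This is why nvmf_tgt is then launched under `ip netns exec cvl_0_0_ns_spdk`: the target listens on 10.0.0.2:4420 inside the namespace, while the host-side initiator reaches it over cvl_0_1.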
00:19:22.869 [2024-04-24 05:15:00.019916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:22.869 [2024-04-24 05:15:00.019947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:22.869 [2024-04-24 05:15:00.020001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:22.869 [2024-04-24 05:15:00.020004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.127 05:15:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:23.127 05:15:00 -- common/autotest_common.sh@850 -- # return 0 00:19:23.127 05:15:00 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:23.127 05:15:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:23.127 05:15:00 -- common/autotest_common.sh@10 -- # set +x 00:19:23.127 05:15:00 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:23.127 05:15:00 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:23.384 [2024-04-24 05:15:00.414077] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:23.384 05:15:00 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:23.641 05:15:00 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:19:23.641 05:15:00 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:23.899 05:15:00 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:19:23.899 05:15:00 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:24.156 05:15:01 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:19:24.156 05:15:01 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:24.414 05:15:01 -- target/fio.sh@25 -- # 
raid_malloc_bdevs+=Malloc3 00:19:24.414 05:15:01 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:19:24.671 05:15:01 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:24.928 05:15:02 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:19:24.928 05:15:02 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:25.186 05:15:02 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:25.186 05:15:02 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:25.445 05:15:02 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:25.445 05:15:02 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:25.703 05:15:02 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:25.960 05:15:03 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:25.960 05:15:03 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:26.217 05:15:03 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:26.217 05:15:03 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:26.474 05:15:03 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:26.732 [2024-04-24 05:15:03.784302] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:26.732 05:15:03 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:26.989 05:15:04 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:27.246 05:15:04 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:27.812 05:15:04 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:27.812 05:15:04 -- common/autotest_common.sh@1184 -- # local i=0 00:19:27.812 05:15:04 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:19:27.812 05:15:04 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:19:27.812 05:15:04 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:19:27.812 05:15:04 -- common/autotest_common.sh@1191 -- # sleep 2 00:19:29.711 05:15:06 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:19:29.711 05:15:06 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:19:29.711 05:15:06 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:19:29.711 05:15:06 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:19:29.711 05:15:06 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:19:29.711 05:15:06 -- common/autotest_common.sh@1194 -- # return 0 00:19:29.711 05:15:06 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:29.968 [global] 00:19:29.968 thread=1 00:19:29.968 invalidate=1 00:19:29.968 rw=write 00:19:29.968 time_based=1 00:19:29.968 runtime=1 00:19:29.968 ioengine=libaio 00:19:29.968 direct=1 00:19:29.968 bs=4096 00:19:29.968 
iodepth=1 00:19:29.968 norandommap=0 00:19:29.968 numjobs=1 00:19:29.968 00:19:29.968 verify_dump=1 00:19:29.968 verify_backlog=512 00:19:29.968 verify_state_save=0 00:19:29.968 do_verify=1 00:19:29.968 verify=crc32c-intel 00:19:29.968 [job0] 00:19:29.968 filename=/dev/nvme0n1 00:19:29.968 [job1] 00:19:29.968 filename=/dev/nvme0n2 00:19:29.968 [job2] 00:19:29.968 filename=/dev/nvme0n3 00:19:29.968 [job3] 00:19:29.968 filename=/dev/nvme0n4 00:19:29.968 Could not set queue depth (nvme0n1) 00:19:29.968 Could not set queue depth (nvme0n2) 00:19:29.968 Could not set queue depth (nvme0n3) 00:19:29.968 Could not set queue depth (nvme0n4) 00:19:29.968 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:29.968 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:29.968 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:29.968 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:29.968 fio-3.35 00:19:29.968 Starting 4 threads 00:19:31.341 00:19:31.341 job0: (groupid=0, jobs=1): err= 0: pid=1898116: Wed Apr 24 05:15:08 2024 00:19:31.341 read: IOPS=108, BW=433KiB/s (443kB/s)(448KiB/1035msec) 00:19:31.341 slat (nsec): min=5403, max=35176, avg=11247.28, stdev=5720.43 00:19:31.341 clat (usec): min=321, max=42314, avg=7922.40, stdev=15749.00 00:19:31.341 lat (usec): min=345, max=42328, avg=7933.64, stdev=15751.75 00:19:31.341 clat percentiles (usec): 00:19:31.341 | 1.00th=[ 330], 5.00th=[ 359], 10.00th=[ 420], 20.00th=[ 474], 00:19:31.341 | 30.00th=[ 486], 40.00th=[ 494], 50.00th=[ 506], 60.00th=[ 529], 00:19:31.341 | 70.00th=[ 545], 80.00th=[ 693], 90.00th=[41157], 95.00th=[42206], 00:19:31.341 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:31.341 | 99.99th=[42206] 00:19:31.341 write: IOPS=494, BW=1979KiB/s 
(2026kB/s)(2048KiB/1035msec); 0 zone resets 00:19:31.341 slat (nsec): min=6514, max=34718, avg=8180.56, stdev=2375.97 00:19:31.341 clat (usec): min=195, max=529, avg=267.92, stdev=30.89 00:19:31.341 lat (usec): min=202, max=555, avg=276.10, stdev=31.18 00:19:31.341 clat percentiles (usec): 00:19:31.341 | 1.00th=[ 215], 5.00th=[ 231], 10.00th=[ 241], 20.00th=[ 249], 00:19:31.341 | 30.00th=[ 255], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:19:31.341 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 297], 95.00th=[ 310], 00:19:31.341 | 99.00th=[ 375], 99.50th=[ 429], 99.90th=[ 529], 99.95th=[ 529], 00:19:31.341 | 99.99th=[ 529] 00:19:31.341 bw ( KiB/s): min= 4096, max= 4096, per=27.67%, avg=4096.00, stdev= 0.00, samples=1 00:19:31.341 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:31.341 lat (usec) : 250=17.15%, 500=72.76%, 750=6.57% 00:19:31.341 lat (msec) : 4=0.16%, 20=0.16%, 50=3.21% 00:19:31.341 cpu : usr=0.00%, sys=0.77%, ctx=625, majf=0, minf=1 00:19:31.341 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:31.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.341 issued rwts: total=112,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.341 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:31.341 job1: (groupid=0, jobs=1): err= 0: pid=1898135: Wed Apr 24 05:15:08 2024 00:19:31.341 read: IOPS=1079, BW=4320KiB/s (4423kB/s)(4324KiB/1001msec) 00:19:31.342 slat (nsec): min=6409, max=26757, avg=7323.40, stdev=1286.36 00:19:31.342 clat (usec): min=279, max=41013, avg=578.91, stdev=3255.40 00:19:31.342 lat (usec): min=286, max=41027, avg=586.23, stdev=3255.76 00:19:31.342 clat percentiles (usec): 00:19:31.342 | 1.00th=[ 289], 5.00th=[ 293], 10.00th=[ 297], 20.00th=[ 306], 00:19:31.342 | 30.00th=[ 310], 40.00th=[ 310], 50.00th=[ 314], 60.00th=[ 318], 00:19:31.342 | 70.00th=[ 322], 
80.00th=[ 326], 90.00th=[ 334], 95.00th=[ 347], 00:19:31.342 | 99.00th=[ 429], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:19:31.342 | 99.99th=[41157] 00:19:31.342 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:19:31.342 slat (nsec): min=8618, max=99915, avg=9781.79, stdev=2637.83 00:19:31.342 clat (usec): min=177, max=1029, avg=222.69, stdev=42.66 00:19:31.342 lat (usec): min=187, max=1038, avg=232.47, stdev=43.06 00:19:31.342 clat percentiles (usec): 00:19:31.342 | 1.00th=[ 186], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 198], 00:19:31.342 | 30.00th=[ 202], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 215], 00:19:31.342 | 70.00th=[ 223], 80.00th=[ 237], 90.00th=[ 293], 95.00th=[ 318], 00:19:31.342 | 99.00th=[ 334], 99.50th=[ 338], 99.90th=[ 396], 99.95th=[ 1029], 00:19:31.342 | 99.99th=[ 1029] 00:19:31.342 bw ( KiB/s): min= 8192, max= 8192, per=55.33%, avg=8192.00, stdev= 0.00, samples=1 00:19:31.342 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:19:31.342 lat (usec) : 250=49.90%, 500=49.79% 00:19:31.342 lat (msec) : 2=0.04%, 50=0.27% 00:19:31.342 cpu : usr=1.70%, sys=3.10%, ctx=2619, majf=0, minf=1 00:19:31.342 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:31.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.342 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.342 issued rwts: total=1081,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.342 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:31.342 job2: (groupid=0, jobs=1): err= 0: pid=1898150: Wed Apr 24 05:15:08 2024 00:19:31.342 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:19:31.342 slat (nsec): min=5627, max=73664, avg=19421.73, stdev=9452.12 00:19:31.342 clat (usec): min=313, max=41032, avg=623.14, stdev=2620.77 00:19:31.342 lat (usec): min=329, max=41050, avg=642.56, stdev=2620.54 00:19:31.342 clat percentiles (usec): 
00:19:31.342 | 1.00th=[ 330], 5.00th=[ 359], 10.00th=[ 379], 20.00th=[ 400], 00:19:31.342 | 30.00th=[ 412], 40.00th=[ 424], 50.00th=[ 441], 60.00th=[ 457], 00:19:31.342 | 70.00th=[ 474], 80.00th=[ 494], 90.00th=[ 506], 95.00th=[ 523], 00:19:31.342 | 99.00th=[ 652], 99.50th=[ 898], 99.90th=[41157], 99.95th=[41157], 00:19:31.342 | 99.99th=[41157] 00:19:31.342 write: IOPS=1280, BW=5123KiB/s (5246kB/s)(5128KiB/1001msec); 0 zone resets 00:19:31.342 slat (nsec): min=6002, max=57644, avg=10036.45, stdev=5023.98 00:19:31.342 clat (usec): min=191, max=942, avg=247.02, stdev=44.54 00:19:31.342 lat (usec): min=200, max=951, avg=257.06, stdev=45.37 00:19:31.342 clat percentiles (usec): 00:19:31.342 | 1.00th=[ 194], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 215], 00:19:31.342 | 30.00th=[ 223], 40.00th=[ 231], 50.00th=[ 243], 60.00th=[ 253], 00:19:31.342 | 70.00th=[ 262], 80.00th=[ 273], 90.00th=[ 289], 95.00th=[ 306], 00:19:31.342 | 99.00th=[ 379], 99.50th=[ 400], 99.90th=[ 668], 99.95th=[ 947], 00:19:31.342 | 99.99th=[ 947] 00:19:31.342 bw ( KiB/s): min= 4096, max= 4096, per=27.67%, avg=4096.00, stdev= 0.00, samples=1 00:19:31.342 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:31.342 lat (usec) : 250=32.09%, 500=61.49%, 750=6.03%, 1000=0.17% 00:19:31.342 lat (msec) : 50=0.22% 00:19:31.342 cpu : usr=1.60%, sys=3.50%, ctx=2307, majf=0, minf=2 00:19:31.342 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:31.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.342 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.342 issued rwts: total=1024,1282,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.342 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:31.342 job3: (groupid=0, jobs=1): err= 0: pid=1898151: Wed Apr 24 05:15:08 2024 00:19:31.342 read: IOPS=21, BW=84.8KiB/s (86.8kB/s)(88.0KiB/1038msec) 00:19:31.342 slat (nsec): min=7830, max=35322, avg=17426.00, 
stdev=6345.90 00:19:31.342 clat (usec): min=40862, max=44044, avg=41403.44, stdev=745.56 00:19:31.342 lat (usec): min=40898, max=44060, avg=41420.86, stdev=744.20 00:19:31.342 clat percentiles (usec): 00:19:31.342 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:19:31.342 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:31.342 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:19:31.342 | 99.00th=[44303], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 00:19:31.342 | 99.99th=[44303] 00:19:31.342 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:19:31.342 slat (nsec): min=7282, max=39750, avg=8884.12, stdev=2164.53 00:19:31.342 clat (usec): min=181, max=506, avg=235.04, stdev=23.00 00:19:31.342 lat (usec): min=190, max=522, avg=243.93, stdev=23.35 00:19:31.342 clat percentiles (usec): 00:19:31.342 | 1.00th=[ 190], 5.00th=[ 212], 10.00th=[ 219], 20.00th=[ 223], 00:19:31.342 | 30.00th=[ 227], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 235], 00:19:31.342 | 70.00th=[ 239], 80.00th=[ 245], 90.00th=[ 255], 95.00th=[ 265], 00:19:31.342 | 99.00th=[ 289], 99.50th=[ 363], 99.90th=[ 506], 99.95th=[ 506], 00:19:31.342 | 99.99th=[ 506] 00:19:31.342 bw ( KiB/s): min= 4096, max= 4096, per=27.67%, avg=4096.00, stdev= 0.00, samples=1 00:19:31.342 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:31.342 lat (usec) : 250=82.58%, 500=13.11%, 750=0.19% 00:19:31.342 lat (msec) : 50=4.12% 00:19:31.342 cpu : usr=0.68%, sys=0.19%, ctx=534, majf=0, minf=1 00:19:31.342 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:31.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.342 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.342 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.342 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:31.342 
00:19:31.342 Run status group 0 (all jobs): 00:19:31.342 READ: bw=8628KiB/s (8835kB/s), 84.8KiB/s-4320KiB/s (86.8kB/s-4423kB/s), io=8956KiB (9171kB), run=1001-1038msec 00:19:31.342 WRITE: bw=14.5MiB/s (15.2MB/s), 1973KiB/s-6138KiB/s (2020kB/s-6285kB/s), io=15.0MiB (15.7MB), run=1001-1038msec 00:19:31.342 00:19:31.342 Disk stats (read/write): 00:19:31.342 nvme0n1: ios=156/512, merge=0/0, ticks=1444/135, in_queue=1579, util=84.67% 00:19:31.342 nvme0n2: ios=988/1024, merge=0/0, ticks=1021/235, in_queue=1256, util=88.59% 00:19:31.342 nvme0n3: ios=889/1024, merge=0/0, ticks=1346/242, in_queue=1588, util=92.65% 00:19:31.342 nvme0n4: ios=74/512, merge=0/0, ticks=784/119, in_queue=903, util=95.76% 00:19:31.342 05:15:08 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:31.342 [global] 00:19:31.342 thread=1 00:19:31.342 invalidate=1 00:19:31.342 rw=randwrite 00:19:31.342 time_based=1 00:19:31.342 runtime=1 00:19:31.342 ioengine=libaio 00:19:31.342 direct=1 00:19:31.342 bs=4096 00:19:31.342 iodepth=1 00:19:31.342 norandommap=0 00:19:31.342 numjobs=1 00:19:31.342 00:19:31.342 verify_dump=1 00:19:31.342 verify_backlog=512 00:19:31.342 verify_state_save=0 00:19:31.342 do_verify=1 00:19:31.342 verify=crc32c-intel 00:19:31.342 [job0] 00:19:31.342 filename=/dev/nvme0n1 00:19:31.342 [job1] 00:19:31.342 filename=/dev/nvme0n2 00:19:31.342 [job2] 00:19:31.342 filename=/dev/nvme0n3 00:19:31.342 [job3] 00:19:31.342 filename=/dev/nvme0n4 00:19:31.342 Could not set queue depth (nvme0n1) 00:19:31.342 Could not set queue depth (nvme0n2) 00:19:31.342 Could not set queue depth (nvme0n3) 00:19:31.342 Could not set queue depth (nvme0n4) 00:19:31.601 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:31.601 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:31.601 job2: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:31.601 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:31.601 fio-3.35 00:19:31.601 Starting 4 threads 00:19:32.975 00:19:32.975 job0: (groupid=0, jobs=1): err= 0: pid=1898709: Wed Apr 24 05:15:09 2024 00:19:32.975 read: IOPS=241, BW=964KiB/s (987kB/s)(1000KiB/1037msec) 00:19:32.975 slat (nsec): min=5376, max=45954, avg=14743.97, stdev=7605.56 00:19:32.975 clat (usec): min=306, max=41012, avg=3611.28, stdev=10960.88 00:19:32.975 lat (usec): min=322, max=41029, avg=3626.02, stdev=10961.03 00:19:32.975 clat percentiles (usec): 00:19:32.975 | 1.00th=[ 310], 5.00th=[ 314], 10.00th=[ 318], 20.00th=[ 326], 00:19:32.975 | 30.00th=[ 334], 40.00th=[ 359], 50.00th=[ 388], 60.00th=[ 441], 00:19:32.975 | 70.00th=[ 449], 80.00th=[ 457], 90.00th=[ 482], 95.00th=[41157], 00:19:32.975 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:32.975 | 99.99th=[41157] 00:19:32.975 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:19:32.975 slat (nsec): min=5825, max=39236, avg=8086.18, stdev=3656.15 00:19:32.975 clat (usec): min=180, max=405, avg=241.31, stdev=38.94 00:19:32.975 lat (usec): min=186, max=431, avg=249.40, stdev=39.66 00:19:32.975 clat percentiles (usec): 00:19:32.975 | 1.00th=[ 190], 5.00th=[ 202], 10.00th=[ 208], 20.00th=[ 215], 00:19:32.975 | 30.00th=[ 221], 40.00th=[ 227], 50.00th=[ 233], 60.00th=[ 241], 00:19:32.975 | 70.00th=[ 247], 80.00th=[ 255], 90.00th=[ 273], 95.00th=[ 334], 00:19:32.975 | 99.00th=[ 388], 99.50th=[ 392], 99.90th=[ 408], 99.95th=[ 408], 00:19:32.975 | 99.99th=[ 408] 00:19:32.975 bw ( KiB/s): min= 4096, max= 4096, per=31.08%, avg=4096.00, stdev= 0.00, samples=1 00:19:32.975 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:32.975 lat (usec) : 250=50.66%, 500=46.59%, 750=0.13% 00:19:32.975 lat (msec) : 50=2.62% 
00:19:32.975 cpu : usr=0.39%, sys=0.77%, ctx=762, majf=0, minf=2 00:19:32.975 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:32.975 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.975 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.975 issued rwts: total=250,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:32.975 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:32.975 job1: (groupid=0, jobs=1): err= 0: pid=1898712: Wed Apr 24 05:15:09 2024 00:19:32.975 read: IOPS=20, BW=81.4KiB/s (83.3kB/s)(84.0KiB/1032msec) 00:19:32.975 slat (nsec): min=8648, max=14536, avg=13628.62, stdev=1166.30 00:19:32.975 clat (usec): min=40740, max=41995, avg=41076.50, stdev=302.12 00:19:32.975 lat (usec): min=40754, max=42009, avg=41090.13, stdev=302.20 00:19:32.975 clat percentiles (usec): 00:19:32.975 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:19:32.975 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:32.975 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:19:32.975 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:32.975 | 99.99th=[42206] 00:19:32.975 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:19:32.975 slat (usec): min=8, max=22892, avg=54.08, stdev=1011.29 00:19:32.975 clat (usec): min=168, max=2090, avg=268.02, stdev=105.00 00:19:32.975 lat (usec): min=176, max=23198, avg=322.10, stdev=1018.42 00:19:32.975 clat percentiles (usec): 00:19:32.975 | 1.00th=[ 176], 5.00th=[ 190], 10.00th=[ 206], 20.00th=[ 233], 00:19:32.975 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 253], 00:19:32.975 | 70.00th=[ 265], 80.00th=[ 285], 90.00th=[ 371], 95.00th=[ 375], 00:19:32.975 | 99.00th=[ 449], 99.50th=[ 734], 99.90th=[ 2089], 99.95th=[ 2089], 00:19:32.975 | 99.99th=[ 2089] 00:19:32.975 bw ( KiB/s): min= 4096, max= 4096, per=31.08%, 
avg=4096.00, stdev= 0.00, samples=1 00:19:32.975 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:32.975 lat (usec) : 250=55.16%, 500=40.15%, 750=0.38%, 1000=0.19% 00:19:32.975 lat (msec) : 4=0.19%, 50=3.94% 00:19:32.975 cpu : usr=0.19%, sys=0.68%, ctx=536, majf=0, minf=1 00:19:32.975 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:32.975 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.975 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.975 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:32.975 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:32.975 job2: (groupid=0, jobs=1): err= 0: pid=1898717: Wed Apr 24 05:15:09 2024 00:19:32.975 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:19:32.975 slat (nsec): min=5439, max=58042, avg=10511.69, stdev=5272.85 00:19:32.975 clat (usec): min=272, max=671, avg=330.56, stdev=44.63 00:19:32.975 lat (usec): min=278, max=678, avg=341.08, stdev=45.43 00:19:32.975 clat percentiles (usec): 00:19:32.975 | 1.00th=[ 281], 5.00th=[ 289], 10.00th=[ 293], 20.00th=[ 302], 00:19:32.975 | 30.00th=[ 310], 40.00th=[ 318], 50.00th=[ 322], 60.00th=[ 326], 00:19:32.975 | 70.00th=[ 334], 80.00th=[ 343], 90.00th=[ 375], 95.00th=[ 457], 00:19:32.975 | 99.00th=[ 490], 99.50th=[ 506], 99.90th=[ 578], 99.95th=[ 676], 00:19:32.975 | 99.99th=[ 676] 00:19:32.975 write: IOPS=1879, BW=7516KiB/s (7697kB/s)(7524KiB/1001msec); 0 zone resets 00:19:32.975 slat (nsec): min=5848, max=55864, avg=13436.05, stdev=6358.55 00:19:32.975 clat (usec): min=174, max=465, avg=233.56, stdev=41.15 00:19:32.975 lat (usec): min=181, max=475, avg=247.00, stdev=40.65 00:19:32.975 clat percentiles (usec): 00:19:32.975 | 1.00th=[ 186], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 206], 00:19:32.975 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 231], 00:19:32.975 | 70.00th=[ 239], 80.00th=[ 247], 90.00th=[ 
273], 95.00th=[ 314], 00:19:32.975 | 99.00th=[ 400], 99.50th=[ 408], 99.90th=[ 457], 99.95th=[ 465], 00:19:32.975 | 99.99th=[ 465] 00:19:32.975 bw ( KiB/s): min= 8192, max= 8192, per=62.15%, avg=8192.00, stdev= 0.00, samples=1 00:19:32.975 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:19:32.975 lat (usec) : 250=45.30%, 500=54.40%, 750=0.29% 00:19:32.975 cpu : usr=4.00%, sys=4.80%, ctx=3417, majf=0, minf=1 00:19:32.976 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:32.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.976 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.976 issued rwts: total=1536,1881,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:32.976 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:32.976 job3: (groupid=0, jobs=1): err= 0: pid=1898720: Wed Apr 24 05:15:09 2024 00:19:32.976 read: IOPS=21, BW=85.9KiB/s (88.0kB/s)(88.0KiB/1024msec) 00:19:32.976 slat (nsec): min=6776, max=17955, avg=15358.36, stdev=2664.91 00:19:32.976 clat (usec): min=40513, max=41012, avg=40959.30, stdev=100.43 00:19:32.976 lat (usec): min=40520, max=41030, avg=40974.65, stdev=102.39 00:19:32.976 clat percentiles (usec): 00:19:32.976 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:19:32.976 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:32.976 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:32.976 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:32.976 | 99.99th=[41157] 00:19:32.976 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:19:32.976 slat (nsec): min=5779, max=39115, avg=7404.48, stdev=2852.31 00:19:32.976 clat (usec): min=197, max=346, avg=229.16, stdev=15.61 00:19:32.976 lat (usec): min=203, max=385, avg=236.56, stdev=16.18 00:19:32.976 clat percentiles (usec): 00:19:32.976 | 1.00th=[ 202], 5.00th=[ 208], 
10.00th=[ 212], 20.00th=[ 219], 00:19:32.976 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 227], 60.00th=[ 231], 00:19:32.976 | 70.00th=[ 235], 80.00th=[ 239], 90.00th=[ 249], 95.00th=[ 255], 00:19:32.976 | 99.00th=[ 273], 99.50th=[ 302], 99.90th=[ 347], 99.95th=[ 347], 00:19:32.976 | 99.99th=[ 347] 00:19:32.976 bw ( KiB/s): min= 4096, max= 4096, per=31.08%, avg=4096.00, stdev= 0.00, samples=1 00:19:32.976 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:32.976 lat (usec) : 250=88.39%, 500=7.49% 00:19:32.976 lat (msec) : 50=4.12% 00:19:32.976 cpu : usr=0.39%, sys=0.20%, ctx=534, majf=0, minf=1 00:19:32.976 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:32.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.976 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.976 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:32.976 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:32.976 00:19:32.976 Run status group 0 (all jobs): 00:19:32.976 READ: bw=7055KiB/s (7224kB/s), 81.4KiB/s-6138KiB/s (83.3kB/s-6285kB/s), io=7316KiB (7492kB), run=1001-1037msec 00:19:32.976 WRITE: bw=12.9MiB/s (13.5MB/s), 1975KiB/s-7516KiB/s (2022kB/s-7697kB/s), io=13.3MiB (14.0MB), run=1001-1037msec 00:19:32.976 00:19:32.976 Disk stats (read/write): 00:19:32.976 nvme0n1: ios=295/512, merge=0/0, ticks=730/116, in_queue=846, util=86.97% 00:19:32.976 nvme0n2: ios=67/512, merge=0/0, ticks=1048/130, in_queue=1178, util=100.00% 00:19:32.976 nvme0n3: ios=1340/1536, merge=0/0, ticks=438/351, in_queue=789, util=88.91% 00:19:32.976 nvme0n4: ios=64/512, merge=0/0, ticks=813/118, in_queue=931, util=95.46% 00:19:32.976 05:15:09 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:32.976 [global] 00:19:32.976 thread=1 00:19:32.976 invalidate=1 00:19:32.976 rw=write 00:19:32.976 
time_based=1 00:19:32.976 runtime=1 00:19:32.976 ioengine=libaio 00:19:32.976 direct=1 00:19:32.976 bs=4096 00:19:32.976 iodepth=128 00:19:32.976 norandommap=0 00:19:32.976 numjobs=1 00:19:32.976 00:19:32.976 verify_dump=1 00:19:32.976 verify_backlog=512 00:19:32.976 verify_state_save=0 00:19:32.976 do_verify=1 00:19:32.976 verify=crc32c-intel 00:19:32.976 [job0] 00:19:32.976 filename=/dev/nvme0n1 00:19:32.976 [job1] 00:19:32.976 filename=/dev/nvme0n2 00:19:32.976 [job2] 00:19:32.976 filename=/dev/nvme0n3 00:19:32.976 [job3] 00:19:32.976 filename=/dev/nvme0n4 00:19:32.976 Could not set queue depth (nvme0n1) 00:19:32.976 Could not set queue depth (nvme0n2) 00:19:32.976 Could not set queue depth (nvme0n3) 00:19:32.976 Could not set queue depth (nvme0n4) 00:19:32.976 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:32.976 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:32.976 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:32.976 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:32.976 fio-3.35 00:19:32.976 Starting 4 threads 00:19:34.367 00:19:34.367 job0: (groupid=0, jobs=1): err= 0: pid=1898961: Wed Apr 24 05:15:11 2024 00:19:34.367 read: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec) 00:19:34.367 slat (usec): min=2, max=9693, avg=99.78, stdev=549.61 00:19:34.367 clat (usec): min=5215, max=54305, avg=13154.05, stdev=2602.01 00:19:34.367 lat (usec): min=5221, max=54309, avg=13253.83, stdev=2623.73 00:19:34.367 clat percentiles (usec): 00:19:34.367 | 1.00th=[ 8225], 5.00th=[10159], 10.00th=[10814], 20.00th=[11731], 00:19:34.367 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12649], 60.00th=[12911], 00:19:34.367 | 70.00th=[13304], 80.00th=[14353], 90.00th=[15926], 95.00th=[17695], 00:19:34.367 | 99.00th=[22938], 
99.50th=[23462], 99.90th=[31589], 99.95th=[31589], 00:19:34.367 | 99.99th=[54264] 00:19:34.367 write: IOPS=4749, BW=18.6MiB/s (19.5MB/s)(18.7MiB/1007msec); 0 zone resets 00:19:34.367 slat (usec): min=3, max=17837, avg=93.98, stdev=540.33 00:19:34.367 clat (usec): min=1120, max=57759, avg=14060.41, stdev=7958.29 00:19:34.367 lat (usec): min=1130, max=57766, avg=14154.39, stdev=7995.72 00:19:34.367 clat percentiles (usec): 00:19:34.367 | 1.00th=[ 5080], 5.00th=[ 8094], 10.00th=[ 8979], 20.00th=[10290], 00:19:34.367 | 30.00th=[11600], 40.00th=[11994], 50.00th=[12256], 60.00th=[12387], 00:19:34.367 | 70.00th=[12649], 80.00th=[14484], 90.00th=[18744], 95.00th=[36439], 00:19:34.367 | 99.00th=[47449], 99.50th=[49021], 99.90th=[57934], 99.95th=[57934], 00:19:34.367 | 99.99th=[57934] 00:19:34.367 bw ( KiB/s): min=17104, max=20136, per=28.65%, avg=18620.00, stdev=2143.95, samples=2 00:19:34.367 iops : min= 4276, max= 5034, avg=4655.00, stdev=535.99, samples=2 00:19:34.367 lat (msec) : 2=0.19%, 4=0.23%, 10=10.99%, 20=82.59%, 50=5.85% 00:19:34.367 lat (msec) : 100=0.15% 00:19:34.367 cpu : usr=6.66%, sys=8.95%, ctx=532, majf=0, minf=1 00:19:34.367 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:19:34.367 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.367 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:34.367 issued rwts: total=4608,4783,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.367 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.367 job1: (groupid=0, jobs=1): err= 0: pid=1898965: Wed Apr 24 05:15:11 2024 00:19:34.367 read: IOPS=2888, BW=11.3MiB/s (11.8MB/s)(11.8MiB/1048msec) 00:19:34.367 slat (usec): min=2, max=26593, avg=174.95, stdev=989.62 00:19:34.367 clat (usec): min=9981, max=91921, avg=24100.72, stdev=13946.16 00:19:34.367 lat (usec): min=9985, max=91933, avg=24275.67, stdev=14001.94 00:19:34.367 clat percentiles (usec): 00:19:34.367 | 1.00th=[10552], 
5.00th=[13173], 10.00th=[13829], 20.00th=[15139], 00:19:34.367 | 30.00th=[16450], 40.00th=[17957], 50.00th=[19792], 60.00th=[20841], 00:19:34.367 | 70.00th=[22938], 80.00th=[27395], 90.00th=[46400], 95.00th=[53740], 00:19:34.367 | 99.00th=[79168], 99.50th=[88605], 99.90th=[88605], 99.95th=[88605], 00:19:34.367 | 99.99th=[91751] 00:19:34.367 write: IOPS=2931, BW=11.5MiB/s (12.0MB/s)(12.0MiB/1048msec); 0 zone resets 00:19:34.367 slat (usec): min=3, max=12990, avg=143.78, stdev=799.42 00:19:34.367 clat (msec): min=9, max=112, avg=18.09, stdev=13.53 00:19:34.367 lat (msec): min=10, max=112, avg=18.24, stdev=13.63 00:19:34.367 clat percentiles (msec): 00:19:34.367 | 1.00th=[ 11], 5.00th=[ 12], 10.00th=[ 13], 20.00th=[ 14], 00:19:34.367 | 30.00th=[ 14], 40.00th=[ 15], 50.00th=[ 16], 60.00th=[ 16], 00:19:34.367 | 70.00th=[ 17], 80.00th=[ 19], 90.00th=[ 22], 95.00th=[ 28], 00:19:34.367 | 99.00th=[ 100], 99.50th=[ 101], 99.90th=[ 112], 99.95th=[ 112], 00:19:34.367 | 99.99th=[ 113] 00:19:34.367 bw ( KiB/s): min=12288, max=12288, per=18.91%, avg=12288.00, stdev= 0.00, samples=2 00:19:34.367 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:19:34.367 lat (msec) : 10=0.16%, 20=69.21%, 50=24.82%, 100=5.56%, 250=0.25% 00:19:34.367 cpu : usr=4.11%, sys=6.21%, ctx=370, majf=0, minf=1 00:19:34.367 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:19:34.367 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.367 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:34.367 issued rwts: total=3027,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.367 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.367 job2: (groupid=0, jobs=1): err= 0: pid=1898966: Wed Apr 24 05:15:11 2024 00:19:34.367 read: IOPS=4273, BW=16.7MiB/s (17.5MB/s)(16.8MiB/1004msec) 00:19:34.367 slat (usec): min=3, max=13859, avg=110.99, stdev=820.80 00:19:34.367 clat (usec): min=3463, max=29125, 
avg=14665.06, stdev=2900.51 00:19:34.367 lat (usec): min=3474, max=29140, avg=14776.05, stdev=2960.15 00:19:34.367 clat percentiles (usec): 00:19:34.367 | 1.00th=[ 8717], 5.00th=[10945], 10.00th=[11600], 20.00th=[12649], 00:19:34.367 | 30.00th=[13566], 40.00th=[13829], 50.00th=[14222], 60.00th=[14746], 00:19:34.367 | 70.00th=[15270], 80.00th=[16188], 90.00th=[17957], 95.00th=[20317], 00:19:34.367 | 99.00th=[23987], 99.50th=[24773], 99.90th=[26084], 99.95th=[28181], 00:19:34.367 | 99.99th=[29230] 00:19:34.367 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:19:34.367 slat (usec): min=4, max=14031, avg=98.12, stdev=694.73 00:19:34.367 clat (usec): min=1016, max=51067, avg=13954.23, stdev=6447.06 00:19:34.367 lat (usec): min=1025, max=51075, avg=14052.35, stdev=6490.67 00:19:34.367 clat percentiles (usec): 00:19:34.367 | 1.00th=[ 5080], 5.00th=[ 7701], 10.00th=[ 8979], 20.00th=[11207], 00:19:34.367 | 30.00th=[11863], 40.00th=[12649], 50.00th=[13173], 60.00th=[13566], 00:19:34.367 | 70.00th=[13960], 80.00th=[14353], 90.00th=[19006], 95.00th=[24773], 00:19:34.367 | 99.00th=[45351], 99.50th=[47449], 99.90th=[51119], 99.95th=[51119], 00:19:34.367 | 99.99th=[51119] 00:19:34.368 bw ( KiB/s): min=16432, max=20432, per=28.37%, avg=18432.00, stdev=2828.43, samples=2 00:19:34.368 iops : min= 4108, max= 5108, avg=4608.00, stdev=707.11, samples=2 00:19:34.368 lat (msec) : 2=0.03%, 4=0.12%, 10=10.12%, 20=84.05%, 50=5.60% 00:19:34.368 lat (msec) : 100=0.07% 00:19:34.368 cpu : usr=6.78%, sys=8.47%, ctx=372, majf=0, minf=1 00:19:34.368 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:19:34.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.368 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:34.368 issued rwts: total=4291,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.368 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.368 job3: (groupid=0, 
jobs=1): err= 0: pid=1898967: Wed Apr 24 05:15:11 2024 00:19:34.368 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:19:34.368 slat (usec): min=3, max=14772, avg=117.99, stdev=747.11 00:19:34.368 clat (usec): min=4203, max=29210, avg=15077.15, stdev=2614.00 00:19:34.368 lat (usec): min=4210, max=29224, avg=15195.15, stdev=2665.37 00:19:34.368 clat percentiles (usec): 00:19:34.368 | 1.00th=[ 9241], 5.00th=[12256], 10.00th=[12518], 20.00th=[13042], 00:19:34.368 | 30.00th=[13698], 40.00th=[14091], 50.00th=[15139], 60.00th=[15533], 00:19:34.368 | 70.00th=[15926], 80.00th=[16581], 90.00th=[17957], 95.00th=[20055], 00:19:34.368 | 99.00th=[23987], 99.50th=[24249], 99.90th=[25822], 99.95th=[25822], 00:19:34.368 | 99.99th=[29230] 00:19:34.368 write: IOPS=4539, BW=17.7MiB/s (18.6MB/s)(17.8MiB/1005msec); 0 zone resets 00:19:34.368 slat (usec): min=4, max=21667, avg=102.48, stdev=594.42 00:19:34.368 clat (usec): min=1169, max=44827, avg=14372.67, stdev=5422.13 00:19:34.368 lat (usec): min=1192, max=44835, avg=14475.15, stdev=5445.09 00:19:34.368 clat percentiles (usec): 00:19:34.368 | 1.00th=[ 4948], 5.00th=[ 7767], 10.00th=[10290], 20.00th=[11600], 00:19:34.368 | 30.00th=[12387], 40.00th=[13435], 50.00th=[13829], 60.00th=[14091], 00:19:34.368 | 70.00th=[14615], 80.00th=[15139], 90.00th=[17433], 95.00th=[25560], 00:19:34.368 | 99.00th=[39584], 99.50th=[40109], 99.90th=[41681], 99.95th=[41681], 00:19:34.368 | 99.99th=[44827] 00:19:34.368 bw ( KiB/s): min=16384, max=19096, per=27.30%, avg=17740.00, stdev=1917.67, samples=2 00:19:34.368 iops : min= 4096, max= 4774, avg=4435.00, stdev=479.42, samples=2 00:19:34.368 lat (msec) : 2=0.01%, 4=0.18%, 10=5.07%, 20=88.65%, 50=6.09% 00:19:34.368 cpu : usr=5.58%, sys=10.26%, ctx=476, majf=0, minf=1 00:19:34.368 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:19:34.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.368 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.1% 00:19:34.368 issued rwts: total=4096,4562,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.368 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.368 00:19:34.368 Run status group 0 (all jobs): 00:19:34.368 READ: bw=59.7MiB/s (62.6MB/s), 11.3MiB/s-17.9MiB/s (11.8MB/s-18.7MB/s), io=62.6MiB (65.6MB), run=1004-1048msec 00:19:34.368 WRITE: bw=63.5MiB/s (66.5MB/s), 11.5MiB/s-18.6MiB/s (12.0MB/s-19.5MB/s), io=66.5MiB (69.7MB), run=1004-1048msec 00:19:34.368 00:19:34.368 Disk stats (read/write): 00:19:34.368 nvme0n1: ios=4141/4100, merge=0/0, ticks=24373/30140, in_queue=54513, util=85.97% 00:19:34.368 nvme0n2: ios=2560/2885, merge=0/0, ticks=14154/13768, in_queue=27922, util=85.93% 00:19:34.368 nvme0n3: ios=3641/3743, merge=0/0, ticks=48441/46428, in_queue=94869, util=97.16% 00:19:34.368 nvme0n4: ios=3563/3584, merge=0/0, ticks=28786/29429, in_queue=58215, util=100.00% 00:19:34.368 05:15:11 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:34.368 [global] 00:19:34.368 thread=1 00:19:34.368 invalidate=1 00:19:34.368 rw=randwrite 00:19:34.368 time_based=1 00:19:34.368 runtime=1 00:19:34.368 ioengine=libaio 00:19:34.368 direct=1 00:19:34.368 bs=4096 00:19:34.368 iodepth=128 00:19:34.368 norandommap=0 00:19:34.368 numjobs=1 00:19:34.368 00:19:34.368 verify_dump=1 00:19:34.368 verify_backlog=512 00:19:34.368 verify_state_save=0 00:19:34.368 do_verify=1 00:19:34.368 verify=crc32c-intel 00:19:34.368 [job0] 00:19:34.368 filename=/dev/nvme0n1 00:19:34.368 [job1] 00:19:34.368 filename=/dev/nvme0n2 00:19:34.368 [job2] 00:19:34.368 filename=/dev/nvme0n3 00:19:34.368 [job3] 00:19:34.368 filename=/dev/nvme0n4 00:19:34.368 Could not set queue depth (nvme0n1) 00:19:34.368 Could not set queue depth (nvme0n2) 00:19:34.368 Could not set queue depth (nvme0n3) 00:19:34.368 Could not set queue depth (nvme0n4) 00:19:34.627 job0: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:34.627 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:34.627 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:34.627 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:34.627 fio-3.35 00:19:34.627 Starting 4 threads 00:19:36.008 00:19:36.008 job0: (groupid=0, jobs=1): err= 0: pid=1899316: Wed Apr 24 05:15:12 2024 00:19:36.008 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:19:36.008 slat (usec): min=2, max=20036, avg=95.38, stdev=595.99 00:19:36.008 clat (usec): min=2053, max=45980, avg=13305.29, stdev=5417.09 00:19:36.008 lat (usec): min=2069, max=45983, avg=13400.67, stdev=5413.29 00:19:36.008 clat percentiles (usec): 00:19:36.008 | 1.00th=[ 3359], 5.00th=[ 5080], 10.00th=[ 7111], 20.00th=[11076], 00:19:36.008 | 30.00th=[12387], 40.00th=[12780], 50.00th=[13042], 60.00th=[13173], 00:19:36.008 | 70.00th=[13435], 80.00th=[13829], 90.00th=[17171], 95.00th=[22676], 00:19:36.008 | 99.00th=[38011], 99.50th=[38011], 99.90th=[45876], 99.95th=[45876], 00:19:36.008 | 99.99th=[45876] 00:19:36.008 write: IOPS=5051, BW=19.7MiB/s (20.7MB/s)(19.8MiB/1003msec); 0 zone resets 00:19:36.008 slat (usec): min=3, max=23159, avg=90.76, stdev=696.82 00:19:36.008 clat (usec): min=355, max=46045, avg=12914.32, stdev=5362.28 00:19:36.008 lat (usec): min=423, max=46050, avg=13005.08, stdev=5399.72 00:19:36.008 clat percentiles (usec): 00:19:36.008 | 1.00th=[ 1532], 5.00th=[ 4047], 10.00th=[ 7308], 20.00th=[10290], 00:19:36.008 | 30.00th=[10945], 40.00th=[11600], 50.00th=[12387], 60.00th=[12911], 00:19:36.008 | 70.00th=[13566], 80.00th=[14484], 90.00th=[20841], 95.00th=[25297], 00:19:36.008 | 99.00th=[29754], 99.50th=[29754], 99.90th=[36963], 99.95th=[36963], 00:19:36.008 | 99.99th=[45876] 
00:19:36.008 bw ( KiB/s): min=19720, max=19800, per=29.45%, avg=19760.00, stdev=56.57, samples=2 00:19:36.008 iops : min= 4930, max= 4950, avg=4940.00, stdev=14.14, samples=2 00:19:36.008 lat (usec) : 500=0.01%, 750=0.01% 00:19:36.008 lat (msec) : 2=0.65%, 4=2.38%, 10=13.27%, 20=74.76%, 50=8.92% 00:19:36.008 cpu : usr=4.79%, sys=6.89%, ctx=415, majf=0, minf=1 00:19:36.008 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:19:36.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.008 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:36.008 issued rwts: total=4608,5067,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.008 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:36.008 job1: (groupid=0, jobs=1): err= 0: pid=1899317: Wed Apr 24 05:15:12 2024 00:19:36.008 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:19:36.008 slat (usec): min=2, max=13546, avg=127.88, stdev=774.43 00:19:36.008 clat (usec): min=2405, max=42528, avg=16648.09, stdev=6655.65 00:19:36.008 lat (usec): min=2415, max=42533, avg=16775.97, stdev=6689.16 00:19:36.008 clat percentiles (usec): 00:19:36.008 | 1.00th=[ 8029], 5.00th=[ 9896], 10.00th=[11207], 20.00th=[12780], 00:19:36.008 | 30.00th=[13173], 40.00th=[13566], 50.00th=[14222], 60.00th=[14746], 00:19:36.008 | 70.00th=[16450], 80.00th=[20579], 90.00th=[27395], 95.00th=[31327], 00:19:36.008 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:19:36.008 | 99.99th=[42730] 00:19:36.008 write: IOPS=4084, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:19:36.008 slat (usec): min=3, max=8912, avg=110.00, stdev=635.10 00:19:36.008 clat (usec): min=2379, max=24830, avg=14273.77, stdev=3223.75 00:19:36.008 lat (usec): min=2397, max=24864, avg=14383.77, stdev=3226.61 00:19:36.008 clat percentiles (usec): 00:19:36.008 | 1.00th=[ 6128], 5.00th=[ 8848], 10.00th=[10028], 20.00th=[12256], 00:19:36.008 | 30.00th=[12911], 
40.00th=[13698], 50.00th=[14222], 60.00th=[14746], 00:19:36.008 | 70.00th=[15533], 80.00th=[17171], 90.00th=[17695], 95.00th=[19530], 00:19:36.008 | 99.00th=[23200], 99.50th=[23462], 99.90th=[24773], 99.95th=[24773], 00:19:36.008 | 99.99th=[24773] 00:19:36.008 bw ( KiB/s): min=12288, max=20480, per=24.41%, avg=16384.00, stdev=5792.62, samples=2 00:19:36.008 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:19:36.008 lat (msec) : 4=0.22%, 10=7.81%, 20=79.74%, 50=12.23% 00:19:36.008 cpu : usr=2.69%, sys=4.89%, ctx=344, majf=0, minf=1 00:19:36.008 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:36.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.008 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:36.008 issued rwts: total=4096,4097,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.008 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:36.008 job2: (groupid=0, jobs=1): err= 0: pid=1899320: Wed Apr 24 05:15:12 2024 00:19:36.008 read: IOPS=3445, BW=13.5MiB/s (14.1MB/s)(13.5MiB/1003msec) 00:19:36.008 slat (usec): min=3, max=14504, avg=142.31, stdev=934.84 00:19:36.008 clat (usec): min=2498, max=49331, avg=18138.00, stdev=6515.15 00:19:36.009 lat (usec): min=2513, max=49347, avg=18280.30, stdev=6559.28 00:19:36.009 clat percentiles (usec): 00:19:36.009 | 1.00th=[ 8979], 5.00th=[11338], 10.00th=[12780], 20.00th=[14091], 00:19:36.009 | 30.00th=[14877], 40.00th=[15664], 50.00th=[16319], 60.00th=[17433], 00:19:36.009 | 70.00th=[19530], 80.00th=[20841], 90.00th=[25035], 95.00th=[29230], 00:19:36.009 | 99.00th=[49021], 99.50th=[49546], 99.90th=[49546], 99.95th=[49546], 00:19:36.009 | 99.99th=[49546] 00:19:36.009 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:19:36.009 slat (usec): min=4, max=18683, avg=130.55, stdev=858.99 00:19:36.009 clat (usec): min=1776, max=53566, avg=17966.10, stdev=9014.03 00:19:36.009 lat (usec): 
min=1783, max=53591, avg=18096.64, stdev=9071.40 00:19:36.009 clat percentiles (usec): 00:19:36.009 | 1.00th=[ 6783], 5.00th=[10028], 10.00th=[11731], 20.00th=[12518], 00:19:36.009 | 30.00th=[13042], 40.00th=[13698], 50.00th=[14877], 60.00th=[15664], 00:19:36.009 | 70.00th=[17695], 80.00th=[21890], 90.00th=[30540], 95.00th=[37487], 00:19:36.009 | 99.00th=[53216], 99.50th=[53216], 99.90th=[53740], 99.95th=[53740], 00:19:36.009 | 99.99th=[53740] 00:19:36.009 bw ( KiB/s): min=13808, max=14864, per=21.36%, avg=14336.00, stdev=746.70, samples=2 00:19:36.009 iops : min= 3452, max= 3716, avg=3584.00, stdev=186.68, samples=2 00:19:36.009 lat (msec) : 2=0.11%, 4=0.17%, 10=3.14%, 20=71.53%, 50=23.69% 00:19:36.009 lat (msec) : 100=1.35% 00:19:36.009 cpu : usr=4.19%, sys=5.99%, ctx=376, majf=0, minf=1 00:19:36.009 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:19:36.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.009 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:36.009 issued rwts: total=3456,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.009 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:36.009 job3: (groupid=0, jobs=1): err= 0: pid=1899321: Wed Apr 24 05:15:12 2024 00:19:36.009 read: IOPS=3886, BW=15.2MiB/s (15.9MB/s)(15.2MiB/1004msec) 00:19:36.009 slat (usec): min=2, max=12989, avg=133.09, stdev=882.90 00:19:36.009 clat (usec): min=351, max=32842, avg=16047.28, stdev=4623.30 00:19:36.009 lat (usec): min=4196, max=34785, avg=16180.37, stdev=4676.19 00:19:36.009 clat percentiles (usec): 00:19:36.009 | 1.00th=[ 5800], 5.00th=[11076], 10.00th=[12125], 20.00th=[13304], 00:19:36.009 | 30.00th=[13698], 40.00th=[14091], 50.00th=[14484], 60.00th=[14746], 00:19:36.009 | 70.00th=[17433], 80.00th=[20317], 90.00th=[22676], 95.00th=[25035], 00:19:36.009 | 99.00th=[28181], 99.50th=[32637], 99.90th=[32900], 99.95th=[32900], 00:19:36.009 | 99.99th=[32900] 00:19:36.009 
write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:19:36.009 slat (usec): min=3, max=15658, avg=109.99, stdev=671.59 00:19:36.009 clat (usec): min=1108, max=44702, avg=15750.55, stdev=6935.06 00:19:36.009 lat (usec): min=1116, max=44749, avg=15860.54, stdev=6995.05 00:19:36.009 clat percentiles (usec): 00:19:36.009 | 1.00th=[ 3490], 5.00th=[ 6456], 10.00th=[ 8356], 20.00th=[12387], 00:19:36.009 | 30.00th=[12911], 40.00th=[13304], 50.00th=[14222], 60.00th=[14746], 00:19:36.009 | 70.00th=[15270], 80.00th=[18220], 90.00th=[28705], 95.00th=[31327], 00:19:36.009 | 99.00th=[34866], 99.50th=[39584], 99.90th=[40109], 99.95th=[42730], 00:19:36.009 | 99.99th=[44827] 00:19:36.009 bw ( KiB/s): min=12944, max=19824, per=24.41%, avg=16384.00, stdev=4864.89, samples=2 00:19:36.009 iops : min= 3236, max= 4956, avg=4096.00, stdev=1216.22, samples=2 00:19:36.009 lat (usec) : 500=0.01% 00:19:36.009 lat (msec) : 2=0.13%, 4=0.61%, 10=7.83%, 20=71.74%, 50=19.68% 00:19:36.009 cpu : usr=4.69%, sys=5.78%, ctx=446, majf=0, minf=1 00:19:36.009 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:36.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.009 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:36.009 issued rwts: total=3902,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.009 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:36.009 00:19:36.009 Run status group 0 (all jobs): 00:19:36.009 READ: bw=62.5MiB/s (65.5MB/s), 13.5MiB/s-17.9MiB/s (14.1MB/s-18.8MB/s), io=62.7MiB (65.8MB), run=1003-1004msec 00:19:36.009 WRITE: bw=65.5MiB/s (68.7MB/s), 14.0MiB/s-19.7MiB/s (14.6MB/s-20.7MB/s), io=65.8MiB (69.0MB), run=1003-1004msec 00:19:36.009 00:19:36.009 Disk stats (read/write): 00:19:36.009 nvme0n1: ios=4057/4096, merge=0/0, ticks=26863/30726, in_queue=57589, util=98.00% 00:19:36.009 nvme0n2: ios=3619/3631, merge=0/0, ticks=20832/17783, in_queue=38615, 
util=97.25% 00:19:36.009 nvme0n3: ios=2608/3072, merge=0/0, ticks=39966/45478, in_queue=85444, util=98.11% 00:19:36.009 nvme0n4: ios=3567/3584, merge=0/0, ticks=45138/41225, in_queue=86363, util=96.30% 00:19:36.009 05:15:12 -- target/fio.sh@55 -- # sync 00:19:36.009 05:15:12 -- target/fio.sh@59 -- # fio_pid=1899457 00:19:36.009 05:15:12 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:36.009 05:15:12 -- target/fio.sh@61 -- # sleep 3 00:19:36.009 [global] 00:19:36.009 thread=1 00:19:36.009 invalidate=1 00:19:36.009 rw=read 00:19:36.009 time_based=1 00:19:36.009 runtime=10 00:19:36.009 ioengine=libaio 00:19:36.009 direct=1 00:19:36.009 bs=4096 00:19:36.009 iodepth=1 00:19:36.009 norandommap=1 00:19:36.009 numjobs=1 00:19:36.009 00:19:36.009 [job0] 00:19:36.009 filename=/dev/nvme0n1 00:19:36.009 [job1] 00:19:36.009 filename=/dev/nvme0n2 00:19:36.009 [job2] 00:19:36.009 filename=/dev/nvme0n3 00:19:36.009 [job3] 00:19:36.009 filename=/dev/nvme0n4 00:19:36.009 Could not set queue depth (nvme0n1) 00:19:36.009 Could not set queue depth (nvme0n2) 00:19:36.009 Could not set queue depth (nvme0n3) 00:19:36.009 Could not set queue depth (nvme0n4) 00:19:36.009 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:36.009 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:36.009 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:36.009 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:36.009 fio-3.35 00:19:36.009 Starting 4 threads 00:19:39.288 05:15:15 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:39.288 05:15:16 -- target/fio.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:39.288 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=8974336, buflen=4096 00:19:39.288 fio: pid=1899549, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:39.288 05:15:16 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:39.288 05:15:16 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:39.288 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=6139904, buflen=4096 00:19:39.288 fio: pid=1899548, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:39.546 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=36564992, buflen=4096 00:19:39.546 fio: pid=1899546, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:39.546 05:15:16 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:39.546 05:15:16 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:39.805 05:15:16 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:39.805 05:15:16 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:39.805 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=2359296, buflen=4096 00:19:39.805 fio: pid=1899547, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:39.805 00:19:39.805 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1899546: Wed Apr 24 05:15:16 2024 00:19:39.805 read: IOPS=2594, BW=10.1MiB/s (10.6MB/s)(34.9MiB/3441msec) 00:19:39.805 slat (usec): min=5, max=11560, avg=12.90, stdev=160.63 00:19:39.805 clat (usec): min=252, 
max=41155, avg=367.29, stdev=446.90 00:19:39.805 lat (usec): min=258, max=41162, avg=380.18, stdev=475.25 00:19:39.805 clat percentiles (usec): 00:19:39.805 | 1.00th=[ 262], 5.00th=[ 273], 10.00th=[ 285], 20.00th=[ 306], 00:19:39.805 | 30.00th=[ 326], 40.00th=[ 338], 50.00th=[ 351], 60.00th=[ 363], 00:19:39.805 | 70.00th=[ 379], 80.00th=[ 400], 90.00th=[ 469], 95.00th=[ 498], 00:19:39.805 | 99.00th=[ 545], 99.50th=[ 578], 99.90th=[ 775], 99.95th=[ 2024], 00:19:39.805 | 99.99th=[41157] 00:19:39.805 bw ( KiB/s): min= 9336, max=12104, per=72.94%, avg=10364.00, stdev=969.07, samples=6 00:19:39.805 iops : min= 2334, max= 3026, avg=2591.00, stdev=242.27, samples=6 00:19:39.805 lat (usec) : 500=95.46%, 750=4.42%, 1000=0.02% 00:19:39.805 lat (msec) : 2=0.02%, 4=0.02%, 10=0.02%, 50=0.01% 00:19:39.805 cpu : usr=2.06%, sys=4.10%, ctx=8931, majf=0, minf=1 00:19:39.805 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:39.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.805 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.805 issued rwts: total=8928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:39.805 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:39.805 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1899547: Wed Apr 24 05:15:16 2024 00:19:39.805 read: IOPS=155, BW=620KiB/s (635kB/s)(2304KiB/3714msec) 00:19:39.805 slat (usec): min=6, max=15942, avg=61.43, stdev=875.76 00:19:39.805 clat (usec): min=287, max=42109, avg=6345.00, stdev=14469.05 00:19:39.805 lat (usec): min=294, max=58051, avg=6406.52, stdev=14624.72 00:19:39.805 clat percentiles (usec): 00:19:39.805 | 1.00th=[ 302], 5.00th=[ 343], 10.00th=[ 347], 20.00th=[ 355], 00:19:39.805 | 30.00th=[ 359], 40.00th=[ 363], 50.00th=[ 367], 60.00th=[ 375], 00:19:39.805 | 70.00th=[ 388], 80.00th=[ 429], 90.00th=[41157], 95.00th=[41681], 00:19:39.805 | 99.00th=[42206], 
99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:39.805 | 99.99th=[42206] 00:19:39.805 bw ( KiB/s): min= 86, max= 3944, per=4.59%, avg=652.29, stdev=1451.63, samples=7 00:19:39.805 iops : min= 21, max= 986, avg=163.00, stdev=362.94, samples=7 00:19:39.805 lat (usec) : 500=84.06%, 750=1.21% 00:19:39.805 lat (msec) : 50=14.56% 00:19:39.805 cpu : usr=0.08%, sys=0.27%, ctx=579, majf=0, minf=1 00:19:39.805 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:39.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.805 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.805 issued rwts: total=577,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:39.805 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:39.805 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1899548: Wed Apr 24 05:15:16 2024 00:19:39.805 read: IOPS=468, BW=1873KiB/s (1918kB/s)(5996KiB/3202msec) 00:19:39.805 slat (nsec): min=5897, max=63857, avg=20061.88, stdev=8116.26 00:19:39.805 clat (usec): min=311, max=42097, avg=2096.16, stdev=8080.67 00:19:39.805 lat (usec): min=327, max=42112, avg=2116.23, stdev=8080.77 00:19:39.805 clat percentiles (usec): 00:19:39.805 | 1.00th=[ 330], 5.00th=[ 351], 10.00th=[ 367], 20.00th=[ 379], 00:19:39.805 | 30.00th=[ 388], 40.00th=[ 400], 50.00th=[ 424], 60.00th=[ 461], 00:19:39.805 | 70.00th=[ 478], 80.00th=[ 494], 90.00th=[ 519], 95.00th=[ 619], 00:19:39.805 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:39.805 | 99.99th=[42206] 00:19:39.805 bw ( KiB/s): min= 96, max= 6152, per=14.01%, avg=1992.00, stdev=2940.71, samples=6 00:19:39.805 iops : min= 24, max= 1538, avg=498.00, stdev=735.18, samples=6 00:19:39.805 lat (usec) : 500=84.73%, 750=10.87%, 1000=0.20% 00:19:39.805 lat (msec) : 2=0.07%, 50=4.07% 00:19:39.805 cpu : usr=0.44%, sys=1.56%, ctx=1500, majf=0, minf=1 00:19:39.805 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:39.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.805 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.805 issued rwts: total=1500,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:39.805 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:39.805 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1899549: Wed Apr 24 05:15:16 2024 00:19:39.805 read: IOPS=744, BW=2978KiB/s (3049kB/s)(8764KiB/2943msec) 00:19:39.805 slat (nsec): min=5720, max=60371, avg=18481.55, stdev=7692.60 00:19:39.805 clat (usec): min=354, max=42101, avg=1315.49, stdev=5821.71 00:19:39.805 lat (usec): min=364, max=42116, avg=1333.97, stdev=5822.41 00:19:39.805 clat percentiles (usec): 00:19:39.805 | 1.00th=[ 363], 5.00th=[ 388], 10.00th=[ 396], 20.00th=[ 408], 00:19:39.805 | 30.00th=[ 420], 40.00th=[ 441], 50.00th=[ 457], 60.00th=[ 474], 00:19:39.805 | 70.00th=[ 494], 80.00th=[ 529], 90.00th=[ 553], 95.00th=[ 570], 00:19:39.805 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:19:39.805 | 99.99th=[42206] 00:19:39.805 bw ( KiB/s): min= 96, max= 8248, per=22.25%, avg=3161.60, stdev=4208.50, samples=5 00:19:39.805 iops : min= 24, max= 2062, avg=790.40, stdev=1052.12, samples=5 00:19:39.805 lat (usec) : 500=71.44%, 750=26.37% 00:19:39.805 lat (msec) : 2=0.05%, 50=2.10% 00:19:39.805 cpu : usr=0.61%, sys=1.67%, ctx=2195, majf=0, minf=1 00:19:39.805 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:39.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.805 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.805 issued rwts: total=2192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:39.805 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:39.805 00:19:39.805 Run status group 0 (all jobs): 
00:19:39.805 READ: bw=13.9MiB/s (14.5MB/s), 620KiB/s-10.1MiB/s (635kB/s-10.6MB/s), io=51.5MiB (54.0MB), run=2943-3714msec 00:19:39.805 00:19:39.805 Disk stats (read/write): 00:19:39.805 nvme0n1: ios=8709/0, merge=0/0, ticks=3075/0, in_queue=3075, util=95.42% 00:19:39.805 nvme0n2: ios=573/0, merge=0/0, ticks=3525/0, in_queue=3525, util=95.85% 00:19:39.805 nvme0n3: ios=1497/0, merge=0/0, ticks=3032/0, in_queue=3032, util=96.79% 00:19:39.805 nvme0n4: ios=2239/0, merge=0/0, ticks=3757/0, in_queue=3757, util=99.76% 00:19:40.064 05:15:17 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:40.064 05:15:17 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:40.322 05:15:17 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:40.322 05:15:17 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:40.580 05:15:17 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:40.580 05:15:17 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:40.838 05:15:17 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:40.838 05:15:17 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:41.096 05:15:18 -- target/fio.sh@69 -- # fio_status=0 00:19:41.096 05:15:18 -- target/fio.sh@70 -- # wait 1899457 00:19:41.096 05:15:18 -- target/fio.sh@70 -- # fio_status=4 00:19:41.096 05:15:18 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:41.096 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:41.096 05:15:18 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 
00:19:41.096 05:15:18 -- common/autotest_common.sh@1205 -- # local i=0 00:19:41.096 05:15:18 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:19:41.096 05:15:18 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:41.096 05:15:18 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:19:41.096 05:15:18 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:41.096 05:15:18 -- common/autotest_common.sh@1217 -- # return 0 00:19:41.096 05:15:18 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:41.096 05:15:18 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:41.096 nvmf hotplug test: fio failed as expected 00:19:41.096 05:15:18 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:41.353 05:15:18 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:41.353 05:15:18 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:41.353 05:15:18 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:41.353 05:15:18 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:41.353 05:15:18 -- target/fio.sh@91 -- # nvmftestfini 00:19:41.353 05:15:18 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:41.353 05:15:18 -- nvmf/common.sh@117 -- # sync 00:19:41.353 05:15:18 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:41.353 05:15:18 -- nvmf/common.sh@120 -- # set +e 00:19:41.354 05:15:18 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:41.354 05:15:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:41.354 rmmod nvme_tcp 00:19:41.612 rmmod nvme_fabrics 00:19:41.612 rmmod nvme_keyring 00:19:41.612 05:15:18 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:41.612 05:15:18 -- nvmf/common.sh@124 -- # set -e 00:19:41.612 05:15:18 -- nvmf/common.sh@125 -- # return 0 00:19:41.612 05:15:18 -- nvmf/common.sh@478 -- # '[' -n 1896814 ']' 00:19:41.612 05:15:18 -- 
nvmf/common.sh@479 -- # killprocess 1896814 00:19:41.612 05:15:18 -- common/autotest_common.sh@936 -- # '[' -z 1896814 ']' 00:19:41.612 05:15:18 -- common/autotest_common.sh@940 -- # kill -0 1896814 00:19:41.612 05:15:18 -- common/autotest_common.sh@941 -- # uname 00:19:41.612 05:15:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:41.612 05:15:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1896814 00:19:41.612 05:15:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:41.612 05:15:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:41.612 05:15:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1896814' 00:19:41.612 killing process with pid 1896814 00:19:41.612 05:15:18 -- common/autotest_common.sh@955 -- # kill 1896814 00:19:41.612 05:15:18 -- common/autotest_common.sh@960 -- # wait 1896814 00:19:41.871 05:15:18 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:41.871 05:15:18 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:41.871 05:15:18 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:41.871 05:15:18 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:41.871 05:15:18 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:41.871 05:15:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.871 05:15:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:41.871 05:15:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:43.769 05:15:20 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:43.769 00:19:43.769 real 0m23.441s 00:19:43.769 user 1m21.169s 00:19:43.769 sys 0m6.719s 00:19:43.769 05:15:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:43.769 05:15:20 -- common/autotest_common.sh@10 -- # set +x 00:19:43.769 ************************************ 00:19:43.769 END TEST nvmf_fio_target 00:19:43.769 ************************************ 00:19:43.769 05:15:21 -- 
nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:43.769 05:15:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:43.769 05:15:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:43.769 05:15:21 -- common/autotest_common.sh@10 -- # set +x 00:19:44.027 ************************************ 00:19:44.027 START TEST nvmf_bdevio 00:19:44.027 ************************************ 00:19:44.027 05:15:21 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:44.027 * Looking for test storage... 00:19:44.027 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:44.027 05:15:21 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:44.027 05:15:21 -- nvmf/common.sh@7 -- # uname -s 00:19:44.027 05:15:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:44.027 05:15:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:44.027 05:15:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:44.027 05:15:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:44.027 05:15:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:44.027 05:15:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:44.027 05:15:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:44.027 05:15:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:44.027 05:15:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:44.027 05:15:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:44.027 05:15:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:44.027 05:15:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:44.027 05:15:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:19:44.027 05:15:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:44.027 05:15:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:44.027 05:15:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:44.027 05:15:21 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:44.027 05:15:21 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:44.027 05:15:21 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:44.027 05:15:21 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:44.027 05:15:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.027 05:15:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.027 05:15:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.027 05:15:21 -- paths/export.sh@5 -- # export PATH 00:19:44.027 05:15:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.027 05:15:21 -- nvmf/common.sh@47 -- # : 0 00:19:44.027 05:15:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:44.027 05:15:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:44.027 05:15:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:44.027 05:15:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:44.027 05:15:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:44.027 05:15:21 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:44.027 05:15:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:44.027 05:15:21 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:44.027 05:15:21 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:44.027 05:15:21 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:44.027 05:15:21 -- target/bdevio.sh@14 -- # 
nvmftestinit 00:19:44.027 05:15:21 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:44.027 05:15:21 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:44.027 05:15:21 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:44.027 05:15:21 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:44.027 05:15:21 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:44.027 05:15:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:44.027 05:15:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:44.027 05:15:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:44.027 05:15:21 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:44.027 05:15:21 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:44.027 05:15:21 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:44.027 05:15:21 -- common/autotest_common.sh@10 -- # set +x 00:19:45.924 05:15:23 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:45.924 05:15:23 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:45.924 05:15:23 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:45.924 05:15:23 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:45.924 05:15:23 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:45.925 05:15:23 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:45.925 05:15:23 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:45.925 05:15:23 -- nvmf/common.sh@295 -- # net_devs=() 00:19:45.925 05:15:23 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:45.925 05:15:23 -- nvmf/common.sh@296 -- # e810=() 00:19:45.925 05:15:23 -- nvmf/common.sh@296 -- # local -ga e810 00:19:45.925 05:15:23 -- nvmf/common.sh@297 -- # x722=() 00:19:45.925 05:15:23 -- nvmf/common.sh@297 -- # local -ga x722 00:19:45.925 05:15:23 -- nvmf/common.sh@298 -- # mlx=() 00:19:45.925 05:15:23 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:45.925 05:15:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:45.925 05:15:23 -- 
nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:45.925 05:15:23 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:45.925 05:15:23 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:45.925 05:15:23 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:45.925 05:15:23 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:45.925 05:15:23 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:45.925 05:15:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:45.925 05:15:23 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:45.925 05:15:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:45.925 05:15:23 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:45.925 05:15:23 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:45.925 05:15:23 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:45.925 05:15:23 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:45.925 05:15:23 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:45.925 05:15:23 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:45.925 05:15:23 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:45.925 05:15:23 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:45.925 05:15:23 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:45.925 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:45.925 05:15:23 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:45.925 05:15:23 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:45.925 05:15:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:45.925 05:15:23 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:45.925 05:15:23 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:45.925 05:15:23 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:45.925 05:15:23 -- 
nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:45.925 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:45.925 05:15:23 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:45.925 05:15:23 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:45.925 05:15:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:45.925 05:15:23 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:45.925 05:15:23 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:45.925 05:15:23 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:45.925 05:15:23 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:45.925 05:15:23 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:45.925 05:15:23 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:45.925 05:15:23 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:45.925 05:15:23 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:45.925 05:15:23 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:45.925 05:15:23 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:45.925 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:45.925 05:15:23 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:45.925 05:15:23 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:45.925 05:15:23 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:45.925 05:15:23 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:45.925 05:15:23 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:45.925 05:15:23 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:45.925 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:45.925 05:15:23 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:45.925 05:15:23 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:45.925 05:15:23 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:45.925 05:15:23 -- nvmf/common.sh@405 -- # [[ yes == yes 
]] 00:19:45.925 05:15:23 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:45.925 05:15:23 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:45.925 05:15:23 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:45.925 05:15:23 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:45.925 05:15:23 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:45.925 05:15:23 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:45.925 05:15:23 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:45.925 05:15:23 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:45.925 05:15:23 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:45.925 05:15:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:45.925 05:15:23 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:45.925 05:15:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:46.182 05:15:23 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:46.182 05:15:23 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:46.182 05:15:23 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:46.182 05:15:23 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:46.182 05:15:23 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:46.182 05:15:23 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:46.182 05:15:23 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:46.182 05:15:23 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:46.182 05:15:23 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:46.182 05:15:23 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:46.182 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:46.182 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:19:46.182 00:19:46.182 --- 10.0.0.2 ping statistics --- 00:19:46.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.182 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:19:46.182 05:15:23 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:46.182 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:46.182 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:19:46.182 00:19:46.182 --- 10.0.0.1 ping statistics --- 00:19:46.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.182 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:19:46.182 05:15:23 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:46.182 05:15:23 -- nvmf/common.sh@411 -- # return 0 00:19:46.182 05:15:23 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:46.182 05:15:23 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:46.182 05:15:23 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:46.182 05:15:23 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:46.182 05:15:23 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:46.182 05:15:23 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:46.182 05:15:23 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:46.182 05:15:23 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:46.182 05:15:23 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:46.182 05:15:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:46.182 05:15:23 -- common/autotest_common.sh@10 -- # set +x 00:19:46.182 05:15:23 -- nvmf/common.sh@470 -- # nvmfpid=1902174 00:19:46.182 05:15:23 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:46.182 05:15:23 -- nvmf/common.sh@471 -- # waitforlisten 1902174 00:19:46.182 05:15:23 -- common/autotest_common.sh@817 
-- # '[' -z 1902174 ']' 00:19:46.182 05:15:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.182 05:15:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:46.182 05:15:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:46.182 05:15:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:46.182 05:15:23 -- common/autotest_common.sh@10 -- # set +x 00:19:46.182 [2024-04-24 05:15:23.385350] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:19:46.182 [2024-04-24 05:15:23.385435] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:46.182 EAL: No free 2048 kB hugepages reported on node 1 00:19:46.182 [2024-04-24 05:15:23.422888] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:46.440 [2024-04-24 05:15:23.454879] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:46.440 [2024-04-24 05:15:23.549210] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:46.440 [2024-04-24 05:15:23.549274] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:46.440 [2024-04-24 05:15:23.549300] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:46.440 [2024-04-24 05:15:23.549313] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:19:46.440 [2024-04-24 05:15:23.549326] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:46.440 [2024-04-24 05:15:23.549421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:46.440 [2024-04-24 05:15:23.549488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:46.440 [2024-04-24 05:15:23.549541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:46.440 [2024-04-24 05:15:23.549544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:46.440 05:15:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:46.440 05:15:23 -- common/autotest_common.sh@850 -- # return 0 00:19:46.440 05:15:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:46.440 05:15:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:46.440 05:15:23 -- common/autotest_common.sh@10 -- # set +x 00:19:46.440 05:15:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:46.440 05:15:23 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:46.440 05:15:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.440 05:15:23 -- common/autotest_common.sh@10 -- # set +x 00:19:46.440 [2024-04-24 05:15:23.690192] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:46.440 05:15:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.440 05:15:23 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:46.440 05:15:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.440 05:15:23 -- common/autotest_common.sh@10 -- # set +x 00:19:46.697 Malloc0 00:19:46.697 05:15:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.697 05:15:23 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:46.697 05:15:23 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:19:46.697 05:15:23 -- common/autotest_common.sh@10 -- # set +x 00:19:46.697 05:15:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.697 05:15:23 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:46.697 05:15:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.698 05:15:23 -- common/autotest_common.sh@10 -- # set +x 00:19:46.698 05:15:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.698 05:15:23 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:46.698 05:15:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.698 05:15:23 -- common/autotest_common.sh@10 -- # set +x 00:19:46.698 [2024-04-24 05:15:23.743688] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:46.698 05:15:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.698 05:15:23 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:46.698 05:15:23 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:46.698 05:15:23 -- nvmf/common.sh@521 -- # config=() 00:19:46.698 05:15:23 -- nvmf/common.sh@521 -- # local subsystem config 00:19:46.698 05:15:23 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:46.698 05:15:23 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:46.698 { 00:19:46.698 "params": { 00:19:46.698 "name": "Nvme$subsystem", 00:19:46.698 "trtype": "$TEST_TRANSPORT", 00:19:46.698 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.698 "adrfam": "ipv4", 00:19:46.698 "trsvcid": "$NVMF_PORT", 00:19:46.698 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.698 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.698 "hdgst": ${hdgst:-false}, 00:19:46.698 "ddgst": ${ddgst:-false} 00:19:46.698 }, 00:19:46.698 "method": "bdev_nvme_attach_controller" 00:19:46.698 } 
00:19:46.698 EOF 00:19:46.698 )") 00:19:46.698 05:15:23 -- nvmf/common.sh@543 -- # cat 00:19:46.698 05:15:23 -- nvmf/common.sh@545 -- # jq . 00:19:46.698 05:15:23 -- nvmf/common.sh@546 -- # IFS=, 00:19:46.698 05:15:23 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:19:46.698 "params": { 00:19:46.698 "name": "Nvme1", 00:19:46.698 "trtype": "tcp", 00:19:46.698 "traddr": "10.0.0.2", 00:19:46.698 "adrfam": "ipv4", 00:19:46.698 "trsvcid": "4420", 00:19:46.698 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:46.698 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:46.698 "hdgst": false, 00:19:46.698 "ddgst": false 00:19:46.698 }, 00:19:46.698 "method": "bdev_nvme_attach_controller" 00:19:46.698 }' 00:19:46.698 [2024-04-24 05:15:23.788808] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:19:46.698 [2024-04-24 05:15:23.788888] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1902206 ] 00:19:46.698 EAL: No free 2048 kB hugepages reported on node 1 00:19:46.698 [2024-04-24 05:15:23.820968] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:19:46.698 [2024-04-24 05:15:23.850568] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:46.698 [2024-04-24 05:15:23.938917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:46.698 [2024-04-24 05:15:23.938980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:46.698 [2024-04-24 05:15:23.938983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.955 I/O targets: 00:19:46.955 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:46.955 00:19:46.955 00:19:46.955 CUnit - A unit testing framework for C - Version 2.1-3 00:19:46.955 http://cunit.sourceforge.net/ 00:19:46.955 00:19:46.955 00:19:46.955 Suite: bdevio tests on: Nvme1n1 00:19:47.212 Test: blockdev write read block ...passed 00:19:47.212 Test: blockdev write zeroes read block ...passed 00:19:47.212 Test: blockdev write zeroes read no split ...passed 00:19:47.212 Test: blockdev write zeroes read split ...passed 00:19:47.212 Test: blockdev write zeroes read split partial ...passed 00:19:47.213 Test: blockdev reset ...[2024-04-24 05:15:24.403481] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:47.213 [2024-04-24 05:15:24.403600] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a50400 (9): Bad file descriptor 00:19:47.213 [2024-04-24 05:15:24.415780] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:47.213 passed 00:19:47.213 Test: blockdev write read 8 blocks ...passed 00:19:47.213 Test: blockdev write read size > 128k ...passed 00:19:47.213 Test: blockdev write read invalid size ...passed 00:19:47.471 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:47.471 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:47.471 Test: blockdev write read max offset ...passed 00:19:47.471 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:47.471 Test: blockdev writev readv 8 blocks ...passed 00:19:47.471 Test: blockdev writev readv 30 x 1block ...passed 00:19:47.471 Test: blockdev writev readv block ...passed 00:19:47.471 Test: blockdev writev readv size > 128k ...passed 00:19:47.471 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:47.471 Test: blockdev comparev and writev ...[2024-04-24 05:15:24.635225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:47.471 [2024-04-24 05:15:24.635269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:47.471 [2024-04-24 05:15:24.635295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:47.471 [2024-04-24 05:15:24.635312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:47.471 [2024-04-24 05:15:24.635677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:47.471 [2024-04-24 05:15:24.635706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:47.471 [2024-04-24 05:15:24.635729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:47.471 [2024-04-24 05:15:24.635754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:47.471 [2024-04-24 05:15:24.636097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:47.471 [2024-04-24 05:15:24.636120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:47.471 [2024-04-24 05:15:24.636142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:47.471 [2024-04-24 05:15:24.636158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:47.471 [2024-04-24 05:15:24.636506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:47.471 [2024-04-24 05:15:24.636529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:47.471 [2024-04-24 05:15:24.636551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:47.471 [2024-04-24 05:15:24.636566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:47.471 passed 00:19:47.471 Test: blockdev nvme passthru rw ...passed 00:19:47.471 Test: blockdev nvme passthru vendor specific ...[2024-04-24 05:15:24.720959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:47.471 [2024-04-24 05:15:24.720986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:47.471 [2024-04-24 05:15:24.721177] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:47.471 [2024-04-24 05:15:24.721201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:47.471 [2024-04-24 05:15:24.721384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:47.471 [2024-04-24 05:15:24.721407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:47.471 [2024-04-24 05:15:24.721592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:47.471 [2024-04-24 05:15:24.721615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:47.471 passed 00:19:47.471 Test: blockdev nvme admin passthru ...passed 00:19:47.730 Test: blockdev copy ...passed 00:19:47.730 00:19:47.730 Run Summary: Type Total Ran Passed Failed Inactive 00:19:47.730 suites 1 1 n/a 0 0 00:19:47.730 tests 23 23 23 0 0 00:19:47.730 asserts 152 152 152 0 n/a 00:19:47.730 00:19:47.730 Elapsed time = 1.158 seconds 00:19:47.730 05:15:24 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:47.730 05:15:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.730 05:15:24 -- common/autotest_common.sh@10 -- # set +x 00:19:47.730 05:15:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.730 05:15:24 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:47.730 05:15:24 -- target/bdevio.sh@30 -- # nvmftestfini 00:19:47.730 05:15:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:47.730 05:15:24 -- nvmf/common.sh@117 -- # sync 00:19:47.730 
05:15:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:47.730 05:15:24 -- nvmf/common.sh@120 -- # set +e 00:19:47.730 05:15:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:47.730 05:15:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:47.730 rmmod nvme_tcp 00:19:47.730 rmmod nvme_fabrics 00:19:47.989 rmmod nvme_keyring 00:19:47.989 05:15:25 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:47.989 05:15:25 -- nvmf/common.sh@124 -- # set -e 00:19:47.989 05:15:25 -- nvmf/common.sh@125 -- # return 0 00:19:47.989 05:15:25 -- nvmf/common.sh@478 -- # '[' -n 1902174 ']' 00:19:47.989 05:15:25 -- nvmf/common.sh@479 -- # killprocess 1902174 00:19:47.989 05:15:25 -- common/autotest_common.sh@936 -- # '[' -z 1902174 ']' 00:19:47.989 05:15:25 -- common/autotest_common.sh@940 -- # kill -0 1902174 00:19:47.989 05:15:25 -- common/autotest_common.sh@941 -- # uname 00:19:47.989 05:15:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:47.989 05:15:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1902174 00:19:47.989 05:15:25 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:19:47.989 05:15:25 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:19:47.989 05:15:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1902174' 00:19:47.989 killing process with pid 1902174 00:19:47.989 05:15:25 -- common/autotest_common.sh@955 -- # kill 1902174 00:19:47.989 05:15:25 -- common/autotest_common.sh@960 -- # wait 1902174 00:19:48.266 05:15:25 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:48.266 05:15:25 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:48.266 05:15:25 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:48.266 05:15:25 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:48.266 05:15:25 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:48.266 05:15:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:48.266 05:15:25 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:48.266 05:15:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.182 05:15:27 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:50.182 00:19:50.182 real 0m6.214s 00:19:50.182 user 0m9.836s 00:19:50.182 sys 0m2.097s 00:19:50.182 05:15:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:50.182 05:15:27 -- common/autotest_common.sh@10 -- # set +x 00:19:50.182 ************************************ 00:19:50.182 END TEST nvmf_bdevio 00:19:50.182 ************************************ 00:19:50.182 05:15:27 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:19:50.182 05:15:27 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:50.182 05:15:27 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:19:50.182 05:15:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:50.182 05:15:27 -- common/autotest_common.sh@10 -- # set +x 00:19:50.441 ************************************ 00:19:50.441 START TEST nvmf_bdevio_no_huge 00:19:50.441 ************************************ 00:19:50.441 05:15:27 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:50.441 * Looking for test storage... 
00:19:50.441 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:50.441 05:15:27 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:50.441 05:15:27 -- nvmf/common.sh@7 -- # uname -s 00:19:50.441 05:15:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:50.441 05:15:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:50.441 05:15:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:50.441 05:15:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:50.441 05:15:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:50.441 05:15:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:50.441 05:15:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:50.441 05:15:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:50.441 05:15:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:50.441 05:15:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:50.441 05:15:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:50.441 05:15:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:50.441 05:15:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:50.441 05:15:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:50.441 05:15:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:50.441 05:15:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:50.441 05:15:27 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:50.441 05:15:27 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:50.441 05:15:27 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:50.441 05:15:27 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:50.441 05:15:27 -- paths/export.sh@2 -- 
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.441 05:15:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.441 05:15:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.441 05:15:27 -- paths/export.sh@5 -- # export PATH 00:19:50.441 05:15:27 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.441 05:15:27 -- nvmf/common.sh@47 -- # : 0 00:19:50.441 05:15:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:50.441 05:15:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:50.441 05:15:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:50.441 05:15:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:50.441 05:15:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:50.441 05:15:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:50.441 05:15:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:50.441 05:15:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:50.441 05:15:27 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:50.441 05:15:27 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:50.441 05:15:27 -- target/bdevio.sh@14 -- # nvmftestinit 00:19:50.441 05:15:27 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:50.441 05:15:27 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:50.441 05:15:27 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:50.441 05:15:27 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:50.441 05:15:27 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:50.441 05:15:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:50.441 05:15:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:50.441 05:15:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.441 05:15:27 -- 
nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:50.441 05:15:27 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:50.441 05:15:27 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:50.441 05:15:27 -- common/autotest_common.sh@10 -- # set +x 00:19:52.343 05:15:29 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:52.343 05:15:29 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:52.343 05:15:29 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:52.343 05:15:29 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:52.343 05:15:29 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:52.343 05:15:29 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:52.343 05:15:29 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:52.343 05:15:29 -- nvmf/common.sh@295 -- # net_devs=() 00:19:52.343 05:15:29 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:52.343 05:15:29 -- nvmf/common.sh@296 -- # e810=() 00:19:52.343 05:15:29 -- nvmf/common.sh@296 -- # local -ga e810 00:19:52.343 05:15:29 -- nvmf/common.sh@297 -- # x722=() 00:19:52.343 05:15:29 -- nvmf/common.sh@297 -- # local -ga x722 00:19:52.343 05:15:29 -- nvmf/common.sh@298 -- # mlx=() 00:19:52.343 05:15:29 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:52.343 05:15:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:52.343 05:15:29 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:52.343 05:15:29 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:52.343 05:15:29 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:52.343 05:15:29 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:52.343 05:15:29 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:52.343 05:15:29 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:52.343 05:15:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:52.343 05:15:29 -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:52.343 05:15:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:52.343 05:15:29 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:52.343 05:15:29 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:52.344 05:15:29 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:52.344 05:15:29 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:52.344 05:15:29 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:52.344 05:15:29 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:52.344 05:15:29 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:52.344 05:15:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:52.344 05:15:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:52.344 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:52.344 05:15:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:52.344 05:15:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:52.344 05:15:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:52.344 05:15:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.344 05:15:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:52.344 05:15:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:52.344 05:15:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:52.344 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:52.344 05:15:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:52.344 05:15:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:52.344 05:15:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:52.344 05:15:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.344 05:15:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:52.344 05:15:29 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:52.344 05:15:29 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:52.344 05:15:29 -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:52.344 05:15:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:52.344 05:15:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.344 05:15:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:52.344 05:15:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.344 05:15:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:52.344 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:52.344 05:15:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.344 05:15:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:52.344 05:15:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.344 05:15:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:52.344 05:15:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.344 05:15:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:52.344 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:52.344 05:15:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.344 05:15:29 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:52.344 05:15:29 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:52.344 05:15:29 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:52.344 05:15:29 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:52.344 05:15:29 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:52.344 05:15:29 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:52.344 05:15:29 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:52.344 05:15:29 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:52.344 05:15:29 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:52.344 05:15:29 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:52.344 05:15:29 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:52.344 05:15:29 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:52.344 05:15:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:52.344 05:15:29 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:52.344 05:15:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:52.344 05:15:29 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:52.344 05:15:29 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:52.344 05:15:29 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:52.344 05:15:29 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:52.344 05:15:29 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:52.602 05:15:29 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:52.602 05:15:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:52.602 05:15:29 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:52.602 05:15:29 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:52.602 05:15:29 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:52.602 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:52.602 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:19:52.602 00:19:52.602 --- 10.0.0.2 ping statistics --- 00:19:52.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.602 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:19:52.602 05:15:29 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:52.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:52.603 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:19:52.603 00:19:52.603 --- 10.0.0.1 ping statistics --- 00:19:52.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.603 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:19:52.603 05:15:29 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:52.603 05:15:29 -- nvmf/common.sh@411 -- # return 0 00:19:52.603 05:15:29 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:52.603 05:15:29 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:52.603 05:15:29 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:52.603 05:15:29 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:52.603 05:15:29 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:52.603 05:15:29 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:52.603 05:15:29 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:52.603 05:15:29 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:52.603 05:15:29 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:52.603 05:15:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:52.603 05:15:29 -- common/autotest_common.sh@10 -- # set +x 00:19:52.603 05:15:29 -- nvmf/common.sh@470 -- # nvmfpid=1904397 00:19:52.603 05:15:29 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:52.603 05:15:29 -- nvmf/common.sh@471 -- # waitforlisten 1904397 00:19:52.603 05:15:29 -- common/autotest_common.sh@817 -- # '[' -z 1904397 ']' 00:19:52.603 05:15:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.603 05:15:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:52.603 05:15:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:52.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:52.603 05:15:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:52.603 05:15:29 -- common/autotest_common.sh@10 -- # set +x 00:19:52.603 [2024-04-24 05:15:29.739932] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:19:52.603 [2024-04-24 05:15:29.740019] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:52.603 [2024-04-24 05:15:29.788198] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:52.603 [2024-04-24 05:15:29.808314] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:52.861 [2024-04-24 05:15:29.895214] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:52.862 [2024-04-24 05:15:29.895275] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:52.862 [2024-04-24 05:15:29.895291] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:52.862 [2024-04-24 05:15:29.895305] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:52.862 [2024-04-24 05:15:29.895318] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:52.862 [2024-04-24 05:15:29.895406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:52.862 [2024-04-24 05:15:29.895459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:52.862 [2024-04-24 05:15:29.895509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:52.862 [2024-04-24 05:15:29.895512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:52.862 05:15:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:52.862 05:15:29 -- common/autotest_common.sh@850 -- # return 0 00:19:52.862 05:15:29 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:52.862 05:15:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:52.862 05:15:29 -- common/autotest_common.sh@10 -- # set +x 00:19:52.862 05:15:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:52.862 05:15:30 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:52.862 05:15:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.862 05:15:30 -- common/autotest_common.sh@10 -- # set +x 00:19:52.862 [2024-04-24 05:15:30.012767] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:52.862 05:15:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.862 05:15:30 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:52.862 05:15:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.862 05:15:30 -- common/autotest_common.sh@10 -- # set +x 00:19:52.862 Malloc0 00:19:52.862 05:15:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.862 05:15:30 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:52.862 05:15:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.862 05:15:30 -- common/autotest_common.sh@10 -- # set +x 00:19:52.862 05:15:30 -- common/autotest_common.sh@577 -- # [[ 0 
== 0 ]] 00:19:52.862 05:15:30 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:52.862 05:15:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.862 05:15:30 -- common/autotest_common.sh@10 -- # set +x 00:19:52.862 05:15:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.862 05:15:30 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:52.862 05:15:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.862 05:15:30 -- common/autotest_common.sh@10 -- # set +x 00:19:52.862 [2024-04-24 05:15:30.051599] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:52.862 05:15:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.862 05:15:30 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:52.862 05:15:30 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:52.862 05:15:30 -- nvmf/common.sh@521 -- # config=() 00:19:52.862 05:15:30 -- nvmf/common.sh@521 -- # local subsystem config 00:19:52.862 05:15:30 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:52.862 05:15:30 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:52.862 { 00:19:52.862 "params": { 00:19:52.862 "name": "Nvme$subsystem", 00:19:52.862 "trtype": "$TEST_TRANSPORT", 00:19:52.862 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:52.862 "adrfam": "ipv4", 00:19:52.862 "trsvcid": "$NVMF_PORT", 00:19:52.862 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:52.862 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:52.862 "hdgst": ${hdgst:-false}, 00:19:52.862 "ddgst": ${ddgst:-false} 00:19:52.862 }, 00:19:52.862 "method": "bdev_nvme_attach_controller" 00:19:52.862 } 00:19:52.862 EOF 00:19:52.862 )") 00:19:52.862 05:15:30 -- nvmf/common.sh@543 -- # cat 00:19:52.862 05:15:30 -- nvmf/common.sh@545 -- # jq 
. 00:19:52.862 05:15:30 -- nvmf/common.sh@546 -- # IFS=, 00:19:52.862 05:15:30 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:19:52.862 "params": { 00:19:52.862 "name": "Nvme1", 00:19:52.862 "trtype": "tcp", 00:19:52.862 "traddr": "10.0.0.2", 00:19:52.862 "adrfam": "ipv4", 00:19:52.862 "trsvcid": "4420", 00:19:52.862 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:52.862 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:52.862 "hdgst": false, 00:19:52.862 "ddgst": false 00:19:52.862 }, 00:19:52.862 "method": "bdev_nvme_attach_controller" 00:19:52.862 }' 00:19:52.862 [2024-04-24 05:15:30.095734] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:19:52.862 [2024-04-24 05:15:30.095837] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1904424 ] 00:19:53.120 [2024-04-24 05:15:30.138125] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:19:53.120 [2024-04-24 05:15:30.157887] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:53.120 [2024-04-24 05:15:30.243001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.120 [2024-04-24 05:15:30.243051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:53.120 [2024-04-24 05:15:30.243054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.378 I/O targets: 00:19:53.378 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:53.378 00:19:53.378 00:19:53.378 CUnit - A unit testing framework for C - Version 2.1-3 00:19:53.378 http://cunit.sourceforge.net/ 00:19:53.378 00:19:53.378 00:19:53.378 Suite: bdevio tests on: Nvme1n1 00:19:53.378 Test: blockdev write read block ...passed 00:19:53.378 Test: blockdev write zeroes read block ...passed 00:19:53.636 Test: blockdev write zeroes read no split ...passed 00:19:53.636 Test: blockdev write zeroes read split ...passed 00:19:53.636 Test: blockdev write zeroes read split partial ...passed 00:19:53.636 Test: blockdev reset ...[2024-04-24 05:15:30.770080] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:53.636 [2024-04-24 05:15:30.770196] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x219d840 (9): Bad file descriptor 00:19:53.636 [2024-04-24 05:15:30.825672] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:53.636 passed 00:19:53.636 Test: blockdev write read 8 blocks ...passed 00:19:53.636 Test: blockdev write read size > 128k ...passed 00:19:53.636 Test: blockdev write read invalid size ...passed 00:19:53.636 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:53.636 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:53.636 Test: blockdev write read max offset ...passed 00:19:53.894 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:53.894 Test: blockdev writev readv 8 blocks ...passed 00:19:53.894 Test: blockdev writev readv 30 x 1block ...passed 00:19:53.894 Test: blockdev writev readv block ...passed 00:19:53.894 Test: blockdev writev readv size > 128k ...passed 00:19:53.894 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:53.894 Test: blockdev comparev and writev ...[2024-04-24 05:15:31.000205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:53.894 [2024-04-24 05:15:31.000241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:53.894 [2024-04-24 05:15:31.000266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:53.894 [2024-04-24 05:15:31.000289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:53.894 [2024-04-24 05:15:31.000634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:53.894 [2024-04-24 05:15:31.000660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:53.894 [2024-04-24 05:15:31.000682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:53.894 [2024-04-24 05:15:31.000698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:53.895 [2024-04-24 05:15:31.001041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:53.895 [2024-04-24 05:15:31.001065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:53.895 [2024-04-24 05:15:31.001087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:53.895 [2024-04-24 05:15:31.001103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:53.895 [2024-04-24 05:15:31.001432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:53.895 [2024-04-24 05:15:31.001456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:53.895 [2024-04-24 05:15:31.001477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:53.895 [2024-04-24 05:15:31.001493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:53.895 passed 00:19:53.895 Test: blockdev nvme passthru rw ...passed 00:19:53.895 Test: blockdev nvme passthru vendor specific ...[2024-04-24 05:15:31.084935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:53.895 [2024-04-24 05:15:31.084963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:53.895 [2024-04-24 05:15:31.085139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:53.895 [2024-04-24 05:15:31.085163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:53.895 [2024-04-24 05:15:31.085335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:53.895 [2024-04-24 05:15:31.085358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:53.895 [2024-04-24 05:15:31.085539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:53.895 [2024-04-24 05:15:31.085562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:53.895 passed 00:19:53.895 Test: blockdev nvme admin passthru ...passed 00:19:53.895 Test: blockdev copy ...passed 00:19:53.895 00:19:53.895 Run Summary: Type Total Ran Passed Failed Inactive 00:19:53.895 suites 1 1 n/a 0 0 00:19:53.895 tests 23 23 23 0 0 00:19:53.895 asserts 152 152 152 0 n/a 00:19:53.895 00:19:53.895 Elapsed time = 1.171 seconds 00:19:54.461 05:15:31 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:54.461 05:15:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:54.461 05:15:31 -- common/autotest_common.sh@10 -- # set +x 00:19:54.461 05:15:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:54.461 05:15:31 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:54.461 05:15:31 -- target/bdevio.sh@30 -- # nvmftestfini 00:19:54.461 05:15:31 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:54.461 05:15:31 -- nvmf/common.sh@117 -- # sync 00:19:54.461 
05:15:31 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:54.461 05:15:31 -- nvmf/common.sh@120 -- # set +e 00:19:54.461 05:15:31 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:54.461 05:15:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:54.461 rmmod nvme_tcp 00:19:54.461 rmmod nvme_fabrics 00:19:54.461 rmmod nvme_keyring 00:19:54.461 05:15:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:54.461 05:15:31 -- nvmf/common.sh@124 -- # set -e 00:19:54.461 05:15:31 -- nvmf/common.sh@125 -- # return 0 00:19:54.461 05:15:31 -- nvmf/common.sh@478 -- # '[' -n 1904397 ']' 00:19:54.461 05:15:31 -- nvmf/common.sh@479 -- # killprocess 1904397 00:19:54.461 05:15:31 -- common/autotest_common.sh@936 -- # '[' -z 1904397 ']' 00:19:54.461 05:15:31 -- common/autotest_common.sh@940 -- # kill -0 1904397 00:19:54.461 05:15:31 -- common/autotest_common.sh@941 -- # uname 00:19:54.461 05:15:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:54.461 05:15:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1904397 00:19:54.461 05:15:31 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:19:54.461 05:15:31 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:19:54.461 05:15:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1904397' 00:19:54.461 killing process with pid 1904397 00:19:54.461 05:15:31 -- common/autotest_common.sh@955 -- # kill 1904397 00:19:54.461 05:15:31 -- common/autotest_common.sh@960 -- # wait 1904397 00:19:54.720 05:15:31 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:54.720 05:15:31 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:54.720 05:15:31 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:54.720 05:15:31 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:54.720 05:15:31 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:54.720 05:15:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:54.720 05:15:31 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:54.720 05:15:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:57.254 05:15:33 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:57.254 00:19:57.254 real 0m6.499s 00:19:57.254 user 0m10.799s 00:19:57.254 sys 0m2.555s 00:19:57.254 05:15:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:57.254 05:15:33 -- common/autotest_common.sh@10 -- # set +x 00:19:57.254 ************************************ 00:19:57.255 END TEST nvmf_bdevio_no_huge 00:19:57.255 ************************************ 00:19:57.255 05:15:33 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:57.255 05:15:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:57.255 05:15:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:57.255 05:15:33 -- common/autotest_common.sh@10 -- # set +x 00:19:57.255 ************************************ 00:19:57.255 START TEST nvmf_tls 00:19:57.255 ************************************ 00:19:57.255 05:15:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:57.255 * Looking for test storage... 
00:19:57.255 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:57.255 05:15:34 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:57.255 05:15:34 -- nvmf/common.sh@7 -- # uname -s 00:19:57.255 05:15:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:57.255 05:15:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:57.255 05:15:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:57.255 05:15:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:57.255 05:15:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:57.255 05:15:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:57.255 05:15:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:57.255 05:15:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:57.255 05:15:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:57.255 05:15:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:57.255 05:15:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:57.255 05:15:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:57.255 05:15:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:57.255 05:15:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:57.255 05:15:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:57.255 05:15:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:57.255 05:15:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:57.255 05:15:34 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:57.255 05:15:34 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:57.255 05:15:34 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:57.255 05:15:34 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.255 05:15:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.255 05:15:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.255 05:15:34 -- paths/export.sh@5 -- # export PATH 00:19:57.255 05:15:34 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.255 05:15:34 -- nvmf/common.sh@47 -- # : 0 00:19:57.255 05:15:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:57.255 05:15:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:57.255 05:15:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:57.255 05:15:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:57.255 05:15:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:57.255 05:15:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:57.255 05:15:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:57.255 05:15:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:57.255 05:15:34 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:57.255 05:15:34 -- target/tls.sh@62 -- # nvmftestinit 00:19:57.255 05:15:34 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:57.255 05:15:34 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:57.255 05:15:34 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:57.255 05:15:34 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:57.255 05:15:34 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:57.255 05:15:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.255 05:15:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:57.255 05:15:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:57.255 05:15:34 -- nvmf/common.sh@403 -- # [[ phy != virt 
]] 00:19:57.255 05:15:34 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:57.255 05:15:34 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:57.255 05:15:34 -- common/autotest_common.sh@10 -- # set +x 00:19:59.155 05:15:36 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:59.155 05:15:36 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:59.155 05:15:36 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:59.155 05:15:36 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:59.155 05:15:36 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:59.155 05:15:36 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:59.155 05:15:36 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:59.155 05:15:36 -- nvmf/common.sh@295 -- # net_devs=() 00:19:59.155 05:15:36 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:59.155 05:15:36 -- nvmf/common.sh@296 -- # e810=() 00:19:59.155 05:15:36 -- nvmf/common.sh@296 -- # local -ga e810 00:19:59.155 05:15:36 -- nvmf/common.sh@297 -- # x722=() 00:19:59.155 05:15:36 -- nvmf/common.sh@297 -- # local -ga x722 00:19:59.155 05:15:36 -- nvmf/common.sh@298 -- # mlx=() 00:19:59.155 05:15:36 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:59.155 05:15:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:59.155 05:15:36 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:59.155 05:15:36 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:59.155 05:15:36 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:59.155 05:15:36 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:59.155 05:15:36 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:59.155 05:15:36 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:59.155 05:15:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:59.155 05:15:36 -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:59.155 05:15:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:59.155 05:15:36 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:59.155 05:15:36 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:59.155 05:15:36 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:59.155 05:15:36 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:59.155 05:15:36 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:59.155 05:15:36 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:59.155 05:15:36 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:59.155 05:15:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:59.155 05:15:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:59.155 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:59.155 05:15:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:59.155 05:15:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:59.155 05:15:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:59.155 05:15:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:59.155 05:15:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:59.155 05:15:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:59.155 05:15:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:59.155 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:59.155 05:15:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:59.155 05:15:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:59.155 05:15:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:59.155 05:15:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:59.155 05:15:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:59.155 05:15:36 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:59.155 05:15:36 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:59.155 05:15:36 -- nvmf/common.sh@372 -- # [[ tcp == 
rdma ]] 00:19:59.155 05:15:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:59.155 05:15:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:59.155 05:15:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:59.155 05:15:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:59.155 05:15:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:59.155 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:59.155 05:15:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:59.155 05:15:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:59.155 05:15:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:59.155 05:15:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:59.155 05:15:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:59.155 05:15:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:59.155 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:59.155 05:15:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:59.155 05:15:36 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:59.155 05:15:36 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:59.155 05:15:36 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:59.155 05:15:36 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:59.155 05:15:36 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:59.155 05:15:36 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:59.155 05:15:36 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:59.155 05:15:36 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:59.155 05:15:36 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:59.155 05:15:36 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:59.155 05:15:36 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:59.155 05:15:36 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
00:19:59.155 05:15:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:59.155 05:15:36 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:59.155 05:15:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:59.155 05:15:36 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:59.155 05:15:36 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:59.155 05:15:36 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:59.155 05:15:36 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:59.155 05:15:36 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:59.155 05:15:36 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:59.155 05:15:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:59.155 05:15:36 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:59.155 05:15:36 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:59.155 05:15:36 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:59.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:59.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:19:59.155 00:19:59.155 --- 10.0.0.2 ping statistics --- 00:19:59.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.155 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:19:59.155 05:15:36 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:59.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:59.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:19:59.155 00:19:59.155 --- 10.0.0.1 ping statistics --- 00:19:59.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.155 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:19:59.155 05:15:36 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:59.155 05:15:36 -- nvmf/common.sh@411 -- # return 0 00:19:59.155 05:15:36 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:59.155 05:15:36 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:59.155 05:15:36 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:59.155 05:15:36 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:59.155 05:15:36 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:59.155 05:15:36 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:59.155 05:15:36 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:59.155 05:15:36 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:59.155 05:15:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:59.155 05:15:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:59.155 05:15:36 -- common/autotest_common.sh@10 -- # set +x 00:19:59.155 05:15:36 -- nvmf/common.sh@470 -- # nvmfpid=1906503 00:19:59.155 05:15:36 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:59.155 05:15:36 -- nvmf/common.sh@471 -- # waitforlisten 1906503 00:19:59.155 05:15:36 -- common/autotest_common.sh@817 -- # '[' -z 1906503 ']' 00:19:59.155 05:15:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.155 05:15:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:59.155 05:15:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:59.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:59.155 05:15:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:59.155 05:15:36 -- common/autotest_common.sh@10 -- # set +x 00:19:59.155 [2024-04-24 05:15:36.222263] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:19:59.155 [2024-04-24 05:15:36.222346] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:59.155 EAL: No free 2048 kB hugepages reported on node 1 00:19:59.155 [2024-04-24 05:15:36.261166] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:59.155 [2024-04-24 05:15:36.287252] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.155 [2024-04-24 05:15:36.369335] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:59.155 [2024-04-24 05:15:36.369389] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:59.155 [2024-04-24 05:15:36.369417] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:59.155 [2024-04-24 05:15:36.369429] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:59.155 [2024-04-24 05:15:36.369439] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:59.155 [2024-04-24 05:15:36.369469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:59.156 05:15:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:59.156 05:15:36 -- common/autotest_common.sh@850 -- # return 0 00:19:59.156 05:15:36 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:59.156 05:15:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:59.156 05:15:36 -- common/autotest_common.sh@10 -- # set +x 00:19:59.413 05:15:36 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:59.413 05:15:36 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:19:59.413 05:15:36 -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:59.413 true 00:19:59.413 05:15:36 -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:59.413 05:15:36 -- target/tls.sh@73 -- # jq -r .tls_version 00:19:59.670 05:15:36 -- target/tls.sh@73 -- # version=0 00:19:59.670 05:15:36 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:19:59.670 05:15:36 -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:59.928 05:15:37 -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:59.928 05:15:37 -- target/tls.sh@81 -- # jq -r .tls_version 00:20:00.187 05:15:37 -- target/tls.sh@81 -- # version=13 00:20:00.187 05:15:37 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:20:00.187 05:15:37 -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:00.446 05:15:37 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:00.446 05:15:37 -- target/tls.sh@89 -- # jq -r .tls_version 
00:20:00.705 05:15:37 -- target/tls.sh@89 -- # version=7 00:20:00.705 05:15:37 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:20:00.705 05:15:37 -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:00.705 05:15:37 -- target/tls.sh@96 -- # jq -r .enable_ktls 00:20:00.963 05:15:38 -- target/tls.sh@96 -- # ktls=false 00:20:00.963 05:15:38 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:20:00.963 05:15:38 -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:01.222 05:15:38 -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:01.222 05:15:38 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:20:01.481 05:15:38 -- target/tls.sh@104 -- # ktls=true 00:20:01.481 05:15:38 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:20:01.481 05:15:38 -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:01.741 05:15:38 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:01.741 05:15:38 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:20:02.000 05:15:39 -- target/tls.sh@112 -- # ktls=false 00:20:02.000 05:15:39 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:20:02.000 05:15:39 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:02.000 05:15:39 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:02.000 05:15:39 -- nvmf/common.sh@691 -- # local prefix key digest 00:20:02.000 05:15:39 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:20:02.000 05:15:39 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:20:02.000 05:15:39 -- nvmf/common.sh@693 -- # digest=1 00:20:02.000 05:15:39 -- nvmf/common.sh@694 -- # 
python - 00:20:02.000 05:15:39 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:02.000 05:15:39 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:02.000 05:15:39 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:02.000 05:15:39 -- nvmf/common.sh@691 -- # local prefix key digest 00:20:02.000 05:15:39 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:20:02.000 05:15:39 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:20:02.000 05:15:39 -- nvmf/common.sh@693 -- # digest=1 00:20:02.000 05:15:39 -- nvmf/common.sh@694 -- # python - 00:20:02.000 05:15:39 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:02.000 05:15:39 -- target/tls.sh@121 -- # mktemp 00:20:02.000 05:15:39 -- target/tls.sh@121 -- # key_path=/tmp/tmp.nIVh4m315a 00:20:02.000 05:15:39 -- target/tls.sh@122 -- # mktemp 00:20:02.000 05:15:39 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.cnUuWqvuQh 00:20:02.000 05:15:39 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:02.000 05:15:39 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:02.000 05:15:39 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.nIVh4m315a 00:20:02.000 05:15:39 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.cnUuWqvuQh 00:20:02.000 05:15:39 -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:02.259 05:15:39 -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:02.862 05:15:39 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.nIVh4m315a 00:20:02.862 05:15:39 -- target/tls.sh@49 -- # local key=/tmp/tmp.nIVh4m315a 00:20:02.862 05:15:39 -- target/tls.sh@51 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:02.862 [2024-04-24 05:15:40.069040] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:02.862 05:15:40 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:03.122 05:15:40 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:03.382 [2024-04-24 05:15:40.558343] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:03.382 [2024-04-24 05:15:40.558602] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:03.382 05:15:40 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:03.641 malloc0 00:20:03.641 05:15:40 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:03.899 05:15:41 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nIVh4m315a 00:20:04.157 [2024-04-24 05:15:41.380806] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:04.157 05:15:41 -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.nIVh4m315a 00:20:04.417 EAL: No free 2048 kB hugepages reported on node 1 00:20:14.405 
Initializing NVMe Controllers 00:20:14.405 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:14.405 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:14.405 Initialization complete. Launching workers. 00:20:14.405 ======================================================== 00:20:14.405 Latency(us) 00:20:14.405 Device Information : IOPS MiB/s Average min max 00:20:14.405 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7748.06 30.27 8262.56 1086.02 9560.67 00:20:14.405 ======================================================== 00:20:14.405 Total : 7748.06 30.27 8262.56 1086.02 9560.67 00:20:14.405 00:20:14.405 05:15:51 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nIVh4m315a 00:20:14.405 05:15:51 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:14.405 05:15:51 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:14.405 05:15:51 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:14.405 05:15:51 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.nIVh4m315a' 00:20:14.405 05:15:51 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:14.405 05:15:51 -- target/tls.sh@28 -- # bdevperf_pid=1908390 00:20:14.405 05:15:51 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:14.405 05:15:51 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:14.405 05:15:51 -- target/tls.sh@31 -- # waitforlisten 1908390 /var/tmp/bdevperf.sock 00:20:14.405 05:15:51 -- common/autotest_common.sh@817 -- # '[' -z 1908390 ']' 00:20:14.405 05:15:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:14.405 05:15:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:14.405 05:15:51 -- common/autotest_common.sh@824 -- # 
echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:14.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:14.405 05:15:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:14.405 05:15:51 -- common/autotest_common.sh@10 -- # set +x 00:20:14.405 [2024-04-24 05:15:51.548814] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:20:14.405 [2024-04-24 05:15:51.548901] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1908390 ] 00:20:14.405 EAL: No free 2048 kB hugepages reported on node 1 00:20:14.405 [2024-04-24 05:15:51.580778] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:14.405 [2024-04-24 05:15:51.606741] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.663 [2024-04-24 05:15:51.689081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:14.663 05:15:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:14.663 05:15:51 -- common/autotest_common.sh@850 -- # return 0 00:20:14.663 05:15:51 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nIVh4m315a 00:20:14.920 [2024-04-24 05:15:52.029835] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:14.920 [2024-04-24 05:15:52.029960] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:14.920 TLSTESTn1 
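A note on the `NVMeTLSkey-1:01:...:` strings produced earlier in this log: from the two input/output pairs shown (`00112233445566778899aabbccddeeff` and `ffeeddccbbaa99887766554433221100`), the `format_interchange_psk` shell helper appears to base64-encode the configured secret's ASCII bytes followed by a little-endian CRC32 of those bytes, with the hash identifier rendered as a two-digit hex field. The following is a minimal reconstruction of that computation in Python, not SPDK's actual helper; the function name merely mirrors the one in the trace:

```python
import base64
import zlib


def format_interchange_psk(secret: str, hash_id: int) -> str:
    """Reconstruct the TLS PSK interchange string seen in the log.

    The secret's ASCII bytes are suffixed with their little-endian
    CRC32 and base64-encoded; hash_id becomes the two-digit hex field.
    """
    raw = secret.encode("ascii")
    crc = zlib.crc32(raw).to_bytes(4, "little")
    b64 = base64.b64encode(raw + crc).decode("ascii")
    return "NVMeTLSkey-1:{:02x}:{}:".format(hash_id, b64)


if __name__ == "__main__":
    print(format_interchange_psk("00112233445566778899aabbccddeeff", 1))
    print(format_interchange_psk("ffeeddccbbaa99887766554433221100", 1))
```

Under this assumption the two printed keys reproduce the `key` and `key_2` values that the test then writes to the `mktemp` files and passes via `--psk` / `--psk-path`.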
00:20:14.920 05:15:52 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:15.178 Running I/O for 10 seconds... 00:20:25.167 00:20:25.167 Latency(us) 00:20:25.168 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.168 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:25.168 Verification LBA range: start 0x0 length 0x2000 00:20:25.168 TLSTESTn1 : 10.04 2997.05 11.71 0.00 0.00 42604.96 5946.79 72623.60 00:20:25.168 =================================================================================================================== 00:20:25.168 Total : 2997.05 11.71 0.00 0.00 42604.96 5946.79 72623.60 00:20:25.168 0 00:20:25.168 05:16:02 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:25.168 05:16:02 -- target/tls.sh@45 -- # killprocess 1908390 00:20:25.168 05:16:02 -- common/autotest_common.sh@936 -- # '[' -z 1908390 ']' 00:20:25.168 05:16:02 -- common/autotest_common.sh@940 -- # kill -0 1908390 00:20:25.168 05:16:02 -- common/autotest_common.sh@941 -- # uname 00:20:25.168 05:16:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:25.168 05:16:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1908390 00:20:25.168 05:16:02 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:25.168 05:16:02 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:25.168 05:16:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1908390' 00:20:25.168 killing process with pid 1908390 00:20:25.168 05:16:02 -- common/autotest_common.sh@955 -- # kill 1908390 00:20:25.168 Received shutdown signal, test time was about 10.000000 seconds 00:20:25.168 00:20:25.168 Latency(us) 00:20:25.168 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.168 
=================================================================================================================== 00:20:25.168 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:25.168 [2024-04-24 05:16:02.344808] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:25.168 05:16:02 -- common/autotest_common.sh@960 -- # wait 1908390 00:20:25.427 05:16:02 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cnUuWqvuQh 00:20:25.427 05:16:02 -- common/autotest_common.sh@638 -- # local es=0 00:20:25.427 05:16:02 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cnUuWqvuQh 00:20:25.427 05:16:02 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:20:25.427 05:16:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:25.427 05:16:02 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:20:25.427 05:16:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:25.427 05:16:02 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cnUuWqvuQh 00:20:25.427 05:16:02 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:25.427 05:16:02 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:25.427 05:16:02 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:25.427 05:16:02 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.cnUuWqvuQh' 00:20:25.427 05:16:02 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:25.427 05:16:02 -- target/tls.sh@28 -- # bdevperf_pid=1909591 00:20:25.427 05:16:02 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:25.427 05:16:02 -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:25.427 05:16:02 -- target/tls.sh@31 -- # waitforlisten 1909591 /var/tmp/bdevperf.sock 00:20:25.427 05:16:02 -- common/autotest_common.sh@817 -- # '[' -z 1909591 ']' 00:20:25.427 05:16:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:25.427 05:16:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:25.427 05:16:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:25.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:25.427 05:16:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:25.427 05:16:02 -- common/autotest_common.sh@10 -- # set +x 00:20:25.427 [2024-04-24 05:16:02.616324] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:20:25.427 [2024-04-24 05:16:02.616414] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1909591 ] 00:20:25.427 EAL: No free 2048 kB hugepages reported on node 1 00:20:25.427 [2024-04-24 05:16:02.654495] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:20:25.427 [2024-04-24 05:16:02.683233] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.685 [2024-04-24 05:16:02.771736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:25.685 05:16:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:25.685 05:16:02 -- common/autotest_common.sh@850 -- # return 0 00:20:25.685 05:16:02 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cnUuWqvuQh 00:20:25.945 [2024-04-24 05:16:03.090389] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:25.945 [2024-04-24 05:16:03.090502] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:25.945 [2024-04-24 05:16:03.095758] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:25.945 [2024-04-24 05:16:03.096306] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17853b0 (107): Transport endpoint is not connected 00:20:25.945 [2024-04-24 05:16:03.097294] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17853b0 (9): Bad file descriptor 00:20:25.945 [2024-04-24 05:16:03.098293] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:25.945 [2024-04-24 05:16:03.098313] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:25.945 [2024-04-24 05:16:03.098340] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:25.945 request: 00:20:25.945 { 00:20:25.945 "name": "TLSTEST", 00:20:25.945 "trtype": "tcp", 00:20:25.945 "traddr": "10.0.0.2", 00:20:25.945 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:25.945 "adrfam": "ipv4", 00:20:25.945 "trsvcid": "4420", 00:20:25.945 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:25.945 "psk": "/tmp/tmp.cnUuWqvuQh", 00:20:25.945 "method": "bdev_nvme_attach_controller", 00:20:25.945 "req_id": 1 00:20:25.945 } 00:20:25.945 Got JSON-RPC error response 00:20:25.945 response: 00:20:25.945 { 00:20:25.945 "code": -32602, 00:20:25.945 "message": "Invalid parameters" 00:20:25.945 } 00:20:25.945 05:16:03 -- target/tls.sh@36 -- # killprocess 1909591 00:20:25.945 05:16:03 -- common/autotest_common.sh@936 -- # '[' -z 1909591 ']' 00:20:25.945 05:16:03 -- common/autotest_common.sh@940 -- # kill -0 1909591 00:20:25.945 05:16:03 -- common/autotest_common.sh@941 -- # uname 00:20:25.945 05:16:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:25.945 05:16:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1909591 00:20:25.945 05:16:03 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:25.945 05:16:03 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:25.945 05:16:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1909591' 00:20:25.945 killing process with pid 1909591 00:20:25.945 05:16:03 -- common/autotest_common.sh@955 -- # kill 1909591 00:20:25.945 Received shutdown signal, test time was about 10.000000 seconds 00:20:25.945 00:20:25.945 Latency(us) 00:20:25.945 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.945 =================================================================================================================== 00:20:25.945 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:25.945 [2024-04-24 05:16:03.143799] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:25.945 05:16:03 -- common/autotest_common.sh@960 -- # wait 1909591 00:20:26.204 05:16:03 -- target/tls.sh@37 -- # return 1 00:20:26.204 05:16:03 -- common/autotest_common.sh@641 -- # es=1 00:20:26.204 05:16:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:26.205 05:16:03 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:26.205 05:16:03 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:26.205 05:16:03 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.nIVh4m315a 00:20:26.205 05:16:03 -- common/autotest_common.sh@638 -- # local es=0 00:20:26.205 05:16:03 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.nIVh4m315a 00:20:26.205 05:16:03 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:20:26.205 05:16:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:26.205 05:16:03 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:20:26.205 05:16:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:26.205 05:16:03 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.nIVh4m315a 00:20:26.205 05:16:03 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:26.205 05:16:03 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:26.205 05:16:03 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:26.205 05:16:03 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.nIVh4m315a' 00:20:26.205 05:16:03 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:26.205 05:16:03 -- target/tls.sh@28 -- # bdevperf_pid=1909727 00:20:26.205 05:16:03 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 
00:20:26.205 05:16:03 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:26.205 05:16:03 -- target/tls.sh@31 -- # waitforlisten 1909727 /var/tmp/bdevperf.sock 00:20:26.205 05:16:03 -- common/autotest_common.sh@817 -- # '[' -z 1909727 ']' 00:20:26.205 05:16:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:26.205 05:16:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:26.205 05:16:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:26.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:26.205 05:16:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:26.205 05:16:03 -- common/autotest_common.sh@10 -- # set +x 00:20:26.205 [2024-04-24 05:16:03.402772] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:20:26.205 [2024-04-24 05:16:03.402851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1909727 ] 00:20:26.205 EAL: No free 2048 kB hugepages reported on node 1 00:20:26.205 [2024-04-24 05:16:03.435102] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:20:26.205 [2024-04-24 05:16:03.462920] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.462 [2024-04-24 05:16:03.551578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.462 05:16:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:26.462 05:16:03 -- common/autotest_common.sh@850 -- # return 0 00:20:26.462 05:16:03 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.nIVh4m315a 00:20:26.721 [2024-04-24 05:16:03.931033] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:26.721 [2024-04-24 05:16:03.931161] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:26.721 [2024-04-24 05:16:03.936720] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:26.721 [2024-04-24 05:16:03.936753] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:26.721 [2024-04-24 05:16:03.936809] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:26.721 [2024-04-24 05:16:03.937144] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251d3b0 (107): Transport endpoint is not connected 00:20:26.721 [2024-04-24 05:16:03.938132] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251d3b0 (9): Bad file descriptor 00:20:26.721 [2024-04-24 05:16:03.939132] 
nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:26.721 [2024-04-24 05:16:03.939153] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:26.721 [2024-04-24 05:16:03.939182] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:26.721 request: 00:20:26.721 { 00:20:26.721 "name": "TLSTEST", 00:20:26.721 "trtype": "tcp", 00:20:26.721 "traddr": "10.0.0.2", 00:20:26.721 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:26.721 "adrfam": "ipv4", 00:20:26.721 "trsvcid": "4420", 00:20:26.721 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:26.721 "psk": "/tmp/tmp.nIVh4m315a", 00:20:26.721 "method": "bdev_nvme_attach_controller", 00:20:26.721 "req_id": 1 00:20:26.721 } 00:20:26.721 Got JSON-RPC error response 00:20:26.721 response: 00:20:26.721 { 00:20:26.721 "code": -32602, 00:20:26.721 "message": "Invalid parameters" 00:20:26.721 } 00:20:26.721 05:16:03 -- target/tls.sh@36 -- # killprocess 1909727 00:20:26.721 05:16:03 -- common/autotest_common.sh@936 -- # '[' -z 1909727 ']' 00:20:26.721 05:16:03 -- common/autotest_common.sh@940 -- # kill -0 1909727 00:20:26.721 05:16:03 -- common/autotest_common.sh@941 -- # uname 00:20:26.721 05:16:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:26.721 05:16:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1909727 00:20:26.721 05:16:03 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:26.721 05:16:03 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:26.721 05:16:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1909727' 00:20:26.721 killing process with pid 1909727 00:20:26.721 05:16:03 -- common/autotest_common.sh@955 -- # kill 1909727 00:20:26.721 Received shutdown signal, test time was about 10.000000 seconds 00:20:26.721 00:20:26.721 Latency(us) 00:20:26.721 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:20:26.721 =================================================================================================================== 00:20:26.721 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:26.721 [2024-04-24 05:16:03.990361] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:26.721 05:16:03 -- common/autotest_common.sh@960 -- # wait 1909727 00:20:26.979 05:16:04 -- target/tls.sh@37 -- # return 1 00:20:26.979 05:16:04 -- common/autotest_common.sh@641 -- # es=1 00:20:26.979 05:16:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:26.979 05:16:04 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:26.979 05:16:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:26.979 05:16:04 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.nIVh4m315a 00:20:26.979 05:16:04 -- common/autotest_common.sh@638 -- # local es=0 00:20:26.979 05:16:04 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.nIVh4m315a 00:20:26.979 05:16:04 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:20:26.979 05:16:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:26.979 05:16:04 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:20:26.979 05:16:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:26.979 05:16:04 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.nIVh4m315a 00:20:26.979 05:16:04 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:26.979 05:16:04 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:26.979 05:16:04 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:26.979 05:16:04 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.nIVh4m315a' 
00:20:26.979 05:16:04 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:26.979 05:16:04 -- target/tls.sh@28 -- # bdevperf_pid=1909866 00:20:26.979 05:16:04 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:26.979 05:16:04 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:26.979 05:16:04 -- target/tls.sh@31 -- # waitforlisten 1909866 /var/tmp/bdevperf.sock 00:20:26.979 05:16:04 -- common/autotest_common.sh@817 -- # '[' -z 1909866 ']' 00:20:26.979 05:16:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:26.979 05:16:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:26.979 05:16:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:26.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:26.979 05:16:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:26.979 05:16:04 -- common/autotest_common.sh@10 -- # set +x 00:20:27.237 [2024-04-24 05:16:04.254669] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:20:27.237 [2024-04-24 05:16:04.254746] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1909866 ] 00:20:27.237 EAL: No free 2048 kB hugepages reported on node 1 00:20:27.237 [2024-04-24 05:16:04.285406] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:20:27.237 [2024-04-24 05:16:04.312692] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.237 [2024-04-24 05:16:04.396412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:27.237 05:16:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:27.237 05:16:04 -- common/autotest_common.sh@850 -- # return 0 00:20:27.237 05:16:04 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nIVh4m315a 00:20:27.495 [2024-04-24 05:16:04.723573] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:27.495 [2024-04-24 05:16:04.723734] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:27.495 [2024-04-24 05:16:04.729035] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:27.495 [2024-04-24 05:16:04.729067] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:27.495 [2024-04-24 05:16:04.729108] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:27.495 [2024-04-24 05:16:04.729592] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (107): Transport endpoint is not connected 00:20:27.495 [2024-04-24 05:16:04.730580] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:20:27.495 [2024-04-24 05:16:04.731579] 
nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:27.495 [2024-04-24 05:16:04.731620] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:27.495 [2024-04-24 05:16:04.731643] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:20:27.495 request: 00:20:27.495 { 00:20:27.495 "name": "TLSTEST", 00:20:27.495 "trtype": "tcp", 00:20:27.495 "traddr": "10.0.0.2", 00:20:27.495 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:27.495 "adrfam": "ipv4", 00:20:27.495 "trsvcid": "4420", 00:20:27.495 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:27.495 "psk": "/tmp/tmp.nIVh4m315a", 00:20:27.495 "method": "bdev_nvme_attach_controller", 00:20:27.495 "req_id": 1 00:20:27.495 } 00:20:27.495 Got JSON-RPC error response 00:20:27.495 response: 00:20:27.495 { 00:20:27.495 "code": -32602, 00:20:27.495 "message": "Invalid parameters" 00:20:27.495 } 00:20:27.495 05:16:04 -- target/tls.sh@36 -- # killprocess 1909866 00:20:27.495 05:16:04 -- common/autotest_common.sh@936 -- # '[' -z 1909866 ']' 00:20:27.495 05:16:04 -- common/autotest_common.sh@940 -- # kill -0 1909866 00:20:27.495 05:16:04 -- common/autotest_common.sh@941 -- # uname 00:20:27.495 05:16:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:27.495 05:16:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1909866 00:20:27.753 05:16:04 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:27.753 05:16:04 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:27.753 05:16:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1909866' 00:20:27.753 killing process with pid 1909866 00:20:27.753 05:16:04 -- common/autotest_common.sh@955 -- # kill 1909866 00:20:27.753 Received shutdown signal, test time was about 10.000000 seconds 00:20:27.753 00:20:27.753 Latency(us) 00:20:27.753 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:20:27.753 =================================================================================================================== 00:20:27.753 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:27.753 [2024-04-24 05:16:04.783808] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:27.753 05:16:04 -- common/autotest_common.sh@960 -- # wait 1909866 00:20:27.753 05:16:04 -- target/tls.sh@37 -- # return 1 00:20:27.753 05:16:04 -- common/autotest_common.sh@641 -- # es=1 00:20:27.753 05:16:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:27.753 05:16:04 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:27.753 05:16:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:27.753 05:16:04 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:27.753 05:16:04 -- common/autotest_common.sh@638 -- # local es=0 00:20:27.753 05:16:04 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:27.753 05:16:04 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:20:27.753 05:16:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:27.753 05:16:04 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:20:27.753 05:16:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:27.753 05:16:05 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:27.753 05:16:05 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:27.753 05:16:05 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:27.753 05:16:05 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:27.753 05:16:05 -- target/tls.sh@23 -- # psk= 00:20:27.753 05:16:05 -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:27.753 05:16:05 -- target/tls.sh@28 -- # bdevperf_pid=1909913 00:20:27.753 05:16:05 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:27.753 05:16:05 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:27.753 05:16:05 -- target/tls.sh@31 -- # waitforlisten 1909913 /var/tmp/bdevperf.sock 00:20:27.753 05:16:05 -- common/autotest_common.sh@817 -- # '[' -z 1909913 ']' 00:20:27.753 05:16:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:27.753 05:16:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:27.753 05:16:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:27.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:27.753 05:16:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:27.753 05:16:05 -- common/autotest_common.sh@10 -- # set +x 00:20:28.015 [2024-04-24 05:16:05.047561] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:20:28.015 [2024-04-24 05:16:05.047674] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1909913 ] 00:20:28.015 EAL: No free 2048 kB hugepages reported on node 1 00:20:28.015 [2024-04-24 05:16:05.081188] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:20:28.015 [2024-04-24 05:16:05.108453] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.015 [2024-04-24 05:16:05.189475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:28.273 05:16:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:28.273 05:16:05 -- common/autotest_common.sh@850 -- # return 0 00:20:28.273 05:16:05 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:28.531 [2024-04-24 05:16:05.571058] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:28.531 [2024-04-24 05:16:05.572890] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f49c0 (9): Bad file descriptor 00:20:28.531 [2024-04-24 05:16:05.573887] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:28.531 [2024-04-24 05:16:05.573910] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:28.531 [2024-04-24 05:16:05.573938] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:28.531 request: 00:20:28.531 { 00:20:28.531 "name": "TLSTEST", 00:20:28.531 "trtype": "tcp", 00:20:28.531 "traddr": "10.0.0.2", 00:20:28.531 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:28.531 "adrfam": "ipv4", 00:20:28.531 "trsvcid": "4420", 00:20:28.531 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:28.531 "method": "bdev_nvme_attach_controller", 00:20:28.531 "req_id": 1 00:20:28.531 } 00:20:28.531 Got JSON-RPC error response 00:20:28.531 response: 00:20:28.531 { 00:20:28.531 "code": -32602, 00:20:28.531 "message": "Invalid parameters" 00:20:28.531 } 00:20:28.531 05:16:05 -- target/tls.sh@36 -- # killprocess 1909913 00:20:28.531 05:16:05 -- common/autotest_common.sh@936 -- # '[' -z 1909913 ']' 00:20:28.531 05:16:05 -- common/autotest_common.sh@940 -- # kill -0 1909913 00:20:28.531 05:16:05 -- common/autotest_common.sh@941 -- # uname 00:20:28.531 05:16:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:28.531 05:16:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1909913 00:20:28.531 05:16:05 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:28.531 05:16:05 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:28.531 05:16:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1909913' 00:20:28.531 killing process with pid 1909913 00:20:28.531 05:16:05 -- common/autotest_common.sh@955 -- # kill 1909913 00:20:28.531 Received shutdown signal, test time was about 10.000000 seconds 00:20:28.531 00:20:28.531 Latency(us) 00:20:28.531 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:28.531 =================================================================================================================== 00:20:28.531 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:28.531 05:16:05 -- common/autotest_common.sh@960 -- # wait 1909913 00:20:28.789 05:16:05 -- target/tls.sh@37 -- # return 1 00:20:28.789 05:16:05 -- 
common/autotest_common.sh@641 -- # es=1 00:20:28.789 05:16:05 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:28.789 05:16:05 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:28.789 05:16:05 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:28.789 05:16:05 -- target/tls.sh@158 -- # killprocess 1906503 00:20:28.789 05:16:05 -- common/autotest_common.sh@936 -- # '[' -z 1906503 ']' 00:20:28.789 05:16:05 -- common/autotest_common.sh@940 -- # kill -0 1906503 00:20:28.789 05:16:05 -- common/autotest_common.sh@941 -- # uname 00:20:28.789 05:16:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:28.789 05:16:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1906503 00:20:28.789 05:16:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:28.789 05:16:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:28.789 05:16:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1906503' 00:20:28.789 killing process with pid 1906503 00:20:28.789 05:16:05 -- common/autotest_common.sh@955 -- # kill 1906503 00:20:28.789 [2024-04-24 05:16:05.861445] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:28.789 05:16:05 -- common/autotest_common.sh@960 -- # wait 1906503 00:20:29.047 05:16:06 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:29.047 05:16:06 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:29.047 05:16:06 -- nvmf/common.sh@691 -- # local prefix key digest 00:20:29.047 05:16:06 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:20:29.047 05:16:06 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:29.047 05:16:06 -- nvmf/common.sh@693 -- # digest=2 00:20:29.047 05:16:06 -- nvmf/common.sh@694 -- # python - 00:20:29.047 05:16:06 -- 
target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:29.047 05:16:06 -- target/tls.sh@160 -- # mktemp 00:20:29.047 05:16:06 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.13fwfAlw0c 00:20:29.047 05:16:06 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:29.047 05:16:06 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.13fwfAlw0c 00:20:29.047 05:16:06 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:20:29.047 05:16:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:29.047 05:16:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:29.047 05:16:06 -- common/autotest_common.sh@10 -- # set +x 00:20:29.047 05:16:06 -- nvmf/common.sh@470 -- # nvmfpid=1910089 00:20:29.047 05:16:06 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:29.047 05:16:06 -- nvmf/common.sh@471 -- # waitforlisten 1910089 00:20:29.047 05:16:06 -- common/autotest_common.sh@817 -- # '[' -z 1910089 ']' 00:20:29.047 05:16:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.047 05:16:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:29.047 05:16:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:29.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:29.047 05:16:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:29.047 05:16:06 -- common/autotest_common.sh@10 -- # set +x 00:20:29.047 [2024-04-24 05:16:06.188458] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 
00:20:29.047 [2024-04-24 05:16:06.188535] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:29.047 EAL: No free 2048 kB hugepages reported on node 1 00:20:29.047 [2024-04-24 05:16:06.226419] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:29.047 [2024-04-24 05:16:06.253348] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.331 [2024-04-24 05:16:06.335915] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:29.331 [2024-04-24 05:16:06.335972] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:29.331 [2024-04-24 05:16:06.336001] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:29.331 [2024-04-24 05:16:06.336019] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:29.331 [2024-04-24 05:16:06.336029] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:29.331 [2024-04-24 05:16:06.336073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:29.331 05:16:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:29.331 05:16:06 -- common/autotest_common.sh@850 -- # return 0 00:20:29.331 05:16:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:29.331 05:16:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:29.331 05:16:06 -- common/autotest_common.sh@10 -- # set +x 00:20:29.331 05:16:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:29.331 05:16:06 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.13fwfAlw0c 00:20:29.331 05:16:06 -- target/tls.sh@49 -- # local key=/tmp/tmp.13fwfAlw0c 00:20:29.331 05:16:06 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:29.589 [2024-04-24 05:16:06.670604] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:29.589 05:16:06 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:29.846 05:16:06 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:30.103 [2024-04-24 05:16:07.155885] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:30.103 [2024-04-24 05:16:07.156112] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:30.103 05:16:07 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:30.360 malloc0 00:20:30.360 05:16:07 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
00:20:30.618 05:16:07 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.13fwfAlw0c 00:20:30.618 [2024-04-24 05:16:07.873589] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:30.877 05:16:07 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.13fwfAlw0c 00:20:30.877 05:16:07 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:30.877 05:16:07 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:30.877 05:16:07 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:30.877 05:16:07 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.13fwfAlw0c' 00:20:30.877 05:16:07 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:30.877 05:16:07 -- target/tls.sh@28 -- # bdevperf_pid=1910310 00:20:30.877 05:16:07 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:30.877 05:16:07 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:30.877 05:16:07 -- target/tls.sh@31 -- # waitforlisten 1910310 /var/tmp/bdevperf.sock 00:20:30.877 05:16:07 -- common/autotest_common.sh@817 -- # '[' -z 1910310 ']' 00:20:30.877 05:16:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:30.877 05:16:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:30.877 05:16:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:30.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:30.877 05:16:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:30.877 05:16:07 -- common/autotest_common.sh@10 -- # set +x 00:20:30.877 [2024-04-24 05:16:07.930994] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:20:30.878 [2024-04-24 05:16:07.931073] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1910310 ] 00:20:30.878 EAL: No free 2048 kB hugepages reported on node 1 00:20:30.878 [2024-04-24 05:16:07.964026] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:30.878 [2024-04-24 05:16:07.991761] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.878 [2024-04-24 05:16:08.076303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:31.136 05:16:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:31.136 05:16:08 -- common/autotest_common.sh@850 -- # return 0 00:20:31.136 05:16:08 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.13fwfAlw0c 00:20:31.395 [2024-04-24 05:16:08.414522] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:31.395 [2024-04-24 05:16:08.414710] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:31.395 TLSTESTn1 00:20:31.395 05:16:08 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:31.395 Running I/O 
for 10 seconds... 00:20:43.613 00:20:43.613 Latency(us) 00:20:43.613 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.613 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:43.613 Verification LBA range: start 0x0 length 0x2000 00:20:43.613 TLSTESTn1 : 10.04 3039.87 11.87 0.00 0.00 42005.68 5752.60 67963.26 00:20:43.613 =================================================================================================================== 00:20:43.613 Total : 3039.87 11.87 0.00 0.00 42005.68 5752.60 67963.26 00:20:43.613 0 00:20:43.613 05:16:18 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:43.613 05:16:18 -- target/tls.sh@45 -- # killprocess 1910310 00:20:43.613 05:16:18 -- common/autotest_common.sh@936 -- # '[' -z 1910310 ']' 00:20:43.613 05:16:18 -- common/autotest_common.sh@940 -- # kill -0 1910310 00:20:43.613 05:16:18 -- common/autotest_common.sh@941 -- # uname 00:20:43.613 05:16:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:43.613 05:16:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1910310 00:20:43.613 05:16:18 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:43.613 05:16:18 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:43.613 05:16:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1910310' 00:20:43.613 killing process with pid 1910310 00:20:43.613 05:16:18 -- common/autotest_common.sh@955 -- # kill 1910310 00:20:43.613 Received shutdown signal, test time was about 10.000000 seconds 00:20:43.613 00:20:43.613 Latency(us) 00:20:43.613 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.613 =================================================================================================================== 00:20:43.613 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:43.613 [2024-04-24 05:16:18.706723] app.c: 937:log_deprecation_hits: 
*WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:43.613 05:16:18 -- common/autotest_common.sh@960 -- # wait 1910310 00:20:43.613 05:16:18 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.13fwfAlw0c 00:20:43.613 05:16:18 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.13fwfAlw0c 00:20:43.613 05:16:18 -- common/autotest_common.sh@638 -- # local es=0 00:20:43.613 05:16:18 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.13fwfAlw0c 00:20:43.613 05:16:18 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:20:43.613 05:16:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:43.613 05:16:18 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:20:43.613 05:16:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:43.613 05:16:18 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.13fwfAlw0c 00:20:43.613 05:16:18 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:43.613 05:16:18 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:43.613 05:16:18 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:43.613 05:16:18 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.13fwfAlw0c' 00:20:43.613 05:16:18 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:43.613 05:16:18 -- target/tls.sh@28 -- # bdevperf_pid=1911626 00:20:43.613 05:16:18 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:43.613 05:16:18 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:43.613 05:16:18 -- target/tls.sh@31 -- # waitforlisten 1911626 /var/tmp/bdevperf.sock 00:20:43.613 05:16:18 -- 
common/autotest_common.sh@817 -- # '[' -z 1911626 ']' 00:20:43.613 05:16:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:43.613 05:16:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:43.613 05:16:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:43.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:43.613 05:16:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:43.613 05:16:18 -- common/autotest_common.sh@10 -- # set +x 00:20:43.613 [2024-04-24 05:16:18.981313] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:20:43.613 [2024-04-24 05:16:18.981389] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1911626 ] 00:20:43.613 EAL: No free 2048 kB hugepages reported on node 1 00:20:43.613 [2024-04-24 05:16:19.013513] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:20:43.613 [2024-04-24 05:16:19.041513] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.613 [2024-04-24 05:16:19.124871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:43.613 05:16:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:43.613 05:16:19 -- common/autotest_common.sh@850 -- # return 0 00:20:43.613 05:16:19 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.13fwfAlw0c 00:20:43.613 [2024-04-24 05:16:19.495484] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:43.613 [2024-04-24 05:16:19.495573] bdev_nvme.c:6067:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:43.613 [2024-04-24 05:16:19.495588] bdev_nvme.c:6176:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.13fwfAlw0c 00:20:43.613 request: 00:20:43.613 { 00:20:43.613 "name": "TLSTEST", 00:20:43.613 "trtype": "tcp", 00:20:43.613 "traddr": "10.0.0.2", 00:20:43.613 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:43.613 "adrfam": "ipv4", 00:20:43.613 "trsvcid": "4420", 00:20:43.613 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:43.613 "psk": "/tmp/tmp.13fwfAlw0c", 00:20:43.613 "method": "bdev_nvme_attach_controller", 00:20:43.613 "req_id": 1 00:20:43.613 } 00:20:43.613 Got JSON-RPC error response 00:20:43.613 response: 00:20:43.613 { 00:20:43.613 "code": -1, 00:20:43.613 "message": "Operation not permitted" 00:20:43.613 } 00:20:43.613 05:16:19 -- target/tls.sh@36 -- # killprocess 1911626 00:20:43.613 05:16:19 -- common/autotest_common.sh@936 -- # '[' -z 1911626 ']' 00:20:43.613 05:16:19 -- common/autotest_common.sh@940 -- # kill -0 1911626 00:20:43.613 05:16:19 -- common/autotest_common.sh@941 -- # uname 00:20:43.613 05:16:19 -- common/autotest_common.sh@941 
-- # '[' Linux = Linux ']' 00:20:43.613 05:16:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1911626 00:20:43.613 05:16:19 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:43.613 05:16:19 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:43.613 05:16:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1911626' 00:20:43.613 killing process with pid 1911626 00:20:43.613 05:16:19 -- common/autotest_common.sh@955 -- # kill 1911626 00:20:43.613 Received shutdown signal, test time was about 10.000000 seconds 00:20:43.613 00:20:43.613 Latency(us) 00:20:43.613 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.613 =================================================================================================================== 00:20:43.613 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:43.613 05:16:19 -- common/autotest_common.sh@960 -- # wait 1911626 00:20:43.613 05:16:19 -- target/tls.sh@37 -- # return 1 00:20:43.613 05:16:19 -- common/autotest_common.sh@641 -- # es=1 00:20:43.613 05:16:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:43.613 05:16:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:43.613 05:16:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:43.613 05:16:19 -- target/tls.sh@174 -- # killprocess 1910089 00:20:43.613 05:16:19 -- common/autotest_common.sh@936 -- # '[' -z 1910089 ']' 00:20:43.613 05:16:19 -- common/autotest_common.sh@940 -- # kill -0 1910089 00:20:43.613 05:16:19 -- common/autotest_common.sh@941 -- # uname 00:20:43.613 05:16:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:43.613 05:16:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1910089 00:20:43.613 05:16:19 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:43.613 05:16:19 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:43.614 05:16:19 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 1910089' 00:20:43.614 killing process with pid 1910089 00:20:43.614 05:16:19 -- common/autotest_common.sh@955 -- # kill 1910089 00:20:43.614 [2024-04-24 05:16:19.787798] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:43.614 05:16:19 -- common/autotest_common.sh@960 -- # wait 1910089 00:20:43.614 05:16:20 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:20:43.614 05:16:20 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:43.614 05:16:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:43.614 05:16:20 -- common/autotest_common.sh@10 -- # set +x 00:20:43.614 05:16:20 -- nvmf/common.sh@470 -- # nvmfpid=1911772 00:20:43.614 05:16:20 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:43.614 05:16:20 -- nvmf/common.sh@471 -- # waitforlisten 1911772 00:20:43.614 05:16:20 -- common/autotest_common.sh@817 -- # '[' -z 1911772 ']' 00:20:43.614 05:16:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.614 05:16:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:43.614 05:16:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:43.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:43.614 05:16:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:43.614 05:16:20 -- common/autotest_common.sh@10 -- # set +x 00:20:43.614 [2024-04-24 05:16:20.096140] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 
00:20:43.614 [2024-04-24 05:16:20.096228] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:43.614 EAL: No free 2048 kB hugepages reported on node 1 00:20:43.614 [2024-04-24 05:16:20.140410] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:43.614 [2024-04-24 05:16:20.172859] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.614 [2024-04-24 05:16:20.267290] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:43.614 [2024-04-24 05:16:20.267360] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:43.614 [2024-04-24 05:16:20.267391] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:43.614 [2024-04-24 05:16:20.267411] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:43.614 [2024-04-24 05:16:20.267422] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:43.614 [2024-04-24 05:16:20.267472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:43.614 05:16:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:43.614 05:16:20 -- common/autotest_common.sh@850 -- # return 0 00:20:43.614 05:16:20 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:43.614 05:16:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:43.614 05:16:20 -- common/autotest_common.sh@10 -- # set +x 00:20:43.614 05:16:20 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:43.614 05:16:20 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.13fwfAlw0c 00:20:43.614 05:16:20 -- common/autotest_common.sh@638 -- # local es=0 00:20:43.614 05:16:20 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.13fwfAlw0c 00:20:43.614 05:16:20 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:20:43.614 05:16:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:43.614 05:16:20 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:20:43.614 05:16:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:43.614 05:16:20 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.13fwfAlw0c 00:20:43.614 05:16:20 -- target/tls.sh@49 -- # local key=/tmp/tmp.13fwfAlw0c 00:20:43.614 05:16:20 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:43.614 [2024-04-24 05:16:20.673430] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:43.614 05:16:20 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:43.872 05:16:20 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 -k 00:20:44.132 [2024-04-24 05:16:21.146709] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:44.132 [2024-04-24 05:16:21.146966] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:44.132 05:16:21 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:44.132 malloc0 00:20:44.391 05:16:21 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:44.391 05:16:21 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.13fwfAlw0c 00:20:44.649 [2024-04-24 05:16:21.868069] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:44.649 [2024-04-24 05:16:21.868117] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:44.649 [2024-04-24 05:16:21.868148] subsystem.c: 971:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:20:44.649 request: 00:20:44.649 { 00:20:44.649 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.649 "host": "nqn.2016-06.io.spdk:host1", 00:20:44.650 "psk": "/tmp/tmp.13fwfAlw0c", 00:20:44.650 "method": "nvmf_subsystem_add_host", 00:20:44.650 "req_id": 1 00:20:44.650 } 00:20:44.650 Got JSON-RPC error response 00:20:44.650 response: 00:20:44.650 { 00:20:44.650 "code": -32603, 00:20:44.650 "message": "Internal error" 00:20:44.650 } 00:20:44.650 05:16:21 -- common/autotest_common.sh@641 -- # es=1 00:20:44.650 05:16:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:44.650 05:16:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:44.650 05:16:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:44.650 05:16:21 -- target/tls.sh@180 -- # killprocess 1911772 
00:20:44.650 05:16:21 -- common/autotest_common.sh@936 -- # '[' -z 1911772 ']' 00:20:44.650 05:16:21 -- common/autotest_common.sh@940 -- # kill -0 1911772 00:20:44.650 05:16:21 -- common/autotest_common.sh@941 -- # uname 00:20:44.650 05:16:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:44.650 05:16:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1911772 00:20:44.650 05:16:21 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:44.650 05:16:21 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:44.650 05:16:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1911772' 00:20:44.650 killing process with pid 1911772 00:20:44.650 05:16:21 -- common/autotest_common.sh@955 -- # kill 1911772 00:20:44.650 05:16:21 -- common/autotest_common.sh@960 -- # wait 1911772 00:20:44.909 05:16:22 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.13fwfAlw0c 00:20:44.909 05:16:22 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:44.909 05:16:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:44.909 05:16:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:44.909 05:16:22 -- common/autotest_common.sh@10 -- # set +x 00:20:44.909 05:16:22 -- nvmf/common.sh@470 -- # nvmfpid=1912064 00:20:44.909 05:16:22 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:44.909 05:16:22 -- nvmf/common.sh@471 -- # waitforlisten 1912064 00:20:44.909 05:16:22 -- common/autotest_common.sh@817 -- # '[' -z 1912064 ']' 00:20:44.909 05:16:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.909 05:16:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:44.909 05:16:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:44.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.909 05:16:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:45.170 05:16:22 -- common/autotest_common.sh@10 -- # set +x 00:20:45.170 [2024-04-24 05:16:22.224641] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:20:45.170 [2024-04-24 05:16:22.224736] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:45.170 EAL: No free 2048 kB hugepages reported on node 1 00:20:45.170 [2024-04-24 05:16:22.262334] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:45.170 [2024-04-24 05:16:22.292442] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.170 [2024-04-24 05:16:22.382272] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:45.170 [2024-04-24 05:16:22.382330] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:45.170 [2024-04-24 05:16:22.382344] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:45.170 [2024-04-24 05:16:22.382356] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:45.170 [2024-04-24 05:16:22.382366] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:45.170 [2024-04-24 05:16:22.382396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:45.428 05:16:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:45.428 05:16:22 -- common/autotest_common.sh@850 -- # return 0 00:20:45.428 05:16:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:45.428 05:16:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:45.428 05:16:22 -- common/autotest_common.sh@10 -- # set +x 00:20:45.428 05:16:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:45.428 05:16:22 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.13fwfAlw0c 00:20:45.428 05:16:22 -- target/tls.sh@49 -- # local key=/tmp/tmp.13fwfAlw0c 00:20:45.428 05:16:22 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:45.686 [2024-04-24 05:16:22.740035] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:45.686 05:16:22 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:45.944 05:16:23 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:46.202 [2024-04-24 05:16:23.233324] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:46.202 [2024-04-24 05:16:23.233573] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:46.202 05:16:23 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:46.462 malloc0 00:20:46.462 05:16:23 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
00:20:46.721 05:16:23 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.13fwfAlw0c 00:20:46.721 [2024-04-24 05:16:23.943340] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:46.721 05:16:23 -- target/tls.sh@188 -- # bdevperf_pid=1912229 00:20:46.721 05:16:23 -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:46.721 05:16:23 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:46.721 05:16:23 -- target/tls.sh@191 -- # waitforlisten 1912229 /var/tmp/bdevperf.sock 00:20:46.721 05:16:23 -- common/autotest_common.sh@817 -- # '[' -z 1912229 ']' 00:20:46.721 05:16:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:46.721 05:16:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:46.721 05:16:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:46.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:46.721 05:16:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:46.721 05:16:23 -- common/autotest_common.sh@10 -- # set +x 00:20:46.980 [2024-04-24 05:16:24.004431] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 
00:20:46.980 [2024-04-24 05:16:24.004507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1912229 ] 00:20:46.980 EAL: No free 2048 kB hugepages reported on node 1 00:20:46.980 [2024-04-24 05:16:24.036914] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:46.980 [2024-04-24 05:16:24.064825] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.980 [2024-04-24 05:16:24.149372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:47.240 05:16:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:47.240 05:16:24 -- common/autotest_common.sh@850 -- # return 0 00:20:47.240 05:16:24 -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.13fwfAlw0c 00:20:47.240 [2024-04-24 05:16:24.478386] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:47.240 [2024-04-24 05:16:24.478520] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:47.499 TLSTESTn1 00:20:47.499 05:16:24 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:47.758 05:16:24 -- target/tls.sh@196 -- # tgtconf='{ 00:20:47.758 "subsystems": [ 00:20:47.758 { 00:20:47.758 "subsystem": "keyring", 00:20:47.758 "config": [] 00:20:47.758 }, 00:20:47.758 { 00:20:47.758 "subsystem": "iobuf", 00:20:47.758 "config": [ 00:20:47.758 { 00:20:47.758 "method": "iobuf_set_options", 00:20:47.758 "params": { 
00:20:47.758 "small_pool_count": 8192, 00:20:47.758 "large_pool_count": 1024, 00:20:47.758 "small_bufsize": 8192, 00:20:47.758 "large_bufsize": 135168 00:20:47.758 } 00:20:47.758 } 00:20:47.758 ] 00:20:47.758 }, 00:20:47.758 { 00:20:47.758 "subsystem": "sock", 00:20:47.758 "config": [ 00:20:47.758 { 00:20:47.758 "method": "sock_impl_set_options", 00:20:47.758 "params": { 00:20:47.758 "impl_name": "posix", 00:20:47.758 "recv_buf_size": 2097152, 00:20:47.758 "send_buf_size": 2097152, 00:20:47.758 "enable_recv_pipe": true, 00:20:47.758 "enable_quickack": false, 00:20:47.758 "enable_placement_id": 0, 00:20:47.758 "enable_zerocopy_send_server": true, 00:20:47.758 "enable_zerocopy_send_client": false, 00:20:47.758 "zerocopy_threshold": 0, 00:20:47.758 "tls_version": 0, 00:20:47.758 "enable_ktls": false 00:20:47.758 } 00:20:47.758 }, 00:20:47.758 { 00:20:47.758 "method": "sock_impl_set_options", 00:20:47.758 "params": { 00:20:47.758 "impl_name": "ssl", 00:20:47.758 "recv_buf_size": 4096, 00:20:47.758 "send_buf_size": 4096, 00:20:47.758 "enable_recv_pipe": true, 00:20:47.758 "enable_quickack": false, 00:20:47.758 "enable_placement_id": 0, 00:20:47.758 "enable_zerocopy_send_server": true, 00:20:47.758 "enable_zerocopy_send_client": false, 00:20:47.758 "zerocopy_threshold": 0, 00:20:47.758 "tls_version": 0, 00:20:47.758 "enable_ktls": false 00:20:47.758 } 00:20:47.758 } 00:20:47.758 ] 00:20:47.758 }, 00:20:47.758 { 00:20:47.758 "subsystem": "vmd", 00:20:47.758 "config": [] 00:20:47.758 }, 00:20:47.758 { 00:20:47.758 "subsystem": "accel", 00:20:47.758 "config": [ 00:20:47.758 { 00:20:47.758 "method": "accel_set_options", 00:20:47.758 "params": { 00:20:47.758 "small_cache_size": 128, 00:20:47.758 "large_cache_size": 16, 00:20:47.758 "task_count": 2048, 00:20:47.758 "sequence_count": 2048, 00:20:47.758 "buf_count": 2048 00:20:47.758 } 00:20:47.758 } 00:20:47.758 ] 00:20:47.758 }, 00:20:47.758 { 00:20:47.758 "subsystem": "bdev", 00:20:47.758 "config": [ 00:20:47.758 { 
00:20:47.758 "method": "bdev_set_options", 00:20:47.758 "params": { 00:20:47.758 "bdev_io_pool_size": 65535, 00:20:47.758 "bdev_io_cache_size": 256, 00:20:47.758 "bdev_auto_examine": true, 00:20:47.758 "iobuf_small_cache_size": 128, 00:20:47.758 "iobuf_large_cache_size": 16 00:20:47.758 } 00:20:47.758 }, 00:20:47.758 { 00:20:47.758 "method": "bdev_raid_set_options", 00:20:47.758 "params": { 00:20:47.758 "process_window_size_kb": 1024 00:20:47.758 } 00:20:47.758 }, 00:20:47.758 { 00:20:47.758 "method": "bdev_iscsi_set_options", 00:20:47.758 "params": { 00:20:47.758 "timeout_sec": 30 00:20:47.758 } 00:20:47.758 }, 00:20:47.758 { 00:20:47.758 "method": "bdev_nvme_set_options", 00:20:47.758 "params": { 00:20:47.758 "action_on_timeout": "none", 00:20:47.758 "timeout_us": 0, 00:20:47.758 "timeout_admin_us": 0, 00:20:47.758 "keep_alive_timeout_ms": 10000, 00:20:47.758 "arbitration_burst": 0, 00:20:47.758 "low_priority_weight": 0, 00:20:47.758 "medium_priority_weight": 0, 00:20:47.758 "high_priority_weight": 0, 00:20:47.758 "nvme_adminq_poll_period_us": 10000, 00:20:47.758 "nvme_ioq_poll_period_us": 0, 00:20:47.758 "io_queue_requests": 0, 00:20:47.758 "delay_cmd_submit": true, 00:20:47.758 "transport_retry_count": 4, 00:20:47.758 "bdev_retry_count": 3, 00:20:47.758 "transport_ack_timeout": 0, 00:20:47.758 "ctrlr_loss_timeout_sec": 0, 00:20:47.758 "reconnect_delay_sec": 0, 00:20:47.758 "fast_io_fail_timeout_sec": 0, 00:20:47.758 "disable_auto_failback": false, 00:20:47.758 "generate_uuids": false, 00:20:47.758 "transport_tos": 0, 00:20:47.758 "nvme_error_stat": false, 00:20:47.758 "rdma_srq_size": 0, 00:20:47.758 "io_path_stat": false, 00:20:47.758 "allow_accel_sequence": false, 00:20:47.758 "rdma_max_cq_size": 0, 00:20:47.758 "rdma_cm_event_timeout_ms": 0, 00:20:47.758 "dhchap_digests": [ 00:20:47.758 "sha256", 00:20:47.758 "sha384", 00:20:47.758 "sha512" 00:20:47.758 ], 00:20:47.758 "dhchap_dhgroups": [ 00:20:47.758 "null", 00:20:47.758 "ffdhe2048", 00:20:47.758 
"ffdhe3072", 00:20:47.758 "ffdhe4096", 00:20:47.758 "ffdhe6144", 00:20:47.758 "ffdhe8192" 00:20:47.758 ] 00:20:47.758 } 00:20:47.758 }, 00:20:47.758 { 00:20:47.759 "method": "bdev_nvme_set_hotplug", 00:20:47.759 "params": { 00:20:47.759 "period_us": 100000, 00:20:47.759 "enable": false 00:20:47.759 } 00:20:47.759 }, 00:20:47.759 { 00:20:47.759 "method": "bdev_malloc_create", 00:20:47.759 "params": { 00:20:47.759 "name": "malloc0", 00:20:47.759 "num_blocks": 8192, 00:20:47.759 "block_size": 4096, 00:20:47.759 "physical_block_size": 4096, 00:20:47.759 "uuid": "6cc8ee53-aee1-4e83-b344-37fb85462591", 00:20:47.759 "optimal_io_boundary": 0 00:20:47.759 } 00:20:47.759 }, 00:20:47.759 { 00:20:47.759 "method": "bdev_wait_for_examine" 00:20:47.759 } 00:20:47.759 ] 00:20:47.759 }, 00:20:47.759 { 00:20:47.759 "subsystem": "nbd", 00:20:47.759 "config": [] 00:20:47.759 }, 00:20:47.759 { 00:20:47.759 "subsystem": "scheduler", 00:20:47.759 "config": [ 00:20:47.759 { 00:20:47.759 "method": "framework_set_scheduler", 00:20:47.759 "params": { 00:20:47.759 "name": "static" 00:20:47.759 } 00:20:47.759 } 00:20:47.759 ] 00:20:47.759 }, 00:20:47.759 { 00:20:47.759 "subsystem": "nvmf", 00:20:47.759 "config": [ 00:20:47.759 { 00:20:47.759 "method": "nvmf_set_config", 00:20:47.759 "params": { 00:20:47.759 "discovery_filter": "match_any", 00:20:47.759 "admin_cmd_passthru": { 00:20:47.759 "identify_ctrlr": false 00:20:47.759 } 00:20:47.759 } 00:20:47.759 }, 00:20:47.759 { 00:20:47.759 "method": "nvmf_set_max_subsystems", 00:20:47.759 "params": { 00:20:47.759 "max_subsystems": 1024 00:20:47.759 } 00:20:47.759 }, 00:20:47.759 { 00:20:47.759 "method": "nvmf_set_crdt", 00:20:47.759 "params": { 00:20:47.759 "crdt1": 0, 00:20:47.759 "crdt2": 0, 00:20:47.759 "crdt3": 0 00:20:47.759 } 00:20:47.759 }, 00:20:47.759 { 00:20:47.759 "method": "nvmf_create_transport", 00:20:47.759 "params": { 00:20:47.759 "trtype": "TCP", 00:20:47.759 "max_queue_depth": 128, 00:20:47.759 "max_io_qpairs_per_ctrlr": 127, 
00:20:47.759 "in_capsule_data_size": 4096, 00:20:47.759 "max_io_size": 131072, 00:20:47.759 "io_unit_size": 131072, 00:20:47.759 "max_aq_depth": 128, 00:20:47.759 "num_shared_buffers": 511, 00:20:47.759 "buf_cache_size": 4294967295, 00:20:47.759 "dif_insert_or_strip": false, 00:20:47.759 "zcopy": false, 00:20:47.759 "c2h_success": false, 00:20:47.759 "sock_priority": 0, 00:20:47.759 "abort_timeout_sec": 1, 00:20:47.759 "ack_timeout": 0, 00:20:47.759 "data_wr_pool_size": 0 00:20:47.759 } 00:20:47.759 }, 00:20:47.759 { 00:20:47.759 "method": "nvmf_create_subsystem", 00:20:47.759 "params": { 00:20:47.759 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.759 "allow_any_host": false, 00:20:47.759 "serial_number": "SPDK00000000000001", 00:20:47.759 "model_number": "SPDK bdev Controller", 00:20:47.759 "max_namespaces": 10, 00:20:47.759 "min_cntlid": 1, 00:20:47.759 "max_cntlid": 65519, 00:20:47.759 "ana_reporting": false 00:20:47.759 } 00:20:47.759 }, 00:20:47.759 { 00:20:47.759 "method": "nvmf_subsystem_add_host", 00:20:47.759 "params": { 00:20:47.759 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.759 "host": "nqn.2016-06.io.spdk:host1", 00:20:47.759 "psk": "/tmp/tmp.13fwfAlw0c" 00:20:47.759 } 00:20:47.759 }, 00:20:47.759 { 00:20:47.759 "method": "nvmf_subsystem_add_ns", 00:20:47.759 "params": { 00:20:47.759 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.759 "namespace": { 00:20:47.759 "nsid": 1, 00:20:47.759 "bdev_name": "malloc0", 00:20:47.759 "nguid": "6CC8EE53AEE14E83B34437FB85462591", 00:20:47.759 "uuid": "6cc8ee53-aee1-4e83-b344-37fb85462591", 00:20:47.759 "no_auto_visible": false 00:20:47.759 } 00:20:47.759 } 00:20:47.759 }, 00:20:47.759 { 00:20:47.759 "method": "nvmf_subsystem_add_listener", 00:20:47.759 "params": { 00:20:47.759 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.759 "listen_address": { 00:20:47.759 "trtype": "TCP", 00:20:47.759 "adrfam": "IPv4", 00:20:47.759 "traddr": "10.0.0.2", 00:20:47.759 "trsvcid": "4420" 00:20:47.759 }, 00:20:47.759 "secure_channel": 
true 00:20:47.759 } 00:20:47.759 } 00:20:47.759 ] 00:20:47.759 } 00:20:47.759 ] 00:20:47.759 }' 00:20:47.759 05:16:24 -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:48.021 05:16:25 -- target/tls.sh@197 -- # bdevperfconf='{ 00:20:48.021 "subsystems": [ 00:20:48.021 { 00:20:48.021 "subsystem": "keyring", 00:20:48.021 "config": [] 00:20:48.021 }, 00:20:48.021 { 00:20:48.021 "subsystem": "iobuf", 00:20:48.021 "config": [ 00:20:48.021 { 00:20:48.021 "method": "iobuf_set_options", 00:20:48.021 "params": { 00:20:48.021 "small_pool_count": 8192, 00:20:48.021 "large_pool_count": 1024, 00:20:48.021 "small_bufsize": 8192, 00:20:48.021 "large_bufsize": 135168 00:20:48.021 } 00:20:48.021 } 00:20:48.021 ] 00:20:48.021 }, 00:20:48.021 { 00:20:48.021 "subsystem": "sock", 00:20:48.021 "config": [ 00:20:48.021 { 00:20:48.021 "method": "sock_impl_set_options", 00:20:48.021 "params": { 00:20:48.021 "impl_name": "posix", 00:20:48.021 "recv_buf_size": 2097152, 00:20:48.021 "send_buf_size": 2097152, 00:20:48.021 "enable_recv_pipe": true, 00:20:48.021 "enable_quickack": false, 00:20:48.021 "enable_placement_id": 0, 00:20:48.021 "enable_zerocopy_send_server": true, 00:20:48.021 "enable_zerocopy_send_client": false, 00:20:48.021 "zerocopy_threshold": 0, 00:20:48.021 "tls_version": 0, 00:20:48.021 "enable_ktls": false 00:20:48.021 } 00:20:48.021 }, 00:20:48.021 { 00:20:48.021 "method": "sock_impl_set_options", 00:20:48.021 "params": { 00:20:48.021 "impl_name": "ssl", 00:20:48.021 "recv_buf_size": 4096, 00:20:48.021 "send_buf_size": 4096, 00:20:48.021 "enable_recv_pipe": true, 00:20:48.021 "enable_quickack": false, 00:20:48.021 "enable_placement_id": 0, 00:20:48.021 "enable_zerocopy_send_server": true, 00:20:48.021 "enable_zerocopy_send_client": false, 00:20:48.021 "zerocopy_threshold": 0, 00:20:48.021 "tls_version": 0, 00:20:48.021 "enable_ktls": false 00:20:48.021 } 00:20:48.021 } 00:20:48.021 ] 
00:20:48.021 }, 00:20:48.021 { 00:20:48.021 "subsystem": "vmd", 00:20:48.021 "config": [] 00:20:48.021 }, 00:20:48.021 { 00:20:48.021 "subsystem": "accel", 00:20:48.021 "config": [ 00:20:48.021 { 00:20:48.021 "method": "accel_set_options", 00:20:48.021 "params": { 00:20:48.021 "small_cache_size": 128, 00:20:48.021 "large_cache_size": 16, 00:20:48.021 "task_count": 2048, 00:20:48.021 "sequence_count": 2048, 00:20:48.021 "buf_count": 2048 00:20:48.021 } 00:20:48.021 } 00:20:48.021 ] 00:20:48.021 }, 00:20:48.021 { 00:20:48.021 "subsystem": "bdev", 00:20:48.021 "config": [ 00:20:48.021 { 00:20:48.021 "method": "bdev_set_options", 00:20:48.021 "params": { 00:20:48.021 "bdev_io_pool_size": 65535, 00:20:48.021 "bdev_io_cache_size": 256, 00:20:48.021 "bdev_auto_examine": true, 00:20:48.021 "iobuf_small_cache_size": 128, 00:20:48.021 "iobuf_large_cache_size": 16 00:20:48.021 } 00:20:48.021 }, 00:20:48.021 { 00:20:48.021 "method": "bdev_raid_set_options", 00:20:48.021 "params": { 00:20:48.021 "process_window_size_kb": 1024 00:20:48.021 } 00:20:48.021 }, 00:20:48.021 { 00:20:48.021 "method": "bdev_iscsi_set_options", 00:20:48.021 "params": { 00:20:48.021 "timeout_sec": 30 00:20:48.021 } 00:20:48.021 }, 00:20:48.021 { 00:20:48.021 "method": "bdev_nvme_set_options", 00:20:48.021 "params": { 00:20:48.021 "action_on_timeout": "none", 00:20:48.021 "timeout_us": 0, 00:20:48.021 "timeout_admin_us": 0, 00:20:48.021 "keep_alive_timeout_ms": 10000, 00:20:48.021 "arbitration_burst": 0, 00:20:48.021 "low_priority_weight": 0, 00:20:48.021 "medium_priority_weight": 0, 00:20:48.021 "high_priority_weight": 0, 00:20:48.021 "nvme_adminq_poll_period_us": 10000, 00:20:48.021 "nvme_ioq_poll_period_us": 0, 00:20:48.021 "io_queue_requests": 512, 00:20:48.021 "delay_cmd_submit": true, 00:20:48.021 "transport_retry_count": 4, 00:20:48.021 "bdev_retry_count": 3, 00:20:48.021 "transport_ack_timeout": 0, 00:20:48.021 "ctrlr_loss_timeout_sec": 0, 00:20:48.021 "reconnect_delay_sec": 0, 00:20:48.021 
"fast_io_fail_timeout_sec": 0, 00:20:48.021 "disable_auto_failback": false, 00:20:48.021 "generate_uuids": false, 00:20:48.021 "transport_tos": 0, 00:20:48.021 "nvme_error_stat": false, 00:20:48.021 "rdma_srq_size": 0, 00:20:48.021 "io_path_stat": false, 00:20:48.021 "allow_accel_sequence": false, 00:20:48.021 "rdma_max_cq_size": 0, 00:20:48.021 "rdma_cm_event_timeout_ms": 0, 00:20:48.021 "dhchap_digests": [ 00:20:48.021 "sha256", 00:20:48.021 "sha384", 00:20:48.021 "sha512" 00:20:48.021 ], 00:20:48.021 "dhchap_dhgroups": [ 00:20:48.021 "null", 00:20:48.021 "ffdhe2048", 00:20:48.021 "ffdhe3072", 00:20:48.021 "ffdhe4096", 00:20:48.021 "ffdhe6144", 00:20:48.021 "ffdhe8192" 00:20:48.021 ] 00:20:48.021 } 00:20:48.021 }, 00:20:48.021 { 00:20:48.021 "method": "bdev_nvme_attach_controller", 00:20:48.021 "params": { 00:20:48.021 "name": "TLSTEST", 00:20:48.021 "trtype": "TCP", 00:20:48.021 "adrfam": "IPv4", 00:20:48.021 "traddr": "10.0.0.2", 00:20:48.021 "trsvcid": "4420", 00:20:48.021 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:48.021 "prchk_reftag": false, 00:20:48.021 "prchk_guard": false, 00:20:48.021 "ctrlr_loss_timeout_sec": 0, 00:20:48.021 "reconnect_delay_sec": 0, 00:20:48.021 "fast_io_fail_timeout_sec": 0, 00:20:48.021 "psk": "/tmp/tmp.13fwfAlw0c", 00:20:48.021 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:48.021 "hdgst": false, 00:20:48.021 "ddgst": false 00:20:48.021 } 00:20:48.021 }, 00:20:48.021 { 00:20:48.021 "method": "bdev_nvme_set_hotplug", 00:20:48.021 "params": { 00:20:48.021 "period_us": 100000, 00:20:48.021 "enable": false 00:20:48.021 } 00:20:48.021 }, 00:20:48.021 { 00:20:48.021 "method": "bdev_wait_for_examine" 00:20:48.021 } 00:20:48.021 ] 00:20:48.021 }, 00:20:48.021 { 00:20:48.021 "subsystem": "nbd", 00:20:48.021 "config": [] 00:20:48.021 } 00:20:48.021 ] 00:20:48.021 }' 00:20:48.021 05:16:25 -- target/tls.sh@199 -- # killprocess 1912229 00:20:48.021 05:16:25 -- common/autotest_common.sh@936 -- # '[' -z 1912229 ']' 00:20:48.021 05:16:25 -- 
common/autotest_common.sh@940 -- # kill -0 1912229 00:20:48.021 05:16:25 -- common/autotest_common.sh@941 -- # uname 00:20:48.021 05:16:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:48.021 05:16:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1912229 00:20:48.022 05:16:25 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:48.022 05:16:25 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:48.022 05:16:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1912229' 00:20:48.022 killing process with pid 1912229 00:20:48.022 05:16:25 -- common/autotest_common.sh@955 -- # kill 1912229 00:20:48.022 Received shutdown signal, test time was about 10.000000 seconds 00:20:48.022 00:20:48.022 Latency(us) 00:20:48.022 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:48.022 =================================================================================================================== 00:20:48.022 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:48.022 [2024-04-24 05:16:25.247692] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:48.022 05:16:25 -- common/autotest_common.sh@960 -- # wait 1912229 00:20:48.289 05:16:25 -- target/tls.sh@200 -- # killprocess 1912064 00:20:48.289 05:16:25 -- common/autotest_common.sh@936 -- # '[' -z 1912064 ']' 00:20:48.289 05:16:25 -- common/autotest_common.sh@940 -- # kill -0 1912064 00:20:48.289 05:16:25 -- common/autotest_common.sh@941 -- # uname 00:20:48.289 05:16:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:48.289 05:16:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1912064 00:20:48.289 05:16:25 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:48.289 05:16:25 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:48.289 05:16:25 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 1912064' 00:20:48.289 killing process with pid 1912064 00:20:48.289 05:16:25 -- common/autotest_common.sh@955 -- # kill 1912064 00:20:48.289 [2024-04-24 05:16:25.491559] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:48.289 05:16:25 -- common/autotest_common.sh@960 -- # wait 1912064 00:20:48.548 05:16:25 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:48.548 05:16:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:48.548 05:16:25 -- target/tls.sh@203 -- # echo '{ 00:20:48.548 "subsystems": [ 00:20:48.548 { 00:20:48.548 "subsystem": "keyring", 00:20:48.548 "config": [] 00:20:48.548 }, 00:20:48.548 { 00:20:48.548 "subsystem": "iobuf", 00:20:48.548 "config": [ 00:20:48.548 { 00:20:48.548 "method": "iobuf_set_options", 00:20:48.548 "params": { 00:20:48.548 "small_pool_count": 8192, 00:20:48.548 "large_pool_count": 1024, 00:20:48.548 "small_bufsize": 8192, 00:20:48.548 "large_bufsize": 135168 00:20:48.548 } 00:20:48.548 } 00:20:48.548 ] 00:20:48.548 }, 00:20:48.548 { 00:20:48.548 "subsystem": "sock", 00:20:48.548 "config": [ 00:20:48.548 { 00:20:48.548 "method": "sock_impl_set_options", 00:20:48.548 "params": { 00:20:48.548 "impl_name": "posix", 00:20:48.548 "recv_buf_size": 2097152, 00:20:48.548 "send_buf_size": 2097152, 00:20:48.548 "enable_recv_pipe": true, 00:20:48.548 "enable_quickack": false, 00:20:48.548 "enable_placement_id": 0, 00:20:48.548 "enable_zerocopy_send_server": true, 00:20:48.548 "enable_zerocopy_send_client": false, 00:20:48.548 "zerocopy_threshold": 0, 00:20:48.548 "tls_version": 0, 00:20:48.548 "enable_ktls": false 00:20:48.548 } 00:20:48.548 }, 00:20:48.548 { 00:20:48.548 "method": "sock_impl_set_options", 00:20:48.548 "params": { 00:20:48.548 "impl_name": "ssl", 00:20:48.548 "recv_buf_size": 4096, 00:20:48.548 "send_buf_size": 4096, 00:20:48.549 
"enable_recv_pipe": true, 00:20:48.549 "enable_quickack": false, 00:20:48.549 "enable_placement_id": 0, 00:20:48.549 "enable_zerocopy_send_server": true, 00:20:48.549 "enable_zerocopy_send_client": false, 00:20:48.549 "zerocopy_threshold": 0, 00:20:48.549 "tls_version": 0, 00:20:48.549 "enable_ktls": false 00:20:48.549 } 00:20:48.549 } 00:20:48.549 ] 00:20:48.549 }, 00:20:48.549 { 00:20:48.549 "subsystem": "vmd", 00:20:48.549 "config": [] 00:20:48.549 }, 00:20:48.549 { 00:20:48.549 "subsystem": "accel", 00:20:48.549 "config": [ 00:20:48.549 { 00:20:48.549 "method": "accel_set_options", 00:20:48.549 "params": { 00:20:48.549 "small_cache_size": 128, 00:20:48.549 "large_cache_size": 16, 00:20:48.549 "task_count": 2048, 00:20:48.549 "sequence_count": 2048, 00:20:48.549 "buf_count": 2048 00:20:48.549 } 00:20:48.549 } 00:20:48.549 ] 00:20:48.549 }, 00:20:48.549 { 00:20:48.549 "subsystem": "bdev", 00:20:48.549 "config": [ 00:20:48.549 { 00:20:48.549 "method": "bdev_set_options", 00:20:48.549 "params": { 00:20:48.549 "bdev_io_pool_size": 65535, 00:20:48.549 "bdev_io_cache_size": 256, 00:20:48.549 "bdev_auto_examine": true, 00:20:48.549 "iobuf_small_cache_size": 128, 00:20:48.549 "iobuf_large_cache_size": 16 00:20:48.549 } 00:20:48.549 }, 00:20:48.549 { 00:20:48.549 "method": "bdev_raid_set_options", 00:20:48.549 "params": { 00:20:48.549 "process_window_size_kb": 1024 00:20:48.549 } 00:20:48.549 }, 00:20:48.549 { 00:20:48.549 "method": "bdev_iscsi_set_options", 00:20:48.549 "params": { 00:20:48.549 "timeout_sec": 30 00:20:48.549 } 00:20:48.549 }, 00:20:48.549 { 00:20:48.549 "method": "bdev_nvme_set_options", 00:20:48.549 "params": { 00:20:48.549 "action_on_timeout": "none", 00:20:48.549 "timeout_us": 0, 00:20:48.549 "timeout_admin_us": 0, 00:20:48.549 "keep_alive_timeout_ms": 10000, 00:20:48.549 "arbitration_burst": 0, 00:20:48.549 "low_priority_weight": 0, 00:20:48.549 "medium_priority_weight": 0, 00:20:48.549 "high_priority_weight": 0, 00:20:48.549 
"nvme_adminq_poll_period_us": 10000, 00:20:48.549 "nvme_ioq_poll_period_us": 0, 00:20:48.549 "io_queue_requests": 0, 00:20:48.549 "delay_cmd_submit": true, 00:20:48.549 "transport_retry_count": 4, 00:20:48.549 "bdev_retry_count": 3, 00:20:48.549 "transport_ack_timeout": 0, 00:20:48.549 "ctrlr_loss_timeout_sec": 0, 00:20:48.549 "reconnect_delay_sec": 0, 00:20:48.549 "fast_io_fail_timeout_sec": 0, 00:20:48.549 "disable_auto_failback": false, 00:20:48.549 "generate_uuids": false, 00:20:48.549 "transport_tos": 0, 00:20:48.549 "nvme_error_stat": false, 00:20:48.549 "rdma_srq_size": 0, 00:20:48.549 "io_path_stat": false, 00:20:48.549 "allow_accel_sequence": false, 00:20:48.549 "rdma_max_cq_size": 0, 00:20:48.549 "rdma_cm_event_timeout_ms": 0, 00:20:48.549 "dhchap_digests": [ 00:20:48.549 "sha256", 00:20:48.549 "sha384", 00:20:48.549 "sha512" 00:20:48.549 ], 00:20:48.549 "dhchap_dhgroups": [ 00:20:48.549 "null", 00:20:48.549 "ffdhe2048", 00:20:48.549 "ffdhe3072", 00:20:48.549 "ffdhe4096", 00:20:48.549 "ffdhe6144", 00:20:48.549 "ffdhe8192" 00:20:48.549 ] 00:20:48.549 } 00:20:48.549 }, 00:20:48.549 { 00:20:48.549 "method": "bdev_nvme_set_hotplug", 00:20:48.549 "params": { 00:20:48.549 "period_us": 100000, 00:20:48.549 "enable": false 00:20:48.549 } 00:20:48.549 }, 00:20:48.549 { 00:20:48.549 "method": "bdev_malloc_create", 00:20:48.549 "params": { 00:20:48.549 "name": "malloc0", 00:20:48.549 "num_blocks": 8192, 00:20:48.549 "block_size": 4096, 00:20:48.549 "physical_block_size": 4096, 00:20:48.549 "uuid": "6cc8ee53-aee1-4e83-b344-37fb85462591", 00:20:48.549 "optimal_io_boundary": 0 00:20:48.549 } 00:20:48.549 }, 00:20:48.549 { 00:20:48.549 "method": "bdev_wait_for_examine" 00:20:48.549 } 00:20:48.549 ] 00:20:48.549 }, 00:20:48.549 { 00:20:48.549 "subsystem": "nbd", 00:20:48.549 "config": [] 00:20:48.549 }, 00:20:48.549 { 00:20:48.549 "subsystem": "scheduler", 00:20:48.549 "config": [ 00:20:48.549 { 00:20:48.549 "method": "framework_set_scheduler", 00:20:48.549 "params": { 
00:20:48.549 "name": "static" 00:20:48.549 } 00:20:48.549 } 00:20:48.549 ] 00:20:48.549 }, 00:20:48.549 { 00:20:48.549 "subsystem": "nvmf", 00:20:48.549 "config": [ 00:20:48.549 { 00:20:48.549 "method": "nvmf_set_config", 00:20:48.549 "params": { 00:20:48.549 "discovery_filter": "match_any", 00:20:48.549 "admin_cmd_passthru": { 00:20:48.549 "identify_ctrlr": false 00:20:48.549 } 00:20:48.549 } 00:20:48.549 }, 00:20:48.549 { 00:20:48.549 "method": "nvmf_set_max_subsystems", 00:20:48.549 "params": { 00:20:48.549 "max_subsystems": 1024 00:20:48.549 } 00:20:48.549 }, 00:20:48.549 { 00:20:48.549 "method": "nvmf_set_crdt", 00:20:48.549 "params": { 00:20:48.549 "crdt1": 0, 00:20:48.549 "crdt2": 0, 00:20:48.549 "crdt3": 0 00:20:48.549 } 00:20:48.549 }, 00:20:48.549 { 00:20:48.549 "method": "nvmf_create_transport", 00:20:48.549 "params": { 00:20:48.549 "trtype": "TCP", 00:20:48.549 "max_queue_depth": 128, 00:20:48.549 "max_io_qpairs_per_ctrlr": 127, 00:20:48.549 "in_capsule_data_size": 4096, 00:20:48.549 "max_io_size": 131072, 00:20:48.549 "io_unit_size": 131072, 00:20:48.549 "max_aq_depth": 128, 00:20:48.549 "num_shared_buffers": 511, 00:20:48.549 "buf_cache_size": 4294967295, 00:20:48.549 "dif_insert_or_strip": false, 00:20:48.549 "zcopy": false, 00:20:48.549 "c2h_success": false, 00:20:48.549 "sock_priority": 0, 00:20:48.549 "abort_timeout_sec": 1, 00:20:48.549 "ack_timeout": 0, 00:20:48.549 "data_wr_pool_size": 0 00:20:48.549 } 00:20:48.549 }, 00:20:48.549 { 00:20:48.549 "method": "nvmf_create_subsystem", 00:20:48.549 "params": { 00:20:48.549 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:48.549 "allow_any_host": false, 00:20:48.549 "serial_number": "SPDK00000000000001", 00:20:48.549 "model_number": "SPDK bdev Controller", 00:20:48.549 "max_namespaces": 10, 00:20:48.549 "min_cntlid": 1, 00:20:48.549 "max_cntlid": 65519, 00:20:48.549 "ana_reporting": false 00:20:48.549 } 00:20:48.549 }, 00:20:48.549 { 00:20:48.549 "method": "nvmf_subsystem_add_host", 00:20:48.549 "params": { 
00:20:48.549 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:48.549 "host": "nqn.2016-06.io.spdk:host1", 00:20:48.549 "psk": "/tmp/tmp.13fwfAlw0c" 00:20:48.549 } 00:20:48.549 }, 00:20:48.549 { 00:20:48.549 "method": "nvmf_subsystem_add_ns", 00:20:48.549 "params": { 00:20:48.549 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:48.549 "namespace": { 00:20:48.549 "nsid": 1, 00:20:48.549 "bdev_name": "malloc0", 00:20:48.549 "nguid": "6CC8EE53AEE14E83B34437FB85462591", 00:20:48.549 "uuid": "6cc8ee53-aee1-4e83-b344-37fb85462591", 00:20:48.549 "no_auto_visible": false 00:20:48.549 } 00:20:48.549 } 00:20:48.549 }, 00:20:48.549 { 00:20:48.549 "method": "nvmf_subsystem_add_listener", 00:20:48.549 "params": { 00:20:48.549 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:48.549 "listen_address": { 00:20:48.549 "trtype": "TCP", 00:20:48.549 "adrfam": "IPv4", 00:20:48.549 "traddr": "10.0.0.2", 00:20:48.549 "trsvcid": "4420" 00:20:48.549 }, 00:20:48.549 "secure_channel": true 00:20:48.549 } 00:20:48.549 } 00:20:48.549 ] 00:20:48.549 } 00:20:48.549 ] 00:20:48.549 }' 00:20:48.549 05:16:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:48.549 05:16:25 -- common/autotest_common.sh@10 -- # set +x 00:20:48.549 05:16:25 -- nvmf/common.sh@470 -- # nvmfpid=1912510 00:20:48.549 05:16:25 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:48.549 05:16:25 -- nvmf/common.sh@471 -- # waitforlisten 1912510 00:20:48.549 05:16:25 -- common/autotest_common.sh@817 -- # '[' -z 1912510 ']' 00:20:48.550 05:16:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.550 05:16:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:48.550 05:16:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:48.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:48.550 05:16:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:48.550 05:16:25 -- common/autotest_common.sh@10 -- # set +x 00:20:48.550 [2024-04-24 05:16:25.789419] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:20:48.550 [2024-04-24 05:16:25.789497] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:48.807 EAL: No free 2048 kB hugepages reported on node 1 00:20:48.807 [2024-04-24 05:16:25.826189] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:48.807 [2024-04-24 05:16:25.852834] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.807 [2024-04-24 05:16:25.934381] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:48.807 [2024-04-24 05:16:25.934441] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:48.807 [2024-04-24 05:16:25.934469] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:48.807 [2024-04-24 05:16:25.934480] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:48.807 [2024-04-24 05:16:25.934490] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:48.807 [2024-04-24 05:16:25.934584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:49.065 [2024-04-24 05:16:26.153165] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:49.065 [2024-04-24 05:16:26.169116] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:49.065 [2024-04-24 05:16:26.185165] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:49.065 [2024-04-24 05:16:26.192834] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:49.632 05:16:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:49.632 05:16:26 -- common/autotest_common.sh@850 -- # return 0 00:20:49.632 05:16:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:49.632 05:16:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:49.632 05:16:26 -- common/autotest_common.sh@10 -- # set +x 00:20:49.632 05:16:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:49.632 05:16:26 -- target/tls.sh@207 -- # bdevperf_pid=1912661 00:20:49.632 05:16:26 -- target/tls.sh@208 -- # waitforlisten 1912661 /var/tmp/bdevperf.sock 00:20:49.632 05:16:26 -- common/autotest_common.sh@817 -- # '[' -z 1912661 ']' 00:20:49.632 05:16:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:49.632 05:16:26 -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:49.633 05:16:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:49.633 05:16:26 -- target/tls.sh@204 -- # echo '{ 00:20:49.633 "subsystems": [ 00:20:49.633 { 00:20:49.633 "subsystem": "keyring", 00:20:49.633 "config": [] 00:20:49.633 }, 00:20:49.633 { 00:20:49.633 "subsystem": "iobuf", 00:20:49.633 "config": [ 
00:20:49.633 { 00:20:49.633 "method": "iobuf_set_options", 00:20:49.633 "params": { 00:20:49.633 "small_pool_count": 8192, 00:20:49.633 "large_pool_count": 1024, 00:20:49.633 "small_bufsize": 8192, 00:20:49.633 "large_bufsize": 135168 00:20:49.633 } 00:20:49.633 } 00:20:49.633 ] 00:20:49.633 }, 00:20:49.633 { 00:20:49.633 "subsystem": "sock", 00:20:49.633 "config": [ 00:20:49.633 { 00:20:49.633 "method": "sock_impl_set_options", 00:20:49.633 "params": { 00:20:49.633 "impl_name": "posix", 00:20:49.633 "recv_buf_size": 2097152, 00:20:49.633 "send_buf_size": 2097152, 00:20:49.633 "enable_recv_pipe": true, 00:20:49.633 "enable_quickack": false, 00:20:49.633 "enable_placement_id": 0, 00:20:49.633 "enable_zerocopy_send_server": true, 00:20:49.633 "enable_zerocopy_send_client": false, 00:20:49.633 "zerocopy_threshold": 0, 00:20:49.633 "tls_version": 0, 00:20:49.633 "enable_ktls": false 00:20:49.633 } 00:20:49.633 }, 00:20:49.633 { 00:20:49.633 "method": "sock_impl_set_options", 00:20:49.633 "params": { 00:20:49.633 "impl_name": "ssl", 00:20:49.633 "recv_buf_size": 4096, 00:20:49.633 "send_buf_size": 4096, 00:20:49.633 "enable_recv_pipe": true, 00:20:49.633 "enable_quickack": false, 00:20:49.633 "enable_placement_id": 0, 00:20:49.633 "enable_zerocopy_send_server": true, 00:20:49.633 "enable_zerocopy_send_client": false, 00:20:49.633 "zerocopy_threshold": 0, 00:20:49.633 "tls_version": 0, 00:20:49.633 "enable_ktls": false 00:20:49.633 } 00:20:49.633 } 00:20:49.633 ] 00:20:49.633 }, 00:20:49.633 { 00:20:49.633 "subsystem": "vmd", 00:20:49.633 "config": [] 00:20:49.633 }, 00:20:49.633 { 00:20:49.633 "subsystem": "accel", 00:20:49.633 "config": [ 00:20:49.633 { 00:20:49.633 "method": "accel_set_options", 00:20:49.633 "params": { 00:20:49.633 "small_cache_size": 128, 00:20:49.633 "large_cache_size": 16, 00:20:49.633 "task_count": 2048, 00:20:49.633 "sequence_count": 2048, 00:20:49.633 "buf_count": 2048 00:20:49.633 } 00:20:49.633 } 00:20:49.633 ] 00:20:49.633 }, 00:20:49.633 { 
00:20:49.633 "subsystem": "bdev", 00:20:49.633 "config": [ 00:20:49.633 { 00:20:49.633 "method": "bdev_set_options", 00:20:49.633 "params": { 00:20:49.633 "bdev_io_pool_size": 65535, 00:20:49.633 "bdev_io_cache_size": 256, 00:20:49.633 "bdev_auto_examine": true, 00:20:49.633 "iobuf_small_cache_size": 128, 00:20:49.633 "iobuf_large_cache_size": 16 00:20:49.633 } 00:20:49.633 }, 00:20:49.633 { 00:20:49.633 "method": "bdev_raid_set_options", 00:20:49.633 "params": { 00:20:49.633 "process_window_size_kb": 1024 00:20:49.633 } 00:20:49.633 }, 00:20:49.633 { 00:20:49.633 "method": "bdev_iscsi_set_options", 00:20:49.633 "params": { 00:20:49.633 "timeout_sec": 30 00:20:49.633 } 00:20:49.633 }, 00:20:49.633 { 00:20:49.633 "method": "bdev_nvme_set_options", 00:20:49.633 "params": { 00:20:49.633 "action_on_timeout": "none", 00:20:49.633 "timeout_us": 0, 00:20:49.633 "timeout_admin_us": 0, 00:20:49.633 "keep_alive_timeout_ms": 10000, 00:20:49.633 "arbitration_burst": 0, 00:20:49.633 "low_priority_weight": 0, 00:20:49.633 "medium_priority_weight": 0, 00:20:49.633 "high_priority_weight": 0, 00:20:49.633 "nvme_adminq_poll_period_us": 10000, 00:20:49.633 "nvme_ioq_poll_period_us": 0, 00:20:49.633 "io_queue_requests": 512, 00:20:49.633 "delay_cmd_submit": true, 00:20:49.633 "transport_retry_count": 4, 00:20:49.633 "bdev_retry_count": 3, 00:20:49.633 "transport_ack_timeout": 0, 00:20:49.633 "ctrlr_loss_timeout_sec": 0, 00:20:49.633 "reconnect_delay_sec": 0, 00:20:49.633 "fast_io_fail_timeout_sec": 0, 00:20:49.633 "disable_auto_failback": false, 00:20:49.633 "generate_uuids": false, 00:20:49.633 "transport_tos": 0, 00:20:49.633 "nvme_error_stat": false, 00:20:49.633 "rdma_srq_size": 0, 00:20:49.633 "io_path_stat": false, 00:20:49.633 "allow_accel_sequence": false, 00:20:49.633 "rdma_max_cq_size": 0, 00:20:49.633 "rdma_cm_event_timeout_ms": 0, 00:20:49.633 "dhchap_digests": [ 00:20:49.633 "sha256", 00:20:49.633 "sha384", 00:20:49.633 "sha512" 00:20:49.633 ], 00:20:49.633 
"dhchap_dhgroups": [ 00:20:49.633 "null", 00:20:49.633 "ffdhe2048", 00:20:49.633 "ffdhe3072", 00:20:49.633 "ffdhe4096", 00:20:49.633 "ffdhe6144", 00:20:49.633 "ffdhe8192" 00:20:49.633 ] 00:20:49.633 } 00:20:49.633 }, 00:20:49.633 { 00:20:49.633 "method": "bdev_nvme_attach_controller", 00:20:49.633 "params": { 00:20:49.633 "name": "TLSTEST", 00:20:49.633 "trtype": "TCP", 00:20:49.633 "adrfam": "IPv4", 00:20:49.633 "traddr": "10.0.0.2", 00:20:49.633 "trsvcid": "4420", 00:20:49.633 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:49.633 "prchk_reftag": false, 00:20:49.633 "prchk_guard": false, 00:20:49.633 "ctrlr_loss_timeout_sec": 0, 00:20:49.633 "reconnect_delay_sec": 0, 00:20:49.633 "fast_io_fail_timeout_sec": 0, 00:20:49.633 "psk": "/tmp/tmp.13fwfAlw0c", 00:20:49.633 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:49.633 "hdgst": false, 00:20:49.633 "ddgst": false 00:20:49.633 } 00:20:49.633 }, 00:20:49.633 { 00:20:49.633 "method": "bdev_nvme_set_hotplug", 00:20:49.633 "params": { 00:20:49.633 "period_us": 100000, 00:20:49.633 "enable": false 00:20:49.633 } 00:20:49.633 }, 00:20:49.633 { 00:20:49.633 "method": "bdev_wait_for_examine" 00:20:49.633 } 00:20:49.633 ] 00:20:49.633 }, 00:20:49.633 { 00:20:49.633 "subsystem": "nbd", 00:20:49.633 "config": [] 00:20:49.633 } 00:20:49.633 ] 00:20:49.633 }' 00:20:49.633 05:16:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:49.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:49.633 05:16:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:49.633 05:16:26 -- common/autotest_common.sh@10 -- # set +x 00:20:49.633 [2024-04-24 05:16:26.796416] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 
00:20:49.633 [2024-04-24 05:16:26.796488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1912661 ] 00:20:49.633 EAL: No free 2048 kB hugepages reported on node 1 00:20:49.633 [2024-04-24 05:16:26.827102] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:49.633 [2024-04-24 05:16:26.853811] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.893 [2024-04-24 05:16:26.935856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:49.893 [2024-04-24 05:16:27.096278] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:49.893 [2024-04-24 05:16:27.096428] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:50.460 05:16:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:50.719 05:16:27 -- common/autotest_common.sh@850 -- # return 0 00:20:50.719 05:16:27 -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:50.719 Running I/O for 10 seconds... 
00:21:00.699 00:21:00.699 Latency(us) 00:21:00.699 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:00.699 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:00.699 Verification LBA range: start 0x0 length 0x2000 00:21:00.699 TLSTESTn1 : 10.04 3037.52 11.87 0.00 0.00 42037.01 6456.51 68351.62 00:21:00.699 =================================================================================================================== 00:21:00.699 Total : 3037.52 11.87 0.00 0.00 42037.01 6456.51 68351.62 00:21:00.699 0 00:21:00.699 05:16:37 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:00.699 05:16:37 -- target/tls.sh@214 -- # killprocess 1912661 00:21:00.699 05:16:37 -- common/autotest_common.sh@936 -- # '[' -z 1912661 ']' 00:21:00.699 05:16:37 -- common/autotest_common.sh@940 -- # kill -0 1912661 00:21:00.699 05:16:37 -- common/autotest_common.sh@941 -- # uname 00:21:00.699 05:16:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:00.699 05:16:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1912661 00:21:00.699 05:16:37 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:00.699 05:16:37 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:00.699 05:16:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1912661' 00:21:00.699 killing process with pid 1912661 00:21:00.699 05:16:37 -- common/autotest_common.sh@955 -- # kill 1912661 00:21:00.699 Received shutdown signal, test time was about 10.000000 seconds 00:21:00.699 00:21:00.699 Latency(us) 00:21:00.699 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:00.699 =================================================================================================================== 00:21:00.699 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:00.700 [2024-04-24 05:16:37.957798] app.c: 937:log_deprecation_hits: *WARNING*: 
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:00.700 05:16:37 -- common/autotest_common.sh@960 -- # wait 1912661 00:21:00.959 05:16:38 -- target/tls.sh@215 -- # killprocess 1912510 00:21:00.959 05:16:38 -- common/autotest_common.sh@936 -- # '[' -z 1912510 ']' 00:21:00.959 05:16:38 -- common/autotest_common.sh@940 -- # kill -0 1912510 00:21:00.959 05:16:38 -- common/autotest_common.sh@941 -- # uname 00:21:00.959 05:16:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:00.959 05:16:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1912510 00:21:00.959 05:16:38 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:00.959 05:16:38 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:00.959 05:16:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1912510' 00:21:00.959 killing process with pid 1912510 00:21:00.959 05:16:38 -- common/autotest_common.sh@955 -- # kill 1912510 00:21:00.959 [2024-04-24 05:16:38.214448] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:00.959 05:16:38 -- common/autotest_common.sh@960 -- # wait 1912510 00:21:01.217 05:16:38 -- target/tls.sh@218 -- # nvmfappstart 00:21:01.217 05:16:38 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:01.217 05:16:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:01.217 05:16:38 -- common/autotest_common.sh@10 -- # set +x 00:21:01.217 05:16:38 -- nvmf/common.sh@470 -- # nvmfpid=1913992 00:21:01.217 05:16:38 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:01.217 05:16:38 -- nvmf/common.sh@471 -- # waitforlisten 1913992 00:21:01.217 05:16:38 -- common/autotest_common.sh@817 -- # '[' -z 1913992 ']' 00:21:01.217 05:16:38 -- common/autotest_common.sh@821 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:21:01.217 05:16:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:01.217 05:16:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:01.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:01.217 05:16:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:01.217 05:16:38 -- common/autotest_common.sh@10 -- # set +x 00:21:01.476 [2024-04-24 05:16:38.511064] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:21:01.476 [2024-04-24 05:16:38.511148] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:01.476 EAL: No free 2048 kB hugepages reported on node 1 00:21:01.476 [2024-04-24 05:16:38.547558] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:01.476 [2024-04-24 05:16:38.579311] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.476 [2024-04-24 05:16:38.664594] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:01.476 [2024-04-24 05:16:38.664673] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:01.476 [2024-04-24 05:16:38.664691] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:01.476 [2024-04-24 05:16:38.664704] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:01.477 [2024-04-24 05:16:38.664717] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:01.477 [2024-04-24 05:16:38.664754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.735 05:16:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:01.735 05:16:38 -- common/autotest_common.sh@850 -- # return 0 00:21:01.735 05:16:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:01.735 05:16:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:01.735 05:16:38 -- common/autotest_common.sh@10 -- # set +x 00:21:01.735 05:16:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:01.735 05:16:38 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.13fwfAlw0c 00:21:01.735 05:16:38 -- target/tls.sh@49 -- # local key=/tmp/tmp.13fwfAlw0c 00:21:01.735 05:16:38 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:01.993 [2024-04-24 05:16:39.075108] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:01.993 05:16:39 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:02.251 05:16:39 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:02.509 [2024-04-24 05:16:39.568440] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:02.509 [2024-04-24 05:16:39.568696] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:02.509 05:16:39 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:02.768 malloc0 00:21:02.768 05:16:39 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
00:21:03.026 05:16:40 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.13fwfAlw0c 00:21:03.285 [2024-04-24 05:16:40.345618] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:03.285 05:16:40 -- target/tls.sh@222 -- # bdevperf_pid=1914270 00:21:03.285 05:16:40 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:03.285 05:16:40 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:03.285 05:16:40 -- target/tls.sh@225 -- # waitforlisten 1914270 /var/tmp/bdevperf.sock 00:21:03.285 05:16:40 -- common/autotest_common.sh@817 -- # '[' -z 1914270 ']' 00:21:03.285 05:16:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:03.285 05:16:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:03.285 05:16:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:03.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:03.285 05:16:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:03.285 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:21:03.285 [2024-04-24 05:16:40.409168] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 
00:21:03.285 [2024-04-24 05:16:40.409241] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1914270 ] 00:21:03.285 EAL: No free 2048 kB hugepages reported on node 1 00:21:03.285 [2024-04-24 05:16:40.443459] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:03.285 [2024-04-24 05:16:40.475557] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.543 [2024-04-24 05:16:40.564930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:03.543 05:16:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:03.543 05:16:40 -- common/autotest_common.sh@850 -- # return 0 00:21:03.543 05:16:40 -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.13fwfAlw0c 00:21:03.801 05:16:40 -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:04.059 [2024-04-24 05:16:41.139272] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:04.059 nvme0n1 00:21:04.059 05:16:41 -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:04.319 Running I/O for 1 seconds... 
00:21:05.256 00:21:05.256 Latency(us) 00:21:05.256 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.256 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:05.256 Verification LBA range: start 0x0 length 0x2000 00:21:05.256 nvme0n1 : 1.05 2833.41 11.07 0.00 0.00 44164.23 8107.05 82721.00 00:21:05.256 =================================================================================================================== 00:21:05.256 Total : 2833.41 11.07 0.00 0.00 44164.23 8107.05 82721.00 00:21:05.256 0 00:21:05.256 05:16:42 -- target/tls.sh@234 -- # killprocess 1914270 00:21:05.256 05:16:42 -- common/autotest_common.sh@936 -- # '[' -z 1914270 ']' 00:21:05.256 05:16:42 -- common/autotest_common.sh@940 -- # kill -0 1914270 00:21:05.256 05:16:42 -- common/autotest_common.sh@941 -- # uname 00:21:05.256 05:16:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:05.256 05:16:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1914270 00:21:05.256 05:16:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:05.256 05:16:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:05.256 05:16:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1914270' 00:21:05.256 killing process with pid 1914270 00:21:05.256 05:16:42 -- common/autotest_common.sh@955 -- # kill 1914270 00:21:05.256 Received shutdown signal, test time was about 1.000000 seconds 00:21:05.256 00:21:05.256 Latency(us) 00:21:05.256 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.256 =================================================================================================================== 00:21:05.256 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:05.256 05:16:42 -- common/autotest_common.sh@960 -- # wait 1914270 00:21:05.516 05:16:42 -- target/tls.sh@235 -- # killprocess 1913992 00:21:05.516 05:16:42 -- common/autotest_common.sh@936 -- # 
'[' -z 1913992 ']' 00:21:05.516 05:16:42 -- common/autotest_common.sh@940 -- # kill -0 1913992 00:21:05.516 05:16:42 -- common/autotest_common.sh@941 -- # uname 00:21:05.516 05:16:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:05.516 05:16:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1913992 00:21:05.516 05:16:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:05.516 05:16:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:05.516 05:16:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1913992' 00:21:05.516 killing process with pid 1913992 00:21:05.516 05:16:42 -- common/autotest_common.sh@955 -- # kill 1913992 00:21:05.516 [2024-04-24 05:16:42.682921] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:05.516 05:16:42 -- common/autotest_common.sh@960 -- # wait 1913992 00:21:05.773 05:16:42 -- target/tls.sh@238 -- # nvmfappstart 00:21:05.773 05:16:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:05.773 05:16:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:05.773 05:16:42 -- common/autotest_common.sh@10 -- # set +x 00:21:05.773 05:16:42 -- nvmf/common.sh@470 -- # nvmfpid=1914551 00:21:05.773 05:16:42 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:05.773 05:16:42 -- nvmf/common.sh@471 -- # waitforlisten 1914551 00:21:05.773 05:16:42 -- common/autotest_common.sh@817 -- # '[' -z 1914551 ']' 00:21:05.773 05:16:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:05.773 05:16:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:05.773 05:16:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:05.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:05.773 05:16:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:05.773 05:16:42 -- common/autotest_common.sh@10 -- # set +x 00:21:05.773 [2024-04-24 05:16:42.988716] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:21:05.773 [2024-04-24 05:16:42.988794] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:05.773 EAL: No free 2048 kB hugepages reported on node 1 00:21:05.773 [2024-04-24 05:16:43.025319] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:06.031 [2024-04-24 05:16:43.057262] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.031 [2024-04-24 05:16:43.142331] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:06.031 [2024-04-24 05:16:43.142400] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:06.031 [2024-04-24 05:16:43.142417] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:06.031 [2024-04-24 05:16:43.142431] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:06.031 [2024-04-24 05:16:43.142443] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:06.031 [2024-04-24 05:16:43.142485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:06.031 05:16:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:06.031 05:16:43 -- common/autotest_common.sh@850 -- # return 0 00:21:06.031 05:16:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:06.031 05:16:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:06.031 05:16:43 -- common/autotest_common.sh@10 -- # set +x 00:21:06.031 05:16:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:06.031 05:16:43 -- target/tls.sh@239 -- # rpc_cmd 00:21:06.031 05:16:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:06.031 05:16:43 -- common/autotest_common.sh@10 -- # set +x 00:21:06.031 [2024-04-24 05:16:43.280321] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:06.031 malloc0 00:21:06.289 [2024-04-24 05:16:43.312064] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:06.289 [2024-04-24 05:16:43.312312] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:06.289 05:16:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:06.289 05:16:43 -- target/tls.sh@252 -- # bdevperf_pid=1914575 00:21:06.289 05:16:43 -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:06.289 05:16:43 -- target/tls.sh@254 -- # waitforlisten 1914575 /var/tmp/bdevperf.sock 00:21:06.289 05:16:43 -- common/autotest_common.sh@817 -- # '[' -z 1914575 ']' 00:21:06.289 05:16:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:06.289 05:16:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:06.289 05:16:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:21:06.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:06.289 05:16:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:06.289 05:16:43 -- common/autotest_common.sh@10 -- # set +x 00:21:06.289 [2024-04-24 05:16:43.381465] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:21:06.289 [2024-04-24 05:16:43.381540] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1914575 ] 00:21:06.289 EAL: No free 2048 kB hugepages reported on node 1 00:21:06.289 [2024-04-24 05:16:43.413230] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:06.289 [2024-04-24 05:16:43.443111] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.289 [2024-04-24 05:16:43.538198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:06.547 05:16:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:06.547 05:16:43 -- common/autotest_common.sh@850 -- # return 0 00:21:06.547 05:16:43 -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.13fwfAlw0c 00:21:06.807 05:16:43 -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:07.066 [2024-04-24 05:16:44.109939] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:07.066 nvme0n1 00:21:07.066 05:16:44 -- target/tls.sh@260 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:07.066 Running I/O for 1 seconds... 00:21:08.447 00:21:08.447 Latency(us) 00:21:08.447 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:08.447 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:08.447 Verification LBA range: start 0x0 length 0x2000 00:21:08.447 nvme0n1 : 1.04 2844.46 11.11 0.00 0.00 44241.05 6359.42 71458.51 00:21:08.447 =================================================================================================================== 00:21:08.448 Total : 2844.46 11.11 0.00 0.00 44241.05 6359.42 71458.51 00:21:08.448 0 00:21:08.448 05:16:45 -- target/tls.sh@263 -- # rpc_cmd save_config 00:21:08.448 05:16:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:08.448 05:16:45 -- common/autotest_common.sh@10 -- # set +x 00:21:08.448 05:16:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:08.448 05:16:45 -- target/tls.sh@263 -- # tgtcfg='{ 00:21:08.448 "subsystems": [ 00:21:08.448 { 00:21:08.448 "subsystem": "keyring", 00:21:08.448 "config": [ 00:21:08.448 { 00:21:08.448 "method": "keyring_file_add_key", 00:21:08.448 "params": { 00:21:08.448 "name": "key0", 00:21:08.448 "path": "/tmp/tmp.13fwfAlw0c" 00:21:08.448 } 00:21:08.448 } 00:21:08.448 ] 00:21:08.448 }, 00:21:08.448 { 00:21:08.448 "subsystem": "iobuf", 00:21:08.448 "config": [ 00:21:08.448 { 00:21:08.448 "method": "iobuf_set_options", 00:21:08.448 "params": { 00:21:08.448 "small_pool_count": 8192, 00:21:08.448 "large_pool_count": 1024, 00:21:08.448 "small_bufsize": 8192, 00:21:08.448 "large_bufsize": 135168 00:21:08.448 } 00:21:08.448 } 00:21:08.448 ] 00:21:08.448 }, 00:21:08.448 { 00:21:08.448 "subsystem": "sock", 00:21:08.448 "config": [ 00:21:08.448 { 00:21:08.448 "method": "sock_impl_set_options", 00:21:08.448 "params": { 00:21:08.448 "impl_name": "posix", 00:21:08.448 "recv_buf_size": 
2097152, 00:21:08.448 "send_buf_size": 2097152, 00:21:08.448 "enable_recv_pipe": true, 00:21:08.448 "enable_quickack": false, 00:21:08.448 "enable_placement_id": 0, 00:21:08.448 "enable_zerocopy_send_server": true, 00:21:08.448 "enable_zerocopy_send_client": false, 00:21:08.448 "zerocopy_threshold": 0, 00:21:08.448 "tls_version": 0, 00:21:08.448 "enable_ktls": false 00:21:08.448 } 00:21:08.448 }, 00:21:08.448 { 00:21:08.448 "method": "sock_impl_set_options", 00:21:08.448 "params": { 00:21:08.448 "impl_name": "ssl", 00:21:08.448 "recv_buf_size": 4096, 00:21:08.448 "send_buf_size": 4096, 00:21:08.448 "enable_recv_pipe": true, 00:21:08.448 "enable_quickack": false, 00:21:08.448 "enable_placement_id": 0, 00:21:08.448 "enable_zerocopy_send_server": true, 00:21:08.448 "enable_zerocopy_send_client": false, 00:21:08.448 "zerocopy_threshold": 0, 00:21:08.448 "tls_version": 0, 00:21:08.448 "enable_ktls": false 00:21:08.448 } 00:21:08.448 } 00:21:08.448 ] 00:21:08.448 }, 00:21:08.448 { 00:21:08.448 "subsystem": "vmd", 00:21:08.448 "config": [] 00:21:08.448 }, 00:21:08.448 { 00:21:08.448 "subsystem": "accel", 00:21:08.448 "config": [ 00:21:08.448 { 00:21:08.448 "method": "accel_set_options", 00:21:08.448 "params": { 00:21:08.448 "small_cache_size": 128, 00:21:08.448 "large_cache_size": 16, 00:21:08.448 "task_count": 2048, 00:21:08.448 "sequence_count": 2048, 00:21:08.448 "buf_count": 2048 00:21:08.448 } 00:21:08.448 } 00:21:08.448 ] 00:21:08.448 }, 00:21:08.448 { 00:21:08.448 "subsystem": "bdev", 00:21:08.448 "config": [ 00:21:08.448 { 00:21:08.448 "method": "bdev_set_options", 00:21:08.448 "params": { 00:21:08.448 "bdev_io_pool_size": 65535, 00:21:08.448 "bdev_io_cache_size": 256, 00:21:08.448 "bdev_auto_examine": true, 00:21:08.448 "iobuf_small_cache_size": 128, 00:21:08.448 "iobuf_large_cache_size": 16 00:21:08.448 } 00:21:08.448 }, 00:21:08.448 { 00:21:08.448 "method": "bdev_raid_set_options", 00:21:08.448 "params": { 00:21:08.448 "process_window_size_kb": 1024 
00:21:08.448 } 00:21:08.448 }, 00:21:08.448 { 00:21:08.448 "method": "bdev_iscsi_set_options", 00:21:08.448 "params": { 00:21:08.448 "timeout_sec": 30 00:21:08.448 } 00:21:08.448 }, 00:21:08.448 { 00:21:08.448 "method": "bdev_nvme_set_options", 00:21:08.448 "params": { 00:21:08.448 "action_on_timeout": "none", 00:21:08.448 "timeout_us": 0, 00:21:08.448 "timeout_admin_us": 0, 00:21:08.448 "keep_alive_timeout_ms": 10000, 00:21:08.448 "arbitration_burst": 0, 00:21:08.448 "low_priority_weight": 0, 00:21:08.448 "medium_priority_weight": 0, 00:21:08.448 "high_priority_weight": 0, 00:21:08.448 "nvme_adminq_poll_period_us": 10000, 00:21:08.448 "nvme_ioq_poll_period_us": 0, 00:21:08.448 "io_queue_requests": 0, 00:21:08.448 "delay_cmd_submit": true, 00:21:08.448 "transport_retry_count": 4, 00:21:08.448 "bdev_retry_count": 3, 00:21:08.448 "transport_ack_timeout": 0, 00:21:08.448 "ctrlr_loss_timeout_sec": 0, 00:21:08.448 "reconnect_delay_sec": 0, 00:21:08.448 "fast_io_fail_timeout_sec": 0, 00:21:08.448 "disable_auto_failback": false, 00:21:08.448 "generate_uuids": false, 00:21:08.448 "transport_tos": 0, 00:21:08.448 "nvme_error_stat": false, 00:21:08.448 "rdma_srq_size": 0, 00:21:08.448 "io_path_stat": false, 00:21:08.448 "allow_accel_sequence": false, 00:21:08.448 "rdma_max_cq_size": 0, 00:21:08.448 "rdma_cm_event_timeout_ms": 0, 00:21:08.448 "dhchap_digests": [ 00:21:08.448 "sha256", 00:21:08.448 "sha384", 00:21:08.448 "sha512" 00:21:08.448 ], 00:21:08.448 "dhchap_dhgroups": [ 00:21:08.448 "null", 00:21:08.448 "ffdhe2048", 00:21:08.448 "ffdhe3072", 00:21:08.448 "ffdhe4096", 00:21:08.448 "ffdhe6144", 00:21:08.448 "ffdhe8192" 00:21:08.448 ] 00:21:08.448 } 00:21:08.448 }, 00:21:08.448 { 00:21:08.448 "method": "bdev_nvme_set_hotplug", 00:21:08.448 "params": { 00:21:08.448 "period_us": 100000, 00:21:08.448 "enable": false 00:21:08.448 } 00:21:08.448 }, 00:21:08.448 { 00:21:08.448 "method": "bdev_malloc_create", 00:21:08.448 "params": { 00:21:08.448 "name": "malloc0", 00:21:08.448 
"num_blocks": 8192, 00:21:08.448 "block_size": 4096, 00:21:08.448 "physical_block_size": 4096, 00:21:08.448 "uuid": "4271d250-26e5-4df9-96c9-9cb7e260fdcb", 00:21:08.448 "optimal_io_boundary": 0 00:21:08.448 } 00:21:08.448 }, 00:21:08.448 { 00:21:08.448 "method": "bdev_wait_for_examine" 00:21:08.448 } 00:21:08.448 ] 00:21:08.448 }, 00:21:08.448 { 00:21:08.448 "subsystem": "nbd", 00:21:08.448 "config": [] 00:21:08.448 }, 00:21:08.448 { 00:21:08.448 "subsystem": "scheduler", 00:21:08.448 "config": [ 00:21:08.448 { 00:21:08.448 "method": "framework_set_scheduler", 00:21:08.448 "params": { 00:21:08.448 "name": "static" 00:21:08.448 } 00:21:08.448 } 00:21:08.448 ] 00:21:08.448 }, 00:21:08.448 { 00:21:08.448 "subsystem": "nvmf", 00:21:08.448 "config": [ 00:21:08.448 { 00:21:08.448 "method": "nvmf_set_config", 00:21:08.448 "params": { 00:21:08.448 "discovery_filter": "match_any", 00:21:08.448 "admin_cmd_passthru": { 00:21:08.448 "identify_ctrlr": false 00:21:08.448 } 00:21:08.448 } 00:21:08.448 }, 00:21:08.448 { 00:21:08.448 "method": "nvmf_set_max_subsystems", 00:21:08.448 "params": { 00:21:08.448 "max_subsystems": 1024 00:21:08.448 } 00:21:08.448 }, 00:21:08.448 { 00:21:08.448 "method": "nvmf_set_crdt", 00:21:08.448 "params": { 00:21:08.448 "crdt1": 0, 00:21:08.448 "crdt2": 0, 00:21:08.448 "crdt3": 0 00:21:08.448 } 00:21:08.448 }, 00:21:08.448 { 00:21:08.448 "method": "nvmf_create_transport", 00:21:08.448 "params": { 00:21:08.448 "trtype": "TCP", 00:21:08.448 "max_queue_depth": 128, 00:21:08.448 "max_io_qpairs_per_ctrlr": 127, 00:21:08.448 "in_capsule_data_size": 4096, 00:21:08.448 "max_io_size": 131072, 00:21:08.448 "io_unit_size": 131072, 00:21:08.448 "max_aq_depth": 128, 00:21:08.448 "num_shared_buffers": 511, 00:21:08.448 "buf_cache_size": 4294967295, 00:21:08.448 "dif_insert_or_strip": false, 00:21:08.448 "zcopy": false, 00:21:08.448 "c2h_success": false, 00:21:08.448 "sock_priority": 0, 00:21:08.448 "abort_timeout_sec": 1, 00:21:08.448 "ack_timeout": 0, 
00:21:08.448 "data_wr_pool_size": 0 00:21:08.448 } 00:21:08.448 }, 00:21:08.448 { 00:21:08.448 "method": "nvmf_create_subsystem", 00:21:08.448 "params": { 00:21:08.448 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.448 "allow_any_host": false, 00:21:08.448 "serial_number": "00000000000000000000", 00:21:08.448 "model_number": "SPDK bdev Controller", 00:21:08.448 "max_namespaces": 32, 00:21:08.448 "min_cntlid": 1, 00:21:08.448 "max_cntlid": 65519, 00:21:08.448 "ana_reporting": false 00:21:08.448 } 00:21:08.448 }, 00:21:08.448 { 00:21:08.448 "method": "nvmf_subsystem_add_host", 00:21:08.448 "params": { 00:21:08.448 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.448 "host": "nqn.2016-06.io.spdk:host1", 00:21:08.448 "psk": "key0" 00:21:08.448 } 00:21:08.448 }, 00:21:08.448 { 00:21:08.448 "method": "nvmf_subsystem_add_ns", 00:21:08.448 "params": { 00:21:08.448 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.448 "namespace": { 00:21:08.448 "nsid": 1, 00:21:08.448 "bdev_name": "malloc0", 00:21:08.448 "nguid": "4271D25026E54DF996C99CB7E260FDCB", 00:21:08.448 "uuid": "4271d250-26e5-4df9-96c9-9cb7e260fdcb", 00:21:08.448 "no_auto_visible": false 00:21:08.448 } 00:21:08.448 } 00:21:08.448 }, 00:21:08.448 { 00:21:08.448 "method": "nvmf_subsystem_add_listener", 00:21:08.448 "params": { 00:21:08.448 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.448 "listen_address": { 00:21:08.448 "trtype": "TCP", 00:21:08.448 "adrfam": "IPv4", 00:21:08.448 "traddr": "10.0.0.2", 00:21:08.449 "trsvcid": "4420" 00:21:08.449 }, 00:21:08.449 "secure_channel": true 00:21:08.449 } 00:21:08.449 } 00:21:08.449 ] 00:21:08.449 } 00:21:08.449 ] 00:21:08.449 }' 00:21:08.449 05:16:45 -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:08.708 05:16:45 -- target/tls.sh@264 -- # bperfcfg='{ 00:21:08.708 "subsystems": [ 00:21:08.708 { 00:21:08.708 "subsystem": "keyring", 00:21:08.708 "config": [ 00:21:08.708 { 00:21:08.708 "method": 
"keyring_file_add_key", 00:21:08.708 "params": { 00:21:08.708 "name": "key0", 00:21:08.708 "path": "/tmp/tmp.13fwfAlw0c" 00:21:08.708 } 00:21:08.708 } 00:21:08.708 ] 00:21:08.708 }, 00:21:08.708 { 00:21:08.708 "subsystem": "iobuf", 00:21:08.708 "config": [ 00:21:08.708 { 00:21:08.708 "method": "iobuf_set_options", 00:21:08.708 "params": { 00:21:08.708 "small_pool_count": 8192, 00:21:08.708 "large_pool_count": 1024, 00:21:08.708 "small_bufsize": 8192, 00:21:08.708 "large_bufsize": 135168 00:21:08.708 } 00:21:08.708 } 00:21:08.708 ] 00:21:08.708 }, 00:21:08.708 { 00:21:08.708 "subsystem": "sock", 00:21:08.708 "config": [ 00:21:08.708 { 00:21:08.708 "method": "sock_impl_set_options", 00:21:08.708 "params": { 00:21:08.708 "impl_name": "posix", 00:21:08.708 "recv_buf_size": 2097152, 00:21:08.708 "send_buf_size": 2097152, 00:21:08.708 "enable_recv_pipe": true, 00:21:08.708 "enable_quickack": false, 00:21:08.708 "enable_placement_id": 0, 00:21:08.708 "enable_zerocopy_send_server": true, 00:21:08.708 "enable_zerocopy_send_client": false, 00:21:08.708 "zerocopy_threshold": 0, 00:21:08.708 "tls_version": 0, 00:21:08.708 "enable_ktls": false 00:21:08.708 } 00:21:08.708 }, 00:21:08.708 { 00:21:08.708 "method": "sock_impl_set_options", 00:21:08.708 "params": { 00:21:08.708 "impl_name": "ssl", 00:21:08.708 "recv_buf_size": 4096, 00:21:08.708 "send_buf_size": 4096, 00:21:08.708 "enable_recv_pipe": true, 00:21:08.708 "enable_quickack": false, 00:21:08.708 "enable_placement_id": 0, 00:21:08.708 "enable_zerocopy_send_server": true, 00:21:08.708 "enable_zerocopy_send_client": false, 00:21:08.708 "zerocopy_threshold": 0, 00:21:08.708 "tls_version": 0, 00:21:08.708 "enable_ktls": false 00:21:08.708 } 00:21:08.708 } 00:21:08.708 ] 00:21:08.708 }, 00:21:08.708 { 00:21:08.708 "subsystem": "vmd", 00:21:08.708 "config": [] 00:21:08.708 }, 00:21:08.708 { 00:21:08.708 "subsystem": "accel", 00:21:08.708 "config": [ 00:21:08.708 { 00:21:08.708 "method": "accel_set_options", 00:21:08.708 
"params": { 00:21:08.708 "small_cache_size": 128, 00:21:08.708 "large_cache_size": 16, 00:21:08.708 "task_count": 2048, 00:21:08.708 "sequence_count": 2048, 00:21:08.708 "buf_count": 2048 00:21:08.708 } 00:21:08.708 } 00:21:08.708 ] 00:21:08.708 }, 00:21:08.708 { 00:21:08.708 "subsystem": "bdev", 00:21:08.708 "config": [ 00:21:08.708 { 00:21:08.708 "method": "bdev_set_options", 00:21:08.708 "params": { 00:21:08.708 "bdev_io_pool_size": 65535, 00:21:08.708 "bdev_io_cache_size": 256, 00:21:08.708 "bdev_auto_examine": true, 00:21:08.708 "iobuf_small_cache_size": 128, 00:21:08.708 "iobuf_large_cache_size": 16 00:21:08.708 } 00:21:08.708 }, 00:21:08.708 { 00:21:08.708 "method": "bdev_raid_set_options", 00:21:08.708 "params": { 00:21:08.708 "process_window_size_kb": 1024 00:21:08.708 } 00:21:08.708 }, 00:21:08.708 { 00:21:08.708 "method": "bdev_iscsi_set_options", 00:21:08.708 "params": { 00:21:08.708 "timeout_sec": 30 00:21:08.708 } 00:21:08.708 }, 00:21:08.708 { 00:21:08.708 "method": "bdev_nvme_set_options", 00:21:08.708 "params": { 00:21:08.708 "action_on_timeout": "none", 00:21:08.708 "timeout_us": 0, 00:21:08.708 "timeout_admin_us": 0, 00:21:08.708 "keep_alive_timeout_ms": 10000, 00:21:08.708 "arbitration_burst": 0, 00:21:08.708 "low_priority_weight": 0, 00:21:08.708 "medium_priority_weight": 0, 00:21:08.708 "high_priority_weight": 0, 00:21:08.708 "nvme_adminq_poll_period_us": 10000, 00:21:08.708 "nvme_ioq_poll_period_us": 0, 00:21:08.708 "io_queue_requests": 512, 00:21:08.708 "delay_cmd_submit": true, 00:21:08.708 "transport_retry_count": 4, 00:21:08.708 "bdev_retry_count": 3, 00:21:08.708 "transport_ack_timeout": 0, 00:21:08.708 "ctrlr_loss_timeout_sec": 0, 00:21:08.708 "reconnect_delay_sec": 0, 00:21:08.708 "fast_io_fail_timeout_sec": 0, 00:21:08.708 "disable_auto_failback": false, 00:21:08.708 "generate_uuids": false, 00:21:08.708 "transport_tos": 0, 00:21:08.708 "nvme_error_stat": false, 00:21:08.708 "rdma_srq_size": 0, 00:21:08.708 "io_path_stat": false, 
00:21:08.708 "allow_accel_sequence": false, 00:21:08.708 "rdma_max_cq_size": 0, 00:21:08.708 "rdma_cm_event_timeout_ms": 0, 00:21:08.708 "dhchap_digests": [ 00:21:08.708 "sha256", 00:21:08.708 "sha384", 00:21:08.708 "sha512" 00:21:08.708 ], 00:21:08.708 "dhchap_dhgroups": [ 00:21:08.708 "null", 00:21:08.708 "ffdhe2048", 00:21:08.708 "ffdhe3072", 00:21:08.708 "ffdhe4096", 00:21:08.708 "ffdhe6144", 00:21:08.708 "ffdhe8192" 00:21:08.708 ] 00:21:08.708 } 00:21:08.708 }, 00:21:08.708 { 00:21:08.708 "method": "bdev_nvme_attach_controller", 00:21:08.708 "params": { 00:21:08.708 "name": "nvme0", 00:21:08.708 "trtype": "TCP", 00:21:08.708 "adrfam": "IPv4", 00:21:08.708 "traddr": "10.0.0.2", 00:21:08.708 "trsvcid": "4420", 00:21:08.708 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.708 "prchk_reftag": false, 00:21:08.708 "prchk_guard": false, 00:21:08.708 "ctrlr_loss_timeout_sec": 0, 00:21:08.708 "reconnect_delay_sec": 0, 00:21:08.708 "fast_io_fail_timeout_sec": 0, 00:21:08.708 "psk": "key0", 00:21:08.708 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:08.708 "hdgst": false, 00:21:08.708 "ddgst": false 00:21:08.708 } 00:21:08.708 }, 00:21:08.708 { 00:21:08.708 "method": "bdev_nvme_set_hotplug", 00:21:08.708 "params": { 00:21:08.708 "period_us": 100000, 00:21:08.708 "enable": false 00:21:08.708 } 00:21:08.708 }, 00:21:08.708 { 00:21:08.708 "method": "bdev_enable_histogram", 00:21:08.708 "params": { 00:21:08.708 "name": "nvme0n1", 00:21:08.708 "enable": true 00:21:08.708 } 00:21:08.708 }, 00:21:08.708 { 00:21:08.708 "method": "bdev_wait_for_examine" 00:21:08.708 } 00:21:08.709 ] 00:21:08.709 }, 00:21:08.709 { 00:21:08.709 "subsystem": "nbd", 00:21:08.709 "config": [] 00:21:08.709 } 00:21:08.709 ] 00:21:08.709 }' 00:21:08.709 05:16:45 -- target/tls.sh@266 -- # killprocess 1914575 00:21:08.709 05:16:45 -- common/autotest_common.sh@936 -- # '[' -z 1914575 ']' 00:21:08.709 05:16:45 -- common/autotest_common.sh@940 -- # kill -0 1914575 00:21:08.709 05:16:45 -- 
common/autotest_common.sh@941 -- # uname 00:21:08.709 05:16:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:08.709 05:16:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1914575 00:21:08.709 05:16:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:08.709 05:16:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:08.709 05:16:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1914575' 00:21:08.709 killing process with pid 1914575 00:21:08.709 05:16:45 -- common/autotest_common.sh@955 -- # kill 1914575 00:21:08.709 Received shutdown signal, test time was about 1.000000 seconds 00:21:08.709 00:21:08.709 Latency(us) 00:21:08.709 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:08.709 =================================================================================================================== 00:21:08.709 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:08.709 05:16:45 -- common/autotest_common.sh@960 -- # wait 1914575 00:21:08.968 05:16:46 -- target/tls.sh@267 -- # killprocess 1914551 00:21:08.968 05:16:46 -- common/autotest_common.sh@936 -- # '[' -z 1914551 ']' 00:21:08.968 05:16:46 -- common/autotest_common.sh@940 -- # kill -0 1914551 00:21:08.968 05:16:46 -- common/autotest_common.sh@941 -- # uname 00:21:08.968 05:16:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:08.968 05:16:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1914551 00:21:08.968 05:16:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:08.968 05:16:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:08.968 05:16:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1914551' 00:21:08.968 killing process with pid 1914551 00:21:08.968 05:16:46 -- common/autotest_common.sh@955 -- # kill 1914551 00:21:08.968 05:16:46 -- common/autotest_common.sh@960 -- # wait 1914551 00:21:09.227 
05:16:46 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:21:09.227 05:16:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:09.227 05:16:46 -- target/tls.sh@269 -- # echo '{ 00:21:09.227 "subsystems": [ 00:21:09.227 { 00:21:09.227 "subsystem": "keyring", 00:21:09.227 "config": [ 00:21:09.227 { 00:21:09.227 "method": "keyring_file_add_key", 00:21:09.227 "params": { 00:21:09.227 "name": "key0", 00:21:09.227 "path": "/tmp/tmp.13fwfAlw0c" 00:21:09.227 } 00:21:09.227 } 00:21:09.227 ] 00:21:09.227 }, 00:21:09.227 { 00:21:09.227 "subsystem": "iobuf", 00:21:09.227 "config": [ 00:21:09.227 { 00:21:09.227 "method": "iobuf_set_options", 00:21:09.227 "params": { 00:21:09.227 "small_pool_count": 8192, 00:21:09.227 "large_pool_count": 1024, 00:21:09.227 "small_bufsize": 8192, 00:21:09.227 "large_bufsize": 135168 00:21:09.227 } 00:21:09.227 } 00:21:09.227 ] 00:21:09.227 }, 00:21:09.227 { 00:21:09.227 "subsystem": "sock", 00:21:09.227 "config": [ 00:21:09.227 { 00:21:09.227 "method": "sock_impl_set_options", 00:21:09.227 "params": { 00:21:09.227 "impl_name": "posix", 00:21:09.227 "recv_buf_size": 2097152, 00:21:09.227 "send_buf_size": 2097152, 00:21:09.227 "enable_recv_pipe": true, 00:21:09.227 "enable_quickack": false, 00:21:09.227 "enable_placement_id": 0, 00:21:09.227 "enable_zerocopy_send_server": true, 00:21:09.227 "enable_zerocopy_send_client": false, 00:21:09.227 "zerocopy_threshold": 0, 00:21:09.227 "tls_version": 0, 00:21:09.227 "enable_ktls": false 00:21:09.227 } 00:21:09.227 }, 00:21:09.227 { 00:21:09.227 "method": "sock_impl_set_options", 00:21:09.227 "params": { 00:21:09.227 "impl_name": "ssl", 00:21:09.227 "recv_buf_size": 4096, 00:21:09.227 "send_buf_size": 4096, 00:21:09.227 "enable_recv_pipe": true, 00:21:09.227 "enable_quickack": false, 00:21:09.227 "enable_placement_id": 0, 00:21:09.227 "enable_zerocopy_send_server": true, 00:21:09.227 "enable_zerocopy_send_client": false, 00:21:09.227 "zerocopy_threshold": 0, 00:21:09.227 "tls_version": 0, 
00:21:09.227 "enable_ktls": false 00:21:09.227 } 00:21:09.227 } 00:21:09.227 ] 00:21:09.227 }, 00:21:09.227 { 00:21:09.227 "subsystem": "vmd", 00:21:09.227 "config": [] 00:21:09.227 }, 00:21:09.227 { 00:21:09.227 "subsystem": "accel", 00:21:09.227 "config": [ 00:21:09.227 { 00:21:09.227 "method": "accel_set_options", 00:21:09.227 "params": { 00:21:09.227 "small_cache_size": 128, 00:21:09.227 "large_cache_size": 16, 00:21:09.227 "task_count": 2048, 00:21:09.227 "sequence_count": 2048, 00:21:09.227 "buf_count": 2048 00:21:09.227 } 00:21:09.227 } 00:21:09.227 ] 00:21:09.227 }, 00:21:09.227 { 00:21:09.227 "subsystem": "bdev", 00:21:09.227 "config": [ 00:21:09.227 { 00:21:09.227 "method": "bdev_set_options", 00:21:09.227 "params": { 00:21:09.227 "bdev_io_pool_size": 65535, 00:21:09.227 "bdev_io_cache_size": 256, 00:21:09.227 "bdev_auto_examine": true, 00:21:09.227 "iobuf_small_cache_size": 128, 00:21:09.227 "iobuf_large_cache_size": 16 00:21:09.227 } 00:21:09.227 }, 00:21:09.227 { 00:21:09.227 "method": "bdev_raid_set_options", 00:21:09.227 "params": { 00:21:09.227 "process_window_size_kb": 1024 00:21:09.227 } 00:21:09.227 }, 00:21:09.227 { 00:21:09.227 "method": "bdev_iscsi_set_options", 00:21:09.227 "params": { 00:21:09.227 "timeout_sec": 30 00:21:09.227 } 00:21:09.227 }, 00:21:09.227 { 00:21:09.227 "method": "bdev_nvme_set_options", 00:21:09.227 "params": { 00:21:09.227 "action_on_timeout": "none", 00:21:09.227 "timeout_us": 0, 00:21:09.227 "timeout_admin_us": 0, 00:21:09.227 "keep_alive_timeout_ms": 10000, 00:21:09.227 "arbitration_burst": 0, 00:21:09.227 "low_priority_weight": 0, 00:21:09.227 "medium_priority_weight": 0, 00:21:09.227 "high_priority_weight": 0, 00:21:09.227 "nvme_adminq_poll_period_us": 10000, 00:21:09.227 "nvme_ioq_poll_period_us": 0, 00:21:09.227 "io_queue_requests": 0, 00:21:09.227 "delay_cmd_submit": true, 00:21:09.227 "transport_retry_count": 4, 00:21:09.227 "bdev_retry_count": 3, 00:21:09.227 "transport_ack_timeout": 0, 00:21:09.227 
"ctrlr_loss_timeout_sec": 0, 00:21:09.227 "reconnect_delay_sec": 0, 00:21:09.227 "fast_io_fail_timeout_sec": 0, 00:21:09.227 "disable_auto_failback": false, 00:21:09.227 "generate_uuids": false, 00:21:09.227 "transport_tos": 0, 00:21:09.227 "nvme_error_stat": false, 00:21:09.227 "rdma_srq_size": 0, 00:21:09.227 "io_path_stat": false, 00:21:09.227 "allow_accel_sequence": false, 00:21:09.227 "rdma_max_cq_size": 0, 00:21:09.227 "rdma_cm_event_timeout_ms": 0, 00:21:09.227 "dhchap_digests": [ 00:21:09.227 "sha256", 00:21:09.227 "sha384", 00:21:09.227 "sha512" 00:21:09.227 ], 00:21:09.227 "dhchap_dhgroups": [ 00:21:09.227 "null", 00:21:09.227 "ffdhe2048", 00:21:09.227 "ffdhe3072", 00:21:09.227 "ffdhe4096", 00:21:09.227 "ffdhe6144", 00:21:09.227 "ffdhe8192" 00:21:09.227 ] 00:21:09.227 } 00:21:09.227 }, 00:21:09.227 { 00:21:09.227 "method": "bdev_nvme_set_hotplug", 00:21:09.227 "params": { 00:21:09.227 "period_us": 100000, 00:21:09.227 "enable": false 00:21:09.227 } 00:21:09.227 }, 00:21:09.227 { 00:21:09.227 "method": "bdev_malloc_create", 00:21:09.227 "params": { 00:21:09.227 "name": "malloc0", 00:21:09.227 "num_blocks": 8192, 00:21:09.227 "block_size": 4096, 00:21:09.227 "physical_block_size": 4096, 00:21:09.227 "uuid": "4271d250-26e5-4df9-96c9-9cb7e260fdcb", 00:21:09.227 "optimal_io_boundary": 0 00:21:09.227 } 00:21:09.227 }, 00:21:09.227 { 00:21:09.227 "method": "bdev_wait_for_examine" 00:21:09.227 } 00:21:09.227 ] 00:21:09.227 }, 00:21:09.227 { 00:21:09.228 "subsystem": "nbd", 00:21:09.228 "config": [] 00:21:09.228 }, 00:21:09.228 { 00:21:09.228 "subsystem": "scheduler", 00:21:09.228 "config": [ 00:21:09.228 { 00:21:09.228 "method": "framework_set_scheduler", 00:21:09.228 "params": { 00:21:09.228 "name": "static" 00:21:09.228 } 00:21:09.228 } 00:21:09.228 ] 00:21:09.228 }, 00:21:09.228 { 00:21:09.228 "subsystem": "nvmf", 00:21:09.228 "config": [ 00:21:09.228 { 00:21:09.228 "method": "nvmf_set_config", 00:21:09.228 "params": { 00:21:09.228 "discovery_filter": 
"match_any", 00:21:09.228 "admin_cmd_passthru": { 00:21:09.228 "identify_ctrlr": false 00:21:09.228 } 00:21:09.228 } 00:21:09.228 }, 00:21:09.228 { 00:21:09.228 "method": "nvmf_set_max_subsystems", 00:21:09.228 "params": { 00:21:09.228 "max_subsystems": 1024 00:21:09.228 } 00:21:09.228 }, 00:21:09.228 { 00:21:09.228 "method": "nvmf_set_crdt", 00:21:09.228 "params": { 00:21:09.228 "crdt1": 0, 00:21:09.228 "crdt2": 0, 00:21:09.228 "crdt3": 0 00:21:09.228 } 00:21:09.228 }, 00:21:09.228 { 00:21:09.228 "method": "nvmf_create_transport", 00:21:09.228 "params": { 00:21:09.228 "trtype": "TCP", 00:21:09.228 "max_queue_depth": 128, 00:21:09.228 "max_io_qpairs_per_ctrlr": 127, 00:21:09.228 "in_capsule_data_size": 4096, 00:21:09.228 "max_io_size": 131072, 00:21:09.228 "io_unit_size": 131072, 00:21:09.228 "max_aq_depth": 128, 00:21:09.228 "num_shared_buffers": 511, 00:21:09.228 "buf_cache_size": 4294967295, 00:21:09.228 "dif_insert_or_strip": false, 00:21:09.228 "zcopy": false, 00:21:09.228 "c2h_success": false, 00:21:09.228 "sock_priority": 0, 00:21:09.228 "abort_timeout_sec": 1, 00:21:09.228 "ack_timeout": 0, 00:21:09.228 "data_wr_pool_size": 0 00:21:09.228 } 00:21:09.228 }, 00:21:09.228 { 00:21:09.228 "method": "nvmf_create_subsystem", 00:21:09.228 "params": { 00:21:09.228 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:09.228 "allow_any_host": false, 00:21:09.228 "serial_number": "00000000000000000000", 00:21:09.228 "model_number": "SPDK bdev Controller", 00:21:09.228 "max_namespaces": 32, 00:21:09.228 "min_cntlid": 1, 00:21:09.228 "max_cntlid": 65519, 00:21:09.228 "ana_reporting": false 00:21:09.228 } 00:21:09.228 }, 00:21:09.228 { 00:21:09.228 "method": "nvmf_subsystem_add_host", 00:21:09.228 "params": { 00:21:09.228 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:09.228 "host": "nqn.2016-06.io.spdk:host1", 00:21:09.228 "psk": "key0" 00:21:09.228 } 00:21:09.228 }, 00:21:09.228 { 00:21:09.228 "method": "nvmf_subsystem_add_ns", 00:21:09.228 "params": { 00:21:09.228 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:21:09.228 "namespace": { 00:21:09.228 "nsid": 1, 00:21:09.228 "bdev_name": "malloc0", 00:21:09.228 "nguid": "4271D25026E54DF996C99CB7E260FDCB", 00:21:09.228 "uuid": "4271d250-26e5-4df9-96c9-9cb7e260fdcb", 00:21:09.228 "no_auto_visible": false 00:21:09.228 } 00:21:09.228 } 00:21:09.228 }, 00:21:09.228 { 00:21:09.228 "method": "nvmf_subsystem_add_listener", 00:21:09.228 "params": { 00:21:09.228 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:09.228 "listen_address": { 00:21:09.228 "trtype": "TCP", 00:21:09.228 "adrfam": "IPv4", 00:21:09.228 "traddr": "10.0.0.2", 00:21:09.228 "trsvcid": "4420" 00:21:09.228 }, 00:21:09.228 "secure_channel": true 00:21:09.228 } 00:21:09.228 } 00:21:09.228 ] 00:21:09.228 } 00:21:09.228 ] 00:21:09.228 }' 00:21:09.228 05:16:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:09.228 05:16:46 -- common/autotest_common.sh@10 -- # set +x 00:21:09.228 05:16:46 -- nvmf/common.sh@470 -- # nvmfpid=1914982 00:21:09.228 05:16:46 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:09.228 05:16:46 -- nvmf/common.sh@471 -- # waitforlisten 1914982 00:21:09.228 05:16:46 -- common/autotest_common.sh@817 -- # '[' -z 1914982 ']' 00:21:09.228 05:16:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:09.228 05:16:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:09.228 05:16:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:09.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:09.228 05:16:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:09.228 05:16:46 -- common/autotest_common.sh@10 -- # set +x 00:21:09.228 [2024-04-24 05:16:46.390155] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 
00:21:09.228 [2024-04-24 05:16:46.390243] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:09.228 EAL: No free 2048 kB hugepages reported on node 1 00:21:09.228 [2024-04-24 05:16:46.426581] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:09.228 [2024-04-24 05:16:46.458359] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:09.486 [2024-04-24 05:16:46.544566] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:09.486 [2024-04-24 05:16:46.544652] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:09.486 [2024-04-24 05:16:46.544670] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:09.486 [2024-04-24 05:16:46.544683] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:09.486 [2024-04-24 05:16:46.544695] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:09.486 [2024-04-24 05:16:46.544805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.745 [2024-04-24 05:16:46.773480] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:09.745 [2024-04-24 05:16:46.805468] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:09.745 [2024-04-24 05:16:46.813827] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:10.311 05:16:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:10.311 05:16:47 -- common/autotest_common.sh@850 -- # return 0 00:21:10.311 05:16:47 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:10.311 05:16:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:10.311 05:16:47 -- common/autotest_common.sh@10 -- # set +x 00:21:10.311 05:16:47 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:10.311 05:16:47 -- target/tls.sh@272 -- # bdevperf_pid=1915134 00:21:10.311 05:16:47 -- target/tls.sh@273 -- # waitforlisten 1915134 /var/tmp/bdevperf.sock 00:21:10.311 05:16:47 -- common/autotest_common.sh@817 -- # '[' -z 1915134 ']' 00:21:10.311 05:16:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:10.311 05:16:47 -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:10.311 05:16:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:10.311 05:16:47 -- target/tls.sh@270 -- # echo '{ 00:21:10.311 "subsystems": [ 00:21:10.311 { 00:21:10.311 "subsystem": "keyring", 00:21:10.311 "config": [ 00:21:10.311 { 00:21:10.311 "method": "keyring_file_add_key", 00:21:10.311 "params": { 00:21:10.311 "name": "key0", 00:21:10.311 "path": "/tmp/tmp.13fwfAlw0c" 00:21:10.311 } 00:21:10.311 } 00:21:10.311 ] 00:21:10.311 }, 00:21:10.311 { 00:21:10.311 "subsystem": 
"iobuf", 00:21:10.311 "config": [ 00:21:10.311 { 00:21:10.311 "method": "iobuf_set_options", 00:21:10.311 "params": { 00:21:10.311 "small_pool_count": 8192, 00:21:10.311 "large_pool_count": 1024, 00:21:10.311 "small_bufsize": 8192, 00:21:10.311 "large_bufsize": 135168 00:21:10.312 } 00:21:10.312 } 00:21:10.312 ] 00:21:10.312 }, 00:21:10.312 { 00:21:10.312 "subsystem": "sock", 00:21:10.312 "config": [ 00:21:10.312 { 00:21:10.312 "method": "sock_impl_set_options", 00:21:10.312 "params": { 00:21:10.312 "impl_name": "posix", 00:21:10.312 "recv_buf_size": 2097152, 00:21:10.312 "send_buf_size": 2097152, 00:21:10.312 "enable_recv_pipe": true, 00:21:10.312 "enable_quickack": false, 00:21:10.312 "enable_placement_id": 0, 00:21:10.312 "enable_zerocopy_send_server": true, 00:21:10.312 "enable_zerocopy_send_client": false, 00:21:10.312 "zerocopy_threshold": 0, 00:21:10.312 "tls_version": 0, 00:21:10.312 "enable_ktls": false 00:21:10.312 } 00:21:10.312 }, 00:21:10.312 { 00:21:10.312 "method": "sock_impl_set_options", 00:21:10.312 "params": { 00:21:10.312 "impl_name": "ssl", 00:21:10.312 "recv_buf_size": 4096, 00:21:10.312 "send_buf_size": 4096, 00:21:10.312 "enable_recv_pipe": true, 00:21:10.312 "enable_quickack": false, 00:21:10.312 "enable_placement_id": 0, 00:21:10.312 "enable_zerocopy_send_server": true, 00:21:10.312 "enable_zerocopy_send_client": false, 00:21:10.312 "zerocopy_threshold": 0, 00:21:10.312 "tls_version": 0, 00:21:10.312 "enable_ktls": false 00:21:10.312 } 00:21:10.312 } 00:21:10.312 ] 00:21:10.312 }, 00:21:10.312 { 00:21:10.312 "subsystem": "vmd", 00:21:10.312 "config": [] 00:21:10.312 }, 00:21:10.312 { 00:21:10.312 "subsystem": "accel", 00:21:10.312 "config": [ 00:21:10.312 { 00:21:10.312 "method": "accel_set_options", 00:21:10.312 "params": { 00:21:10.312 "small_cache_size": 128, 00:21:10.312 "large_cache_size": 16, 00:21:10.312 "task_count": 2048, 00:21:10.312 "sequence_count": 2048, 00:21:10.312 "buf_count": 2048 00:21:10.312 } 00:21:10.312 } 00:21:10.312 
] 00:21:10.312 }, 00:21:10.312 { 00:21:10.312 "subsystem": "bdev", 00:21:10.312 "config": [ 00:21:10.312 { 00:21:10.312 "method": "bdev_set_options", 00:21:10.312 "params": { 00:21:10.312 "bdev_io_pool_size": 65535, 00:21:10.312 "bdev_io_cache_size": 256, 00:21:10.312 "bdev_auto_examine": true, 00:21:10.312 "iobuf_small_cache_size": 128, 00:21:10.312 "iobuf_large_cache_size": 16 00:21:10.312 } 00:21:10.312 }, 00:21:10.312 { 00:21:10.312 "method": "bdev_raid_set_options", 00:21:10.312 "params": { 00:21:10.312 "process_window_size_kb": 1024 00:21:10.312 } 00:21:10.312 }, 00:21:10.312 { 00:21:10.312 "method": "bdev_iscsi_set_options", 00:21:10.312 "params": { 00:21:10.312 "timeout_sec": 30 00:21:10.312 } 00:21:10.312 }, 00:21:10.312 { 00:21:10.312 "method": "bdev_nvme_set_options", 00:21:10.312 "params": { 00:21:10.312 "action_on_timeout": "none", 00:21:10.312 "timeout_us": 0, 00:21:10.312 "timeout_admin_us": 0, 00:21:10.312 "keep_alive_timeout_ms": 10000, 00:21:10.312 "arbitration_burst": 0, 00:21:10.312 "low_priority_weight": 0, 00:21:10.312 "medium_priority_weight": 0, 00:21:10.312 "high_priority_weight": 0, 00:21:10.312 "nvme_adminq_poll_period_us": 10000, 00:21:10.312 "nvme_ioq_poll_period_us": 0, 00:21:10.312 "io_queue_requests": 512, 00:21:10.312 "delay_cmd_submit": true, 00:21:10.312 "transport_retry_count": 4, 00:21:10.312 "bdev_retry_count": 3, 00:21:10.312 "transport_ack_timeout": 0, 00:21:10.312 "ctrlr_loss_timeout_sec": 0, 00:21:10.312 "reconnect_delay_sec": 0, 00:21:10.312 "fast_io_fail_timeout_sec": 0, 00:21:10.312 "disable_auto_failback": false, 00:21:10.312 "generate_uuids": false, 00:21:10.312 "transport_tos": 0, 00:21:10.312 "nvme_error_stat": false, 00:21:10.312 "rdma_srq_size": 0, 00:21:10.312 "io_path_stat": false, 00:21:10.312 "allow_accel_sequence": false, 00:21:10.312 "rdma_max_cq_size": 0, 00:21:10.312 "rdma_cm_event_timeout_ms": 0, 00:21:10.312 "dhchap_digests": [ 00:21:10.312 "sha256", 00:21:10.312 "sha384", 00:21:10.312 "sha512" 
00:21:10.312 ], 00:21:10.312 "dhchap_dhgroups": [ 00:21:10.312 "null", 00:21:10.312 "ffdhe2048", 00:21:10.312 "ffdhe3072", 00:21:10.312 "ffdhe4096", 00:21:10.312 "ffdhe6144", 00:21:10.312 "ffdhe8192" 00:21:10.312 ] 00:21:10.312 } 00:21:10.312 }, 00:21:10.312 { 00:21:10.312 "method": "bdev_nvme_attach_controller", 00:21:10.312 "params": { 00:21:10.312 "name": "nvme0", 00:21:10.312 "trtype": "TCP", 00:21:10.312 "adrfam": "IPv4", 00:21:10.312 "traddr": "10.0.0.2", 00:21:10.312 "trsvcid": "4420", 00:21:10.312 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:10.312 "prchk_reftag": false, 00:21:10.312 "prchk_guard": false, 00:21:10.312 "ctrlr_loss_timeout_sec": 0, 00:21:10.312 "reconnect_delay_sec": 0, 00:21:10.312 "fast_io_fail_timeout_sec": 0, 00:21:10.312 "psk": "key0", 00:21:10.312 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:10.312 "hdgst": false, 00:21:10.312 "ddgst": false 00:21:10.312 } 00:21:10.312 }, 00:21:10.312 { 00:21:10.312 "method": "bdev_nvme_set_hotplug", 00:21:10.312 "params": { 00:21:10.312 "period_us": 100000, 00:21:10.312 "enable": false 00:21:10.312 } 00:21:10.312 }, 00:21:10.312 { 00:21:10.312 "method": "bdev_enable_histogram", 00:21:10.312 "params": { 00:21:10.312 "name": "nvme0n1", 00:21:10.312 "enable": true 00:21:10.312 } 00:21:10.312 }, 00:21:10.312 { 00:21:10.312 "method": "bdev_wait_for_examine" 00:21:10.312 } 00:21:10.312 ] 00:21:10.312 }, 00:21:10.312 { 00:21:10.312 "subsystem": "nbd", 00:21:10.312 "config": [] 00:21:10.312 } 00:21:10.312 ] 00:21:10.312 }' 00:21:10.312 05:16:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:10.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:10.312 05:16:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:10.312 05:16:47 -- common/autotest_common.sh@10 -- # set +x 00:21:10.312 [2024-04-24 05:16:47.421960] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:21:10.312 [2024-04-24 05:16:47.422035] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1915134 ] 00:21:10.312 EAL: No free 2048 kB hugepages reported on node 1 00:21:10.312 [2024-04-24 05:16:47.453051] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:10.312 [2024-04-24 05:16:47.484853] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.312 [2024-04-24 05:16:47.575813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:10.572 [2024-04-24 05:16:47.750597] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:11.139 05:16:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:11.139 05:16:48 -- common/autotest_common.sh@850 -- # return 0 00:21:11.139 05:16:48 -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:11.139 05:16:48 -- target/tls.sh@275 -- # jq -r '.[].name' 00:21:11.397 05:16:48 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.397 05:16:48 -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:11.655 Running I/O for 1 seconds... 
00:21:12.593 00:21:12.593 Latency(us) 00:21:12.593 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:12.593 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:12.593 Verification LBA range: start 0x0 length 0x2000 00:21:12.593 nvme0n1 : 1.04 2899.71 11.33 0.00 0.00 43379.78 6407.96 78060.66 00:21:12.593 =================================================================================================================== 00:21:12.593 Total : 2899.71 11.33 0.00 0.00 43379.78 6407.96 78060.66 00:21:12.593 0 00:21:12.593 05:16:49 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:21:12.593 05:16:49 -- target/tls.sh@279 -- # cleanup 00:21:12.593 05:16:49 -- target/tls.sh@15 -- # process_shm --id 0 00:21:12.593 05:16:49 -- common/autotest_common.sh@794 -- # type=--id 00:21:12.593 05:16:49 -- common/autotest_common.sh@795 -- # id=0 00:21:12.593 05:16:49 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:21:12.593 05:16:49 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:12.593 05:16:49 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:21:12.593 05:16:49 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:21:12.593 05:16:49 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:21:12.593 05:16:49 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:12.593 nvmf_trace.0 00:21:12.593 05:16:49 -- common/autotest_common.sh@809 -- # return 0 00:21:12.593 05:16:49 -- target/tls.sh@16 -- # killprocess 1915134 00:21:12.593 05:16:49 -- common/autotest_common.sh@936 -- # '[' -z 1915134 ']' 00:21:12.593 05:16:49 -- common/autotest_common.sh@940 -- # kill -0 1915134 00:21:12.593 05:16:49 -- common/autotest_common.sh@941 -- # uname 00:21:12.593 05:16:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:12.593 05:16:49 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1915134 00:21:12.593 05:16:49 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:12.593 05:16:49 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:12.593 05:16:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1915134' 00:21:12.593 killing process with pid 1915134 00:21:12.593 05:16:49 -- common/autotest_common.sh@955 -- # kill 1915134 00:21:12.593 Received shutdown signal, test time was about 1.000000 seconds 00:21:12.593 00:21:12.593 Latency(us) 00:21:12.593 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:12.593 =================================================================================================================== 00:21:12.593 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:12.593 05:16:49 -- common/autotest_common.sh@960 -- # wait 1915134 00:21:12.852 05:16:50 -- target/tls.sh@17 -- # nvmftestfini 00:21:12.852 05:16:50 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:12.852 05:16:50 -- nvmf/common.sh@117 -- # sync 00:21:12.852 05:16:50 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:12.852 05:16:50 -- nvmf/common.sh@120 -- # set +e 00:21:12.852 05:16:50 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:12.852 05:16:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:12.852 rmmod nvme_tcp 00:21:12.852 rmmod nvme_fabrics 00:21:13.110 rmmod nvme_keyring 00:21:13.110 05:16:50 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:13.110 05:16:50 -- nvmf/common.sh@124 -- # set -e 00:21:13.110 05:16:50 -- nvmf/common.sh@125 -- # return 0 00:21:13.110 05:16:50 -- nvmf/common.sh@478 -- # '[' -n 1914982 ']' 00:21:13.111 05:16:50 -- nvmf/common.sh@479 -- # killprocess 1914982 00:21:13.111 05:16:50 -- common/autotest_common.sh@936 -- # '[' -z 1914982 ']' 00:21:13.111 05:16:50 -- common/autotest_common.sh@940 -- # kill -0 1914982 00:21:13.111 05:16:50 -- common/autotest_common.sh@941 -- # uname 
00:21:13.111 05:16:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:13.111 05:16:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1914982 00:21:13.111 05:16:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:13.111 05:16:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:13.111 05:16:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1914982' 00:21:13.111 killing process with pid 1914982 00:21:13.111 05:16:50 -- common/autotest_common.sh@955 -- # kill 1914982 00:21:13.111 05:16:50 -- common/autotest_common.sh@960 -- # wait 1914982 00:21:13.368 05:16:50 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:13.368 05:16:50 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:13.368 05:16:50 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:13.368 05:16:50 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:13.368 05:16:50 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:13.368 05:16:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.368 05:16:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:13.368 05:16:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.273 05:16:52 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:15.273 05:16:52 -- target/tls.sh@18 -- # rm -f /tmp/tmp.nIVh4m315a /tmp/tmp.cnUuWqvuQh /tmp/tmp.13fwfAlw0c 00:21:15.273 00:21:15.273 real 1m18.375s 00:21:15.273 user 2m6.291s 00:21:15.273 sys 0m26.556s 00:21:15.273 05:16:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:15.273 05:16:52 -- common/autotest_common.sh@10 -- # set +x 00:21:15.273 ************************************ 00:21:15.273 END TEST nvmf_tls 00:21:15.273 ************************************ 00:21:15.273 05:16:52 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:15.273 05:16:52 -- 
common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:15.273 05:16:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:15.273 05:16:52 -- common/autotest_common.sh@10 -- # set +x 00:21:15.534 ************************************ 00:21:15.534 START TEST nvmf_fips 00:21:15.534 ************************************ 00:21:15.534 05:16:52 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:15.534 * Looking for test storage... 00:21:15.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:15.534 05:16:52 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:15.534 05:16:52 -- nvmf/common.sh@7 -- # uname -s 00:21:15.534 05:16:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:15.534 05:16:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:15.534 05:16:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:15.534 05:16:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:15.534 05:16:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:15.534 05:16:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:15.534 05:16:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:15.534 05:16:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:15.534 05:16:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:15.534 05:16:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:15.534 05:16:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:15.534 05:16:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:15.534 05:16:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:15.534 05:16:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:15.534 05:16:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:15.534 
05:16:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:15.534 05:16:52 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:15.534 05:16:52 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:15.534 05:16:52 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:15.534 05:16:52 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:15.534 05:16:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.534 05:16:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.534 05:16:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.534 05:16:52 -- paths/export.sh@5 -- # export PATH 00:21:15.534 05:16:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.534 05:16:52 -- nvmf/common.sh@47 -- # : 0 00:21:15.534 05:16:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:15.534 05:16:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:15.534 05:16:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:15.534 05:16:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:15.534 05:16:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:15.534 05:16:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:15.534 05:16:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:15.534 05:16:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:15.534 05:16:52 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:15.534 05:16:52 -- fips/fips.sh@89 -- # check_openssl_version 
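The check_openssl_version trace that follows compares the `openssl version` output against the 3.0.0 target field by field via scripts/common.sh's `ge`/`cmp_versions` helpers. A standalone sketch of the same dotted-version comparison (`ver_ge` is an illustrative name, not the actual SPDK helper, and this simplified form assumes purely numeric fields):

```shell
#!/usr/bin/env bash
# ver_ge A B: exit 0 when dotted version A >= B.
# Splits on '.' and '-' like the traced cmp_versions does, compares
# numeric fields left to right, and treats missing fields as 0.
ver_ge() {
  local -a a b
  IFS=.- read -ra a <<< "$1"
  IFS=.- read -ra b <<< "$2"
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    local x=${a[i]:-0} y=${b[i]:-0}
    if (( x > y )); then
      return 0
    elif (( x < y )); then
      return 1
    fi
  done
  return 0   # all fields equal
}

ver_ge 3.0.9 3.0.0 && echo "3.0.9 >= 3.0.0"
ver_ge 2.9.1 3.0.0 || echo "2.9.1 <  3.0.0"
```

The real helper also handles alphabetic suffixes; this sketch only covers the numeric case exercised in the trace (3.0.9 vs 3.0.0).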
00:21:15.534 05:16:52 -- fips/fips.sh@83 -- # local target=3.0.0 00:21:15.534 05:16:52 -- fips/fips.sh@85 -- # openssl version 00:21:15.534 05:16:52 -- fips/fips.sh@85 -- # awk '{print $2}' 00:21:15.534 05:16:52 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:15.534 05:16:52 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:15.534 05:16:52 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:21:15.534 05:16:52 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:21:15.534 05:16:52 -- scripts/common.sh@333 -- # IFS=.-: 00:21:15.534 05:16:52 -- scripts/common.sh@333 -- # read -ra ver1 00:21:15.534 05:16:52 -- scripts/common.sh@334 -- # IFS=.-: 00:21:15.534 05:16:52 -- scripts/common.sh@334 -- # read -ra ver2 00:21:15.534 05:16:52 -- scripts/common.sh@335 -- # local 'op=>=' 00:21:15.534 05:16:52 -- scripts/common.sh@337 -- # ver1_l=3 00:21:15.534 05:16:52 -- scripts/common.sh@338 -- # ver2_l=3 00:21:15.534 05:16:52 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:21:15.534 05:16:52 -- scripts/common.sh@341 -- # case "$op" in 00:21:15.534 05:16:52 -- scripts/common.sh@345 -- # : 1 00:21:15.534 05:16:52 -- scripts/common.sh@361 -- # (( v = 0 )) 00:21:15.534 05:16:52 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:15.534 05:16:52 -- scripts/common.sh@362 -- # decimal 3 00:21:15.534 05:16:52 -- scripts/common.sh@350 -- # local d=3 00:21:15.534 05:16:52 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:15.534 05:16:52 -- scripts/common.sh@352 -- # echo 3 00:21:15.534 05:16:52 -- scripts/common.sh@362 -- # ver1[v]=3 00:21:15.534 05:16:52 -- scripts/common.sh@363 -- # decimal 3 00:21:15.534 05:16:52 -- scripts/common.sh@350 -- # local d=3 00:21:15.534 05:16:52 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:15.534 05:16:52 -- scripts/common.sh@352 -- # echo 3 00:21:15.534 05:16:52 -- scripts/common.sh@363 -- # ver2[v]=3 00:21:15.534 05:16:52 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:15.534 05:16:52 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:15.534 05:16:52 -- scripts/common.sh@361 -- # (( v++ )) 00:21:15.534 05:16:52 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:15.534 05:16:52 -- scripts/common.sh@362 -- # decimal 0 00:21:15.534 05:16:52 -- scripts/common.sh@350 -- # local d=0 00:21:15.534 05:16:52 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:15.534 05:16:52 -- scripts/common.sh@352 -- # echo 0 00:21:15.534 05:16:52 -- scripts/common.sh@362 -- # ver1[v]=0 00:21:15.534 05:16:52 -- scripts/common.sh@363 -- # decimal 0 00:21:15.534 05:16:52 -- scripts/common.sh@350 -- # local d=0 00:21:15.534 05:16:52 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:15.534 05:16:52 -- scripts/common.sh@352 -- # echo 0 00:21:15.534 05:16:52 -- scripts/common.sh@363 -- # ver2[v]=0 00:21:15.534 05:16:52 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:15.534 05:16:52 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:15.534 05:16:52 -- scripts/common.sh@361 -- # (( v++ )) 00:21:15.534 05:16:52 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:15.534 05:16:52 -- scripts/common.sh@362 -- # decimal 9 00:21:15.534 05:16:52 -- scripts/common.sh@350 -- # local d=9 00:21:15.534 05:16:52 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:15.534 05:16:52 -- scripts/common.sh@352 -- # echo 9 00:21:15.534 05:16:52 -- scripts/common.sh@362 -- # ver1[v]=9 00:21:15.534 05:16:52 -- scripts/common.sh@363 -- # decimal 0 00:21:15.534 05:16:52 -- scripts/common.sh@350 -- # local d=0 00:21:15.534 05:16:52 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:15.534 05:16:52 -- scripts/common.sh@352 -- # echo 0 00:21:15.534 05:16:52 -- scripts/common.sh@363 -- # ver2[v]=0 00:21:15.534 05:16:52 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:15.534 05:16:52 -- scripts/common.sh@364 -- # return 0 00:21:15.534 05:16:52 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:15.534 05:16:52 -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:21:15.534 05:16:52 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:15.534 05:16:52 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:15.534 05:16:52 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:15.534 05:16:52 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:15.534 05:16:52 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:15.534 05:16:52 -- fips/fips.sh@113 -- # build_openssl_config 00:21:15.534 05:16:52 -- fips/fips.sh@37 -- # cat 00:21:15.534 05:16:52 -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:21:15.534 05:16:52 -- fips/fips.sh@58 -- # cat - 00:21:15.534 05:16:52 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:15.534 05:16:52 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:21:15.534 05:16:52 -- fips/fips.sh@116 -- # mapfile -t providers 00:21:15.534 05:16:52 -- fips/fips.sh@116 -- # openssl list -providers 00:21:15.534 05:16:52 -- fips/fips.sh@116 -- # grep name 00:21:15.534 05:16:52 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:21:15.534 05:16:52 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:21:15.534 05:16:52 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:15.534 05:16:52 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:21:15.534 05:16:52 -- fips/fips.sh@127 -- # : 00:21:15.534 05:16:52 -- common/autotest_common.sh@638 -- # local es=0 00:21:15.534 05:16:52 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:15.534 05:16:52 -- common/autotest_common.sh@626 -- # local arg=openssl 00:21:15.534 05:16:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:15.534 05:16:52 -- common/autotest_common.sh@630 -- # type -t openssl 00:21:15.534 05:16:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:15.534 05:16:52 -- common/autotest_common.sh@632 -- # type -P openssl 00:21:15.534 05:16:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:15.534 05:16:52 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:21:15.534 05:16:52 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:21:15.534 05:16:52 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:21:15.534 Error setting digest 00:21:15.534 00920BCB7C7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:15.534 
00920BCB7C7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:15.534 05:16:52 -- common/autotest_common.sh@641 -- # es=1 00:21:15.534 05:16:52 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:15.534 05:16:52 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:15.534 05:16:52 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:15.534 05:16:52 -- fips/fips.sh@130 -- # nvmftestinit 00:21:15.534 05:16:52 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:15.534 05:16:52 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:15.534 05:16:52 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:15.534 05:16:52 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:15.534 05:16:52 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:15.534 05:16:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:15.534 05:16:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:15.534 05:16:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.534 05:16:52 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:15.534 05:16:52 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:15.534 05:16:52 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:15.534 05:16:52 -- common/autotest_common.sh@10 -- # set +x 00:21:17.435 05:16:54 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:17.435 05:16:54 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:17.435 05:16:54 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:17.435 05:16:54 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:17.435 05:16:54 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:17.435 05:16:54 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:17.435 05:16:54 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:17.435 05:16:54 -- nvmf/common.sh@295 -- # net_devs=() 00:21:17.435 05:16:54 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:17.435 05:16:54 -- 
nvmf/common.sh@296 -- # e810=() 00:21:17.435 05:16:54 -- nvmf/common.sh@296 -- # local -ga e810 00:21:17.435 05:16:54 -- nvmf/common.sh@297 -- # x722=() 00:21:17.435 05:16:54 -- nvmf/common.sh@297 -- # local -ga x722 00:21:17.435 05:16:54 -- nvmf/common.sh@298 -- # mlx=() 00:21:17.435 05:16:54 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:17.435 05:16:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:17.435 05:16:54 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:17.435 05:16:54 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:17.435 05:16:54 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:17.435 05:16:54 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:17.435 05:16:54 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:17.435 05:16:54 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:17.435 05:16:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:17.435 05:16:54 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:17.435 05:16:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:17.435 05:16:54 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:17.435 05:16:54 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:17.435 05:16:54 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:17.435 05:16:54 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:17.435 05:16:54 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:17.435 05:16:54 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:17.435 05:16:54 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:17.435 05:16:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:17.435 05:16:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:17.435 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:17.435 
05:16:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:17.435 05:16:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:17.435 05:16:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:17.435 05:16:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:17.435 05:16:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:17.435 05:16:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:17.435 05:16:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:17.435 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:17.435 05:16:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:17.435 05:16:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:17.435 05:16:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:17.435 05:16:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:17.435 05:16:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:17.435 05:16:54 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:17.435 05:16:54 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:17.435 05:16:54 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:17.435 05:16:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:17.435 05:16:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:17.435 05:16:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:17.435 05:16:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:17.435 05:16:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:17.435 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:17.435 05:16:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:17.435 05:16:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:17.435 05:16:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:17.435 05:16:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:17.435 05:16:54 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:17.435 05:16:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:17.435 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:17.435 05:16:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:17.435 05:16:54 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:17.435 05:16:54 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:17.435 05:16:54 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:17.435 05:16:54 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:17.435 05:16:54 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:17.435 05:16:54 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:17.435 05:16:54 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:17.435 05:16:54 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:17.435 05:16:54 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:17.435 05:16:54 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:17.435 05:16:54 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:17.435 05:16:54 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:17.435 05:16:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:17.435 05:16:54 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:17.435 05:16:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:17.435 05:16:54 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:17.435 05:16:54 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:17.435 05:16:54 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:17.692 05:16:54 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:17.692 05:16:54 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:17.692 05:16:54 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:17.692 05:16:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set cvl_0_0 up
00:21:17.692 05:16:54 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:21:17.692 05:16:54 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:21:17.692 05:16:54 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:21:17.692 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:17.693 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms
00:21:17.693
00:21:17.693 --- 10.0.0.2 ping statistics ---
00:21:17.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:17.693 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms
00:21:17.693 05:16:54 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:21:17.693 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:17.693 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms
00:21:17.693
00:21:17.693 --- 10.0.0.1 ping statistics ---
00:21:17.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:17.693 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms
00:21:17.693 05:16:54 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:17.693 05:16:54 -- nvmf/common.sh@411 -- # return 0
00:21:17.693 05:16:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:21:17.693 05:16:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:17.693 05:16:54 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:21:17.693 05:16:54 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:21:17.693 05:16:54 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:17.693 05:16:54 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:21:17.693 05:16:54 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:21:17.693 05:16:54 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2
00:21:17.693 05:16:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:21:17.693 05:16:54 -- common/autotest_common.sh@710 -- # xtrace_disable
00:21:17.693 05:16:54 -- common/autotest_common.sh@10 -- # set +x
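The nvmf_tcp_init sequence above moves the first discovered NIC (cvl_0_0) into its own network namespace so target (10.0.0.2) and initiator (10.0.0.1) talk over real interfaces on one host. A dry-run sketch of that plumbing, condensed from the trace (`run` only echoes each command; dropping it would execute for real and requires root):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns setup nvmf_tcp_init performs in the trace.
# The "run" wrapper prints commands instead of executing them.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0        # first discovered device -> target side
INITIATOR_IF=cvl_0_1     # second discovered device -> initiator side

run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2   # initiator -> target reachability check
```

The two ping checks in the trace (one from the host, one from inside the namespace) confirm both directions before the target application is started with `ip netns exec`.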
00:21:17.693 05:16:54 -- nvmf/common.sh@470 -- # nvmfpid=1917386 00:21:17.693 05:16:54 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:17.693 05:16:54 -- nvmf/common.sh@471 -- # waitforlisten 1917386 00:21:17.693 05:16:54 -- common/autotest_common.sh@817 -- # '[' -z 1917386 ']' 00:21:17.693 05:16:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:17.693 05:16:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:17.693 05:16:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:17.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:17.693 05:16:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:17.693 05:16:54 -- common/autotest_common.sh@10 -- # set +x 00:21:17.693 [2024-04-24 05:16:54.911001] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:21:17.693 [2024-04-24 05:16:54.911089] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:17.693 EAL: No free 2048 kB hugepages reported on node 1 00:21:17.693 [2024-04-24 05:16:54.949488] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:17.950 [2024-04-24 05:16:54.982156] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.950 [2024-04-24 05:16:55.070565] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:17.950 [2024-04-24 05:16:55.070655] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:17.950 [2024-04-24 05:16:55.070673] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:17.950 [2024-04-24 05:16:55.070687] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:17.950 [2024-04-24 05:16:55.070699] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:17.950 [2024-04-24 05:16:55.070735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.883 05:16:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:18.883 05:16:55 -- common/autotest_common.sh@850 -- # return 0 00:21:18.883 05:16:55 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:18.883 05:16:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:18.883 05:16:55 -- common/autotest_common.sh@10 -- # set +x 00:21:18.883 05:16:55 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:18.883 05:16:55 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:21:18.883 05:16:55 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:18.883 05:16:55 -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:18.883 05:16:55 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:18.883 05:16:55 -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:18.883 05:16:55 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:18.883 05:16:55 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:18.883 05:16:55 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:19.141 [2024-04-24 05:16:56.156550] tcp.c: 
669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:19.141 [2024-04-24 05:16:56.172538] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:19.141 [2024-04-24 05:16:56.172780] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:19.141 [2024-04-24 05:16:56.205011] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:19.141 malloc0 00:21:19.141 05:16:56 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:19.141 05:16:56 -- fips/fips.sh@147 -- # bdevperf_pid=1917655 00:21:19.141 05:16:56 -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:19.141 05:16:56 -- fips/fips.sh@148 -- # waitforlisten 1917655 /var/tmp/bdevperf.sock 00:21:19.141 05:16:56 -- common/autotest_common.sh@817 -- # '[' -z 1917655 ']' 00:21:19.141 05:16:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:19.141 05:16:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:19.141 05:16:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:19.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:19.141 05:16:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:19.141 05:16:56 -- common/autotest_common.sh@10 -- # set +x 00:21:19.141 [2024-04-24 05:16:56.297751] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 
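Above, setup_nvmf_tgt_conf writes the `NVMeTLSkey-1:01:…:` PSK to key.txt and chmods it 0600 before bdevperf attaches with `--psk`. A hedged sketch of that key-file step (`write_psk_file` is an illustrative helper, not an SPDK function; the regex is only a plausibility check of the TLS PSK interchange format, assumed here to be `NVMeTLSkey-1:<hash id>:<base64>:`, and does not verify the embedded CRC):

```shell
#!/usr/bin/env bash
# Sanity-check a PSK interchange string and store it with owner-only
# permissions, mirroring the chmod 0600 seen in the trace.
write_psk_file() {
  local key=$1 path=$2
  # 01 / 02 are assumed to select HMAC-SHA-256 / HMAC-SHA-384.
  if [[ ! $key =~ ^NVMeTLSkey-1:0[12]:[A-Za-z0-9+/=]+:$ ]]; then
    echo "malformed PSK interchange string" >&2
    return 1
  fi
  printf '%s' "$key" > "$path"
  chmod 0600 "$path"
}

write_psk_file "NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:" /tmp/key.txt && echo "key written"
```

Keeping the file 0600 matters because the target warns above that the PSK-path mechanism is deprecated and experimental; the key is passed by path, so its permissions are the only protection.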
00:21:19.141 [2024-04-24 05:16:56.297837] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1917655 ] 00:21:19.141 EAL: No free 2048 kB hugepages reported on node 1 00:21:19.141 [2024-04-24 05:16:56.330176] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:19.141 [2024-04-24 05:16:56.359597] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.398 [2024-04-24 05:16:56.442255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:19.398 05:16:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:19.398 05:16:56 -- common/autotest_common.sh@850 -- # return 0 00:21:19.398 05:16:56 -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:19.655 [2024-04-24 05:16:56.771021] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:19.655 [2024-04-24 05:16:56.771169] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:19.655 TLSTESTn1 00:21:19.655 05:16:56 -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:19.912 Running I/O for 10 seconds... 
00:21:29.897 00:21:29.897 Latency(us) 00:21:29.897 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.897 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:29.897 Verification LBA range: start 0x0 length 0x2000 00:21:29.897 TLSTESTn1 : 10.04 2938.78 11.48 0.00 0.00 43450.74 6092.42 65244.73 00:21:29.897 =================================================================================================================== 00:21:29.897 Total : 2938.78 11.48 0.00 0.00 43450.74 6092.42 65244.73 00:21:29.897 0 00:21:29.897 05:17:07 -- fips/fips.sh@1 -- # cleanup 00:21:29.897 05:17:07 -- fips/fips.sh@15 -- # process_shm --id 0 00:21:29.897 05:17:07 -- common/autotest_common.sh@794 -- # type=--id 00:21:29.897 05:17:07 -- common/autotest_common.sh@795 -- # id=0 00:21:29.897 05:17:07 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:21:29.897 05:17:07 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:29.897 05:17:07 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:21:29.897 05:17:07 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:21:29.897 05:17:07 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:21:29.897 05:17:07 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:29.897 nvmf_trace.0 00:21:29.897 05:17:07 -- common/autotest_common.sh@809 -- # return 0 00:21:29.897 05:17:07 -- fips/fips.sh@16 -- # killprocess 1917655 00:21:29.897 05:17:07 -- common/autotest_common.sh@936 -- # '[' -z 1917655 ']' 00:21:29.897 05:17:07 -- common/autotest_common.sh@940 -- # kill -0 1917655 00:21:29.897 05:17:07 -- common/autotest_common.sh@941 -- # uname 00:21:29.897 05:17:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:29.897 05:17:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1917655 00:21:29.897 
05:17:07 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:29.897 05:17:07 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:29.897 05:17:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1917655' 00:21:29.897 killing process with pid 1917655 00:21:29.897 05:17:07 -- common/autotest_common.sh@955 -- # kill 1917655 00:21:29.897 Received shutdown signal, test time was about 10.000000 seconds 00:21:29.897 00:21:29.897 Latency(us) 00:21:29.897 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.897 =================================================================================================================== 00:21:29.897 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:29.897 [2024-04-24 05:17:07.127993] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:29.897 05:17:07 -- common/autotest_common.sh@960 -- # wait 1917655 00:21:30.156 05:17:07 -- fips/fips.sh@17 -- # nvmftestfini 00:21:30.156 05:17:07 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:30.156 05:17:07 -- nvmf/common.sh@117 -- # sync 00:21:30.156 05:17:07 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:30.156 05:17:07 -- nvmf/common.sh@120 -- # set +e 00:21:30.156 05:17:07 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:30.156 05:17:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:30.156 rmmod nvme_tcp 00:21:30.156 rmmod nvme_fabrics 00:21:30.156 rmmod nvme_keyring 00:21:30.156 05:17:07 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:30.156 05:17:07 -- nvmf/common.sh@124 -- # set -e 00:21:30.156 05:17:07 -- nvmf/common.sh@125 -- # return 0 00:21:30.156 05:17:07 -- nvmf/common.sh@478 -- # '[' -n 1917386 ']' 00:21:30.156 05:17:07 -- nvmf/common.sh@479 -- # killprocess 1917386 00:21:30.156 05:17:07 -- common/autotest_common.sh@936 -- # '[' -z 1917386 ']' 00:21:30.156 05:17:07 -- 
common/autotest_common.sh@940 -- # kill -0 1917386 00:21:30.156 05:17:07 -- common/autotest_common.sh@941 -- # uname 00:21:30.156 05:17:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:30.156 05:17:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1917386 00:21:30.156 05:17:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:30.156 05:17:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:30.156 05:17:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1917386' 00:21:30.156 killing process with pid 1917386 00:21:30.156 05:17:07 -- common/autotest_common.sh@955 -- # kill 1917386 00:21:30.156 [2024-04-24 05:17:07.424144] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:30.156 05:17:07 -- common/autotest_common.sh@960 -- # wait 1917386 00:21:30.415 05:17:07 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:30.415 05:17:07 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:30.415 05:17:07 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:30.415 05:17:07 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:30.415 05:17:07 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:30.415 05:17:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:30.415 05:17:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:30.415 05:17:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:32.953 05:17:09 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:32.953 05:17:09 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:32.953 00:21:32.953 real 0m17.100s 00:21:32.953 user 0m21.454s 00:21:32.953 sys 0m6.145s 00:21:32.953 05:17:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:32.953 05:17:09 -- common/autotest_common.sh@10 -- # set +x 00:21:32.953 
************************************ 00:21:32.953 END TEST nvmf_fips 00:21:32.953 ************************************ 00:21:32.953 05:17:09 -- nvmf/nvmf.sh@64 -- # '[' 1 -eq 1 ']' 00:21:32.953 05:17:09 -- nvmf/nvmf.sh@65 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:32.953 05:17:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:32.953 05:17:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:32.953 05:17:09 -- common/autotest_common.sh@10 -- # set +x 00:21:32.953 ************************************ 00:21:32.953 START TEST nvmf_fuzz 00:21:32.953 ************************************ 00:21:32.953 05:17:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:32.953 * Looking for test storage... 00:21:32.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:32.953 05:17:09 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:32.953 05:17:09 -- nvmf/common.sh@7 -- # uname -s 00:21:32.953 05:17:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:32.953 05:17:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:32.953 05:17:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:32.953 05:17:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:32.953 05:17:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:32.953 05:17:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:32.953 05:17:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:32.953 05:17:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:32.953 05:17:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:32.953 05:17:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:32.953 05:17:09 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:32.953 05:17:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:32.953 05:17:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:32.953 05:17:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:32.953 05:17:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:32.953 05:17:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:32.953 05:17:09 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:32.953 05:17:09 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:32.953 05:17:09 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:32.953 05:17:09 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:32.953 05:17:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.953 05:17:09 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.953 05:17:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.953 05:17:09 -- paths/export.sh@5 -- # export PATH 00:21:32.953 05:17:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.953 05:17:09 -- nvmf/common.sh@47 -- # : 0 00:21:32.953 05:17:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:32.953 05:17:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:32.953 05:17:09 -- nvmf/common.sh@25 -- # 
'[' 0 -eq 1 ']' 00:21:32.953 05:17:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:32.953 05:17:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:32.953 05:17:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:32.953 05:17:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:32.953 05:17:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:32.953 05:17:09 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:21:32.953 05:17:09 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:32.953 05:17:09 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:32.953 05:17:09 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:32.953 05:17:09 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:32.953 05:17:09 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:32.953 05:17:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:32.953 05:17:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:32.953 05:17:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:32.953 05:17:09 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:32.953 05:17:09 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:32.953 05:17:09 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:32.953 05:17:09 -- common/autotest_common.sh@10 -- # set +x 00:21:34.860 05:17:11 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:34.860 05:17:11 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:34.860 05:17:11 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:34.860 05:17:11 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:34.860 05:17:11 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:34.860 05:17:11 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:34.860 05:17:11 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:34.860 05:17:11 -- nvmf/common.sh@295 -- # net_devs=() 00:21:34.860 05:17:11 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:34.860 05:17:11 -- nvmf/common.sh@296 -- # e810=() 
00:21:34.860 05:17:11 -- nvmf/common.sh@296 -- # local -ga e810 00:21:34.860 05:17:11 -- nvmf/common.sh@297 -- # x722=() 00:21:34.860 05:17:11 -- nvmf/common.sh@297 -- # local -ga x722 00:21:34.860 05:17:11 -- nvmf/common.sh@298 -- # mlx=() 00:21:34.860 05:17:11 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:34.860 05:17:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:34.860 05:17:11 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:34.860 05:17:11 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:34.860 05:17:11 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:34.860 05:17:11 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:34.860 05:17:11 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:34.860 05:17:11 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:34.860 05:17:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:34.860 05:17:11 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:34.860 05:17:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:34.860 05:17:11 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:34.860 05:17:11 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:34.860 05:17:11 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:34.860 05:17:11 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:34.860 05:17:11 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:34.860 05:17:11 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:34.860 05:17:11 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:34.860 05:17:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:34.860 05:17:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:34.860 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:34.860 05:17:11 -- nvmf/common.sh@342 -- 
# [[ ice == unknown ]] 00:21:34.860 05:17:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:34.860 05:17:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:34.860 05:17:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:34.860 05:17:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:34.860 05:17:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:34.860 05:17:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:34.860 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:34.860 05:17:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:34.860 05:17:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:34.860 05:17:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:34.860 05:17:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:34.860 05:17:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:34.860 05:17:11 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:34.860 05:17:11 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:34.860 05:17:11 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:34.860 05:17:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:34.860 05:17:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:34.860 05:17:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:34.860 05:17:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:34.860 05:17:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:34.860 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:34.860 05:17:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:34.860 05:17:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:34.860 05:17:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:34.860 05:17:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:34.860 05:17:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:21:34.860 05:17:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:34.860 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:34.860 05:17:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:34.860 05:17:11 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:34.860 05:17:11 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:34.860 05:17:11 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:34.860 05:17:11 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:34.860 05:17:11 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:34.860 05:17:11 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:34.860 05:17:11 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:34.860 05:17:11 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:34.860 05:17:11 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:34.860 05:17:11 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:34.860 05:17:11 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:34.860 05:17:11 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:34.860 05:17:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:34.860 05:17:11 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:34.860 05:17:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:34.860 05:17:11 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:34.860 05:17:11 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:34.860 05:17:11 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:34.860 05:17:11 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:34.860 05:17:11 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:34.860 05:17:11 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:34.860 05:17:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:34.860 
05:17:11 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:34.860 05:17:11 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:34.860 05:17:11 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:34.860 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:34.860 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:21:34.860 00:21:34.860 --- 10.0.0.2 ping statistics --- 00:21:34.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:34.860 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:21:34.860 05:17:11 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:34.860 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:34.860 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:21:34.860 00:21:34.860 --- 10.0.0.1 ping statistics --- 00:21:34.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:34.860 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:21:34.860 05:17:11 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:34.860 05:17:11 -- nvmf/common.sh@411 -- # return 0 00:21:34.860 05:17:11 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:34.860 05:17:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:34.860 05:17:11 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:34.860 05:17:11 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:34.860 05:17:11 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:34.860 05:17:11 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:34.860 05:17:11 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:34.860 05:17:11 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1920851 00:21:34.860 05:17:11 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:34.861 05:17:11 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:34.861 05:17:11 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1920851 00:21:34.861 05:17:11 -- common/autotest_common.sh@817 -- # '[' -z 1920851 ']' 00:21:34.861 05:17:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:34.861 05:17:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:34.861 05:17:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:34.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:34.861 05:17:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:34.861 05:17:11 -- common/autotest_common.sh@10 -- # set +x 00:21:35.119 05:17:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:35.119 05:17:12 -- common/autotest_common.sh@850 -- # return 0 00:21:35.119 05:17:12 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:35.119 05:17:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.119 05:17:12 -- common/autotest_common.sh@10 -- # set +x 00:21:35.119 05:17:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.119 05:17:12 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:21:35.119 05:17:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.119 05:17:12 -- common/autotest_common.sh@10 -- # set +x 00:21:35.119 Malloc0 00:21:35.119 05:17:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.119 05:17:12 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:35.119 05:17:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.119 05:17:12 -- common/autotest_common.sh@10 -- # set +x 00:21:35.119 05:17:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.119 05:17:12 -- 
target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:35.119 05:17:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.119 05:17:12 -- common/autotest_common.sh@10 -- # set +x 00:21:35.119 05:17:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.119 05:17:12 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:35.119 05:17:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.119 05:17:12 -- common/autotest_common.sh@10 -- # set +x 00:21:35.119 05:17:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.119 05:17:12 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:21:35.119 05:17:12 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:22:07.210 Fuzzing completed. Shutting down the fuzz application 00:22:07.210 00:22:07.210 Dumping successful admin opcodes: 00:22:07.210 8, 9, 10, 24, 00:22:07.210 Dumping successful io opcodes: 00:22:07.210 0, 9, 00:22:07.210 NS: 0x200003aeff00 I/O qp, Total commands completed: 462091, total successful commands: 2673, random_seed: 450799360 00:22:07.210 NS: 0x200003aeff00 admin qp, Total commands completed: 56432, total successful commands: 448, random_seed: 3690901696 00:22:07.210 05:17:42 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:22:07.210 Fuzzing completed. 
Shutting down the fuzz application 00:22:07.210 00:22:07.210 Dumping successful admin opcodes: 00:22:07.210 24, 00:22:07.210 Dumping successful io opcodes: 00:22:07.210 00:22:07.210 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1892651130 00:22:07.210 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1892771418 00:22:07.210 05:17:44 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:07.210 05:17:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:07.210 05:17:44 -- common/autotest_common.sh@10 -- # set +x 00:22:07.210 05:17:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:07.210 05:17:44 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:22:07.210 05:17:44 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:22:07.210 05:17:44 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:07.210 05:17:44 -- nvmf/common.sh@117 -- # sync 00:22:07.210 05:17:44 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:07.210 05:17:44 -- nvmf/common.sh@120 -- # set +e 00:22:07.210 05:17:44 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:07.210 05:17:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:07.210 rmmod nvme_tcp 00:22:07.210 rmmod nvme_fabrics 00:22:07.210 rmmod nvme_keyring 00:22:07.210 05:17:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:07.210 05:17:44 -- nvmf/common.sh@124 -- # set -e 00:22:07.210 05:17:44 -- nvmf/common.sh@125 -- # return 0 00:22:07.210 05:17:44 -- nvmf/common.sh@478 -- # '[' -n 1920851 ']' 00:22:07.210 05:17:44 -- nvmf/common.sh@479 -- # killprocess 1920851 00:22:07.210 05:17:44 -- common/autotest_common.sh@936 -- # '[' -z 1920851 ']' 00:22:07.210 05:17:44 -- common/autotest_common.sh@940 -- # kill -0 1920851 00:22:07.210 05:17:44 -- common/autotest_common.sh@941 -- # uname 00:22:07.210 05:17:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 
00:22:07.210 05:17:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1920851 00:22:07.210 05:17:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:07.211 05:17:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:07.211 05:17:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1920851' 00:22:07.211 killing process with pid 1920851 00:22:07.211 05:17:44 -- common/autotest_common.sh@955 -- # kill 1920851 00:22:07.211 05:17:44 -- common/autotest_common.sh@960 -- # wait 1920851 00:22:07.489 05:17:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:07.489 05:17:44 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:07.489 05:17:44 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:07.489 05:17:44 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:07.489 05:17:44 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:07.489 05:17:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:07.489 05:17:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:07.489 05:17:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:09.394 05:17:46 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:09.394 05:17:46 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:22:09.394 00:22:09.394 real 0m36.742s 00:22:09.394 user 0m49.852s 00:22:09.394 sys 0m15.448s 00:22:09.394 05:17:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:09.394 05:17:46 -- common/autotest_common.sh@10 -- # set +x 00:22:09.394 ************************************ 00:22:09.394 END TEST nvmf_fuzz 00:22:09.394 ************************************ 00:22:09.394 05:17:46 -- nvmf/nvmf.sh@66 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh 
--transport=tcp 00:22:09.394 05:17:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:09.394 05:17:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:09.394 05:17:46 -- common/autotest_common.sh@10 -- # set +x 00:22:09.653 ************************************ 00:22:09.653 START TEST nvmf_multiconnection 00:22:09.653 ************************************ 00:22:09.653 05:17:46 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:22:09.653 * Looking for test storage... 00:22:09.653 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:09.653 05:17:46 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:09.653 05:17:46 -- nvmf/common.sh@7 -- # uname -s 00:22:09.653 05:17:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:09.653 05:17:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:09.653 05:17:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:09.653 05:17:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:09.653 05:17:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:09.653 05:17:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:09.653 05:17:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:09.653 05:17:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:09.653 05:17:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:09.653 05:17:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:09.653 05:17:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:09.653 05:17:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:09.653 05:17:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:09.653 05:17:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:22:09.653 05:17:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:09.653 05:17:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:09.653 05:17:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:09.653 05:17:46 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:09.653 05:17:46 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:09.653 05:17:46 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:09.653 05:17:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.653 05:17:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.653 05:17:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.653 05:17:46 -- paths/export.sh@5 -- # export PATH 00:22:09.653 05:17:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.653 05:17:46 -- nvmf/common.sh@47 -- # : 0 00:22:09.653 05:17:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:09.653 05:17:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:09.653 05:17:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:09.653 05:17:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:09.653 05:17:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:09.653 05:17:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:09.653 05:17:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:09.653 05:17:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:09.653 05:17:46 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:09.653 05:17:46 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:09.653 05:17:46 -- 
target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:22:09.653 05:17:46 -- target/multiconnection.sh@16 -- # nvmftestinit 00:22:09.653 05:17:46 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:09.653 05:17:46 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:09.653 05:17:46 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:09.653 05:17:46 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:09.653 05:17:46 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:09.653 05:17:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.653 05:17:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:09.653 05:17:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:09.653 05:17:46 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:09.653 05:17:46 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:09.653 05:17:46 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:09.653 05:17:46 -- common/autotest_common.sh@10 -- # set +x 00:22:11.557 05:17:48 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:11.557 05:17:48 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:11.557 05:17:48 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:11.557 05:17:48 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:11.557 05:17:48 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:11.557 05:17:48 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:11.557 05:17:48 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:11.557 05:17:48 -- nvmf/common.sh@295 -- # net_devs=() 00:22:11.557 05:17:48 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:11.557 05:17:48 -- nvmf/common.sh@296 -- # e810=() 00:22:11.557 05:17:48 -- nvmf/common.sh@296 -- # local -ga e810 00:22:11.557 05:17:48 -- nvmf/common.sh@297 -- # x722=() 00:22:11.557 05:17:48 -- nvmf/common.sh@297 -- # local -ga x722 00:22:11.557 05:17:48 -- nvmf/common.sh@298 -- # mlx=() 00:22:11.557 05:17:48 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:11.557 
05:17:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:11.557 05:17:48 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:11.557 05:17:48 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:11.557 05:17:48 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:11.557 05:17:48 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:11.557 05:17:48 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:11.557 05:17:48 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:11.557 05:17:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:11.557 05:17:48 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:11.557 05:17:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:11.557 05:17:48 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:11.557 05:17:48 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:11.557 05:17:48 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:11.557 05:17:48 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:11.557 05:17:48 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:11.557 05:17:48 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:11.557 05:17:48 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:11.557 05:17:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:11.557 05:17:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:11.557 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:11.557 05:17:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:11.557 05:17:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:11.557 05:17:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.557 05:17:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.557 05:17:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:22:11.557 05:17:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:11.557 05:17:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:11.557 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:11.557 05:17:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:11.557 05:17:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:11.557 05:17:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.557 05:17:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.557 05:17:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:11.557 05:17:48 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:11.557 05:17:48 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:11.557 05:17:48 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:11.557 05:17:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:11.557 05:17:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.557 05:17:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:11.557 05:17:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.557 05:17:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:11.557 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:11.557 05:17:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.557 05:17:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:11.557 05:17:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.557 05:17:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:11.557 05:17:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.557 05:17:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:11.557 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:11.557 05:17:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.558 05:17:48 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:11.558 
05:17:48 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:11.558 05:17:48 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:11.558 05:17:48 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:11.558 05:17:48 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:11.558 05:17:48 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:11.558 05:17:48 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:11.558 05:17:48 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:11.558 05:17:48 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:11.558 05:17:48 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:11.558 05:17:48 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:11.558 05:17:48 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:11.558 05:17:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:11.558 05:17:48 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:11.558 05:17:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:11.558 05:17:48 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:11.558 05:17:48 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:11.558 05:17:48 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:11.558 05:17:48 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:11.558 05:17:48 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:11.558 05:17:48 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:11.558 05:17:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:11.558 05:17:48 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:11.558 05:17:48 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:11.558 05:17:48 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:11.558 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:11.558 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:22:11.558 00:22:11.558 --- 10.0.0.2 ping statistics --- 00:22:11.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.558 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:22:11.558 05:17:48 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:11.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:11.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:22:11.816 00:22:11.816 --- 10.0.0.1 ping statistics --- 00:22:11.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.816 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:22:11.816 05:17:48 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:11.816 05:17:48 -- nvmf/common.sh@411 -- # return 0 00:22:11.816 05:17:48 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:11.816 05:17:48 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:11.816 05:17:48 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:11.816 05:17:48 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:11.816 05:17:48 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:11.816 05:17:48 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:11.816 05:17:48 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:11.816 05:17:48 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:22:11.816 05:17:48 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:11.816 05:17:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:11.816 05:17:48 -- common/autotest_common.sh@10 -- # set +x 00:22:11.816 05:17:48 -- nvmf/common.sh@470 -- # nvmfpid=1926528 00:22:11.816 05:17:48 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:11.816 05:17:48 -- nvmf/common.sh@471 -- # waitforlisten 1926528 00:22:11.816 05:17:48 -- 
common/autotest_common.sh@817 -- # '[' -z 1926528 ']' 00:22:11.816 05:17:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:11.816 05:17:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:11.816 05:17:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:11.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:11.816 05:17:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:11.816 05:17:48 -- common/autotest_common.sh@10 -- # set +x 00:22:11.816 [2024-04-24 05:17:48.902956] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:22:11.816 [2024-04-24 05:17:48.903040] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:11.816 EAL: No free 2048 kB hugepages reported on node 1 00:22:11.816 [2024-04-24 05:17:48.946588] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:11.816 [2024-04-24 05:17:48.977275] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:11.816 [2024-04-24 05:17:49.067663] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:11.816 [2024-04-24 05:17:49.067750] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:11.816 [2024-04-24 05:17:49.067767] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:11.816 [2024-04-24 05:17:49.067781] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:11.816 [2024-04-24 05:17:49.067793] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:11.816 [2024-04-24 05:17:49.067958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:11.816 [2024-04-24 05:17:49.068034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:11.816 [2024-04-24 05:17:49.068124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:11.816 [2024-04-24 05:17:49.068125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:12.074 05:17:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:12.074 05:17:49 -- common/autotest_common.sh@850 -- # return 0 00:22:12.074 05:17:49 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:12.074 05:17:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:12.074 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:22:12.074 05:17:49 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:12.074 05:17:49 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:12.074 05:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.074 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:22:12.074 [2024-04-24 05:17:49.205178] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:12.074 05:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.074 05:17:49 -- target/multiconnection.sh@21 -- # seq 1 11 00:22:12.074 05:17:49 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:12.074 05:17:49 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:12.074 05:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.074 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:22:12.074 Malloc1 00:22:12.074 05:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.074 05:17:49 -- 
target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:22:12.074 05:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.074 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:22:12.074 05:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.074 05:17:49 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:12.074 05:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.074 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:22:12.074 05:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.074 05:17:49 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:12.074 05:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.074 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:22:12.074 [2024-04-24 05:17:49.260138] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:12.074 05:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.074 05:17:49 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:12.074 05:17:49 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:22:12.074 05:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.074 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:22:12.074 Malloc2 00:22:12.074 05:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.074 05:17:49 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:22:12.074 05:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.074 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:22:12.074 05:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.075 05:17:49 -- target/multiconnection.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:22:12.075 05:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.075 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:22:12.075 05:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.075 05:17:49 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:12.075 05:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.075 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:22:12.075 05:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.075 05:17:49 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:12.075 05:17:49 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:22:12.075 05:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.075 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:22:12.075 Malloc3 00:22:12.075 05:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.075 05:17:49 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:22:12.075 05:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.075 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:22:12.075 05:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.075 05:17:49 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:22:12.075 05:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.075 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:22:12.333 05:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.333 05:17:49 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:22:12.333 05:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.333 05:17:49 -- 
common/autotest_common.sh@10 -- # set +x 00:22:12.333 05:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.333 05:17:49 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:12.333 05:17:49 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:22:12.333 05:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.333 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:22:12.333 Malloc4 00:22:12.333 05:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.333 05:17:49 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:22:12.333 05:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.333 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:22:12.333 05:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.333 05:17:49 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:22:12.333 05:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.333 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:22:12.333 05:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.333 05:17:49 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:22:12.333 05:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.333 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:22:12.333 05:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.333 05:17:49 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:12.333 05:17:49 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:22:12.333 05:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.333 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:22:12.333 Malloc5 00:22:12.333 05:17:49 -- common/autotest_common.sh@577 -- 
# [[ 0 == 0 ]] 00:22:12.333 05:17:49 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:22:12.333 05:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.333 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:22:12.333 05:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.333 05:17:49 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:22:12.333 05:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.334 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:22:12.334 05:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.334 05:17:49 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:22:12.334 05:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.334 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:22:12.334 05:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.334 05:17:49 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:12.334 05:17:49 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:22:12.334 05:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.334 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:22:12.334 Malloc6 00:22:12.334 05:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.334 05:17:49 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:22:12.334 05:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.334 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:22:12.334 05:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.334 05:17:49 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:22:12.334 05:17:49 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.334 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:22:12.334 05:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.334 05:17:49 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:22:12.334 05:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.334 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:22:12.334 05:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.334 05:17:49 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:12.334 05:17:49 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:22:12.334 05:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.334 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:22:12.334 Malloc7 00:22:12.334 05:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.334 05:17:49 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:22:12.334 05:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.334 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:22:12.334 05:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.334 05:17:49 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:22:12.334 05:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.334 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:22:12.334 05:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.334 05:17:49 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:22:12.334 05:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.334 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:22:12.334 05:17:49 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.334 05:17:49 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:12.334 05:17:49 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:22:12.334 05:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.334 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:22:12.334 Malloc8 00:22:12.334 05:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.334 05:17:49 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:22:12.334 05:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.334 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:22:12.334 05:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.334 05:17:49 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:22:12.334 05:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.334 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:22:12.334 05:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.334 05:17:49 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:22:12.334 05:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.334 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:22:12.594 05:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.594 05:17:49 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:12.594 05:17:49 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:22:12.594 05:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.594 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:22:12.594 Malloc9 00:22:12.594 05:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.594 05:17:49 -- 
target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:22:12.594 05:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.594 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:22:12.594 05:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.594 05:17:49 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:22:12.594 05:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.594 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:22:12.594 05:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.594 05:17:49 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:22:12.594 05:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.594 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:22:12.594 05:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.594 05:17:49 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:12.594 05:17:49 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:22:12.594 05:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.594 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:22:12.594 Malloc10 00:22:12.594 05:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.594 05:17:49 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:22:12.594 05:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.594 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:22:12.594 05:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.594 05:17:49 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:22:12.594 05:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:22:12.594 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:22:12.594 05:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.594 05:17:49 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:22:12.594 05:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.594 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:22:12.594 05:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.594 05:17:49 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:12.594 05:17:49 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:22:12.594 05:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.594 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:22:12.594 Malloc11 00:22:12.594 05:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.594 05:17:49 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:22:12.594 05:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.594 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:22:12.594 05:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.594 05:17:49 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:22:12.594 05:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.594 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:22:12.594 05:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.594 05:17:49 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:22:12.594 05:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.594 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:22:12.594 05:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.594 
05:17:49 -- target/multiconnection.sh@28 -- # seq 1 11 00:22:12.594 05:17:49 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:12.594 05:17:49 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:22:13.163 05:17:50 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:22:13.163 05:17:50 -- common/autotest_common.sh@1184 -- # local i=0 00:22:13.163 05:17:50 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:22:13.163 05:17:50 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:22:13.163 05:17:50 -- common/autotest_common.sh@1191 -- # sleep 2 00:22:15.695 05:17:52 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:22:15.695 05:17:52 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:22:15.695 05:17:52 -- common/autotest_common.sh@1193 -- # grep -c SPDK1 00:22:15.695 05:17:52 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:22:15.695 05:17:52 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:22:15.695 05:17:52 -- common/autotest_common.sh@1194 -- # return 0 00:22:15.695 05:17:52 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:15.695 05:17:52 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:22:15.955 05:17:52 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:22:15.955 05:17:52 -- common/autotest_common.sh@1184 -- # local i=0 00:22:15.955 05:17:52 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:22:15.955 05:17:52 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:22:15.955 05:17:52 -- 
common/autotest_common.sh@1191 -- # sleep 2 00:22:17.862 05:17:54 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:22:17.862 05:17:54 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:22:17.862 05:17:54 -- common/autotest_common.sh@1193 -- # grep -c SPDK2 00:22:17.862 05:17:55 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:22:17.862 05:17:55 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:22:17.862 05:17:55 -- common/autotest_common.sh@1194 -- # return 0 00:22:17.862 05:17:55 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:17.862 05:17:55 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:22:18.796 05:17:55 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:22:18.796 05:17:55 -- common/autotest_common.sh@1184 -- # local i=0 00:22:18.796 05:17:55 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:22:18.796 05:17:55 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:22:18.796 05:17:55 -- common/autotest_common.sh@1191 -- # sleep 2 00:22:20.703 05:17:57 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:22:20.703 05:17:57 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:22:20.703 05:17:57 -- common/autotest_common.sh@1193 -- # grep -c SPDK3 00:22:20.703 05:17:57 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:22:20.703 05:17:57 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:22:20.703 05:17:57 -- common/autotest_common.sh@1194 -- # return 0 00:22:20.703 05:17:57 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:20.703 05:17:57 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:22:21.271 05:17:58 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:22:21.271 05:17:58 -- common/autotest_common.sh@1184 -- # local i=0 00:22:21.271 05:17:58 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:22:21.271 05:17:58 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:22:21.271 05:17:58 -- common/autotest_common.sh@1191 -- # sleep 2 00:22:23.236 05:18:00 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:22:23.236 05:18:00 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:22:23.236 05:18:00 -- common/autotest_common.sh@1193 -- # grep -c SPDK4 00:22:23.494 05:18:00 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:22:23.494 05:18:00 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:22:23.494 05:18:00 -- common/autotest_common.sh@1194 -- # return 0 00:22:23.494 05:18:00 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:23.494 05:18:00 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:22:24.060 05:18:01 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:22:24.060 05:18:01 -- common/autotest_common.sh@1184 -- # local i=0 00:22:24.060 05:18:01 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:22:24.060 05:18:01 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:22:24.060 05:18:01 -- common/autotest_common.sh@1191 -- # sleep 2 00:22:25.960 05:18:03 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:22:25.960 05:18:03 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:22:25.960 05:18:03 -- common/autotest_common.sh@1193 -- # grep -c SPDK5 00:22:26.219 05:18:03 -- 
common/autotest_common.sh@1193 -- # nvme_devices=1 00:22:26.219 05:18:03 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:22:26.219 05:18:03 -- common/autotest_common.sh@1194 -- # return 0 00:22:26.219 05:18:03 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:26.219 05:18:03 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:22:26.788 05:18:03 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:22:26.789 05:18:03 -- common/autotest_common.sh@1184 -- # local i=0 00:22:26.789 05:18:03 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:22:26.789 05:18:03 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:22:26.789 05:18:03 -- common/autotest_common.sh@1191 -- # sleep 2 00:22:28.694 05:18:05 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:22:28.694 05:18:05 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:22:28.694 05:18:05 -- common/autotest_common.sh@1193 -- # grep -c SPDK6 00:22:28.694 05:18:05 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:22:28.694 05:18:05 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:22:28.694 05:18:05 -- common/autotest_common.sh@1194 -- # return 0 00:22:28.694 05:18:05 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:28.694 05:18:05 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:22:29.629 05:18:06 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:22:29.629 05:18:06 -- common/autotest_common.sh@1184 -- # local i=0 00:22:29.629 05:18:06 -- common/autotest_common.sh@1185 
-- # local nvme_device_counter=1 nvme_devices=0 00:22:29.629 05:18:06 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:22:29.629 05:18:06 -- common/autotest_common.sh@1191 -- # sleep 2 00:22:31.530 05:18:08 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:22:31.530 05:18:08 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:22:31.530 05:18:08 -- common/autotest_common.sh@1193 -- # grep -c SPDK7 00:22:31.530 05:18:08 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:22:31.530 05:18:08 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:22:31.530 05:18:08 -- common/autotest_common.sh@1194 -- # return 0 00:22:31.530 05:18:08 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:31.530 05:18:08 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:22:32.465 05:18:09 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:22:32.465 05:18:09 -- common/autotest_common.sh@1184 -- # local i=0 00:22:32.465 05:18:09 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:22:32.465 05:18:09 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:22:32.465 05:18:09 -- common/autotest_common.sh@1191 -- # sleep 2 00:22:35.001 05:18:11 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:22:35.001 05:18:11 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:22:35.001 05:18:11 -- common/autotest_common.sh@1193 -- # grep -c SPDK8 00:22:35.001 05:18:11 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:22:35.001 05:18:11 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:22:35.001 05:18:11 -- common/autotest_common.sh@1194 -- # return 0 00:22:35.001 05:18:11 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 
00:22:35.001 05:18:11 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:22:35.571 05:18:12 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:22:35.571 05:18:12 -- common/autotest_common.sh@1184 -- # local i=0 00:22:35.571 05:18:12 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:22:35.571 05:18:12 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:22:35.571 05:18:12 -- common/autotest_common.sh@1191 -- # sleep 2 00:22:37.472 05:18:14 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:22:37.472 05:18:14 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:22:37.472 05:18:14 -- common/autotest_common.sh@1193 -- # grep -c SPDK9 00:22:37.472 05:18:14 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:22:37.472 05:18:14 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:22:37.472 05:18:14 -- common/autotest_common.sh@1194 -- # return 0 00:22:37.472 05:18:14 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:37.472 05:18:14 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:22:38.406 05:18:15 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:22:38.407 05:18:15 -- common/autotest_common.sh@1184 -- # local i=0 00:22:38.407 05:18:15 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:22:38.407 05:18:15 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:22:38.407 05:18:15 -- common/autotest_common.sh@1191 -- # sleep 2 00:22:40.314 05:18:17 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:22:40.314 05:18:17 -- 
common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:22:40.314 05:18:17 -- common/autotest_common.sh@1193 -- # grep -c SPDK10 00:22:40.314 05:18:17 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:22:40.314 05:18:17 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:22:40.314 05:18:17 -- common/autotest_common.sh@1194 -- # return 0 00:22:40.314 05:18:17 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:40.314 05:18:17 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:22:41.254 05:18:18 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:22:41.254 05:18:18 -- common/autotest_common.sh@1184 -- # local i=0 00:22:41.254 05:18:18 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:22:41.254 05:18:18 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:22:41.254 05:18:18 -- common/autotest_common.sh@1191 -- # sleep 2 00:22:43.159 05:18:20 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:22:43.159 05:18:20 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:22:43.159 05:18:20 -- common/autotest_common.sh@1193 -- # grep -c SPDK11 00:22:43.159 05:18:20 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:22:43.159 05:18:20 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:22:43.159 05:18:20 -- common/autotest_common.sh@1194 -- # return 0 00:22:43.159 05:18:20 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:22:43.159 [global] 00:22:43.159 thread=1 00:22:43.159 invalidate=1 00:22:43.159 rw=read 00:22:43.159 time_based=1 00:22:43.159 runtime=10 00:22:43.159 ioengine=libaio 00:22:43.159 direct=1 00:22:43.159 bs=262144 
00:22:43.159 iodepth=64 00:22:43.159 norandommap=1 00:22:43.159 numjobs=1 00:22:43.159 00:22:43.159 [job0] 00:22:43.159 filename=/dev/nvme0n1 00:22:43.159 [job1] 00:22:43.159 filename=/dev/nvme10n1 00:22:43.159 [job2] 00:22:43.159 filename=/dev/nvme1n1 00:22:43.159 [job3] 00:22:43.159 filename=/dev/nvme2n1 00:22:43.159 [job4] 00:22:43.159 filename=/dev/nvme3n1 00:22:43.159 [job5] 00:22:43.159 filename=/dev/nvme4n1 00:22:43.159 [job6] 00:22:43.159 filename=/dev/nvme5n1 00:22:43.159 [job7] 00:22:43.159 filename=/dev/nvme6n1 00:22:43.417 [job8] 00:22:43.417 filename=/dev/nvme7n1 00:22:43.417 [job9] 00:22:43.417 filename=/dev/nvme8n1 00:22:43.417 [job10] 00:22:43.417 filename=/dev/nvme9n1 00:22:43.417 Could not set queue depth (nvme0n1) 00:22:43.417 Could not set queue depth (nvme10n1) 00:22:43.417 Could not set queue depth (nvme1n1) 00:22:43.417 Could not set queue depth (nvme2n1) 00:22:43.417 Could not set queue depth (nvme3n1) 00:22:43.417 Could not set queue depth (nvme4n1) 00:22:43.417 Could not set queue depth (nvme5n1) 00:22:43.417 Could not set queue depth (nvme6n1) 00:22:43.417 Could not set queue depth (nvme7n1) 00:22:43.417 Could not set queue depth (nvme8n1) 00:22:43.417 Could not set queue depth (nvme9n1) 00:22:43.676 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:43.676 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:43.676 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:43.676 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:43.676 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:43.676 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:43.676 job6: (g=0): 
rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:43.676 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:43.676 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:43.676 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:43.676 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:43.676 fio-3.35 00:22:43.676 Starting 11 threads 00:22:55.888 00:22:55.888 job0: (groupid=0, jobs=1): err= 0: pid=1931409: Wed Apr 24 05:18:31 2024 00:22:55.888 read: IOPS=724, BW=181MiB/s (190MB/s)(1821MiB/10053msec) 00:22:55.888 slat (usec): min=8, max=96451, avg=984.11, stdev=3541.53 00:22:55.888 clat (usec): min=963, max=216986, avg=87291.28, stdev=34355.05 00:22:55.888 lat (usec): min=988, max=217013, avg=88275.40, stdev=34579.71 00:22:55.888 clat percentiles (msec): 00:22:55.888 | 1.00th=[ 7], 5.00th=[ 31], 10.00th=[ 43], 20.00th=[ 63], 00:22:55.888 | 30.00th=[ 73], 40.00th=[ 81], 50.00th=[ 88], 60.00th=[ 95], 00:22:55.888 | 70.00th=[ 104], 80.00th=[ 113], 90.00th=[ 125], 95.00th=[ 142], 00:22:55.888 | 99.00th=[ 192], 99.50th=[ 209], 99.90th=[ 218], 99.95th=[ 218], 00:22:55.888 | 99.99th=[ 218] 00:22:55.888 bw ( KiB/s): min=137728, max=248832, per=9.97%, avg=184811.00, stdev=37032.46, samples=20 00:22:55.888 iops : min= 538, max= 972, avg=721.90, stdev=144.65, samples=20 00:22:55.888 lat (usec) : 1000=0.01% 00:22:55.888 lat (msec) : 2=0.03%, 4=0.29%, 10=1.52%, 20=1.28%, 50=10.34% 00:22:55.888 lat (msec) : 100=53.34%, 250=33.19% 00:22:55.888 cpu : usr=0.32%, sys=2.41%, ctx=1690, majf=0, minf=4097 00:22:55.888 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:22:55.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:22:55.888 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:55.888 issued rwts: total=7283,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:55.888 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:55.888 job1: (groupid=0, jobs=1): err= 0: pid=1931410: Wed Apr 24 05:18:31 2024 00:22:55.888 read: IOPS=696, BW=174MiB/s (183MB/s)(1749MiB/10049msec) 00:22:55.888 slat (usec): min=9, max=184134, avg=710.56, stdev=3745.00 00:22:55.888 clat (usec): min=849, max=296124, avg=91146.73, stdev=46498.61 00:22:55.888 lat (usec): min=866, max=356917, avg=91857.29, stdev=46844.46 00:22:55.888 clat percentiles (msec): 00:22:55.888 | 1.00th=[ 9], 5.00th=[ 24], 10.00th=[ 40], 20.00th=[ 53], 00:22:55.888 | 30.00th=[ 63], 40.00th=[ 73], 50.00th=[ 85], 60.00th=[ 97], 00:22:55.888 | 70.00th=[ 111], 80.00th=[ 131], 90.00th=[ 159], 95.00th=[ 178], 00:22:55.888 | 99.00th=[ 201], 99.50th=[ 245], 99.90th=[ 292], 99.95th=[ 292], 00:22:55.888 | 99.99th=[ 296] 00:22:55.888 bw ( KiB/s): min=96768, max=302080, per=9.57%, avg=177466.10, stdev=51740.17, samples=20 00:22:55.888 iops : min= 378, max= 1180, avg=693.20, stdev=202.11, samples=20 00:22:55.888 lat (usec) : 1000=0.04% 00:22:55.888 lat (msec) : 2=0.04%, 4=0.33%, 10=0.90%, 20=2.43%, 50=14.62% 00:22:55.889 lat (msec) : 100=44.31%, 250=36.92%, 500=0.40% 00:22:55.889 cpu : usr=0.31%, sys=2.25%, ctx=1773, majf=0, minf=4097 00:22:55.889 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:22:55.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:55.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:55.889 issued rwts: total=6996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:55.889 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:55.889 job2: (groupid=0, jobs=1): err= 0: pid=1931411: Wed Apr 24 05:18:31 2024 00:22:55.889 read: IOPS=679, BW=170MiB/s (178MB/s)(1713MiB/10079msec) 00:22:55.889 slat (usec): min=8, 
max=98931, avg=870.85, stdev=3696.53 00:22:55.889 clat (usec): min=1999, max=221258, avg=93177.89, stdev=44743.61 00:22:55.889 lat (msec): min=2, max=283, avg=94.05, stdev=45.10 00:22:55.889 clat percentiles (msec): 00:22:55.889 | 1.00th=[ 4], 5.00th=[ 9], 10.00th=[ 24], 20.00th=[ 58], 00:22:55.889 | 30.00th=[ 72], 40.00th=[ 88], 50.00th=[ 99], 60.00th=[ 105], 00:22:55.889 | 70.00th=[ 113], 80.00th=[ 131], 90.00th=[ 150], 95.00th=[ 165], 00:22:55.889 | 99.00th=[ 192], 99.50th=[ 201], 99.90th=[ 207], 99.95th=[ 211], 00:22:55.889 | 99.99th=[ 222] 00:22:55.889 bw ( KiB/s): min=124416, max=264192, per=9.37%, avg=173810.05, stdev=41754.94, samples=20 00:22:55.889 iops : min= 486, max= 1032, avg=678.90, stdev=163.14, samples=20 00:22:55.889 lat (msec) : 2=0.01%, 4=1.28%, 10=4.19%, 20=3.33%, 50=7.43% 00:22:55.889 lat (msec) : 100=36.63%, 250=47.13% 00:22:55.889 cpu : usr=0.35%, sys=2.15%, ctx=1794, majf=0, minf=4097 00:22:55.889 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:22:55.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:55.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:55.889 issued rwts: total=6853,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:55.889 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:55.889 job3: (groupid=0, jobs=1): err= 0: pid=1931413: Wed Apr 24 05:18:31 2024 00:22:55.889 read: IOPS=647, BW=162MiB/s (170MB/s)(1629MiB/10063msec) 00:22:55.889 slat (usec): min=8, max=124251, avg=1204.60, stdev=4379.14 00:22:55.889 clat (usec): min=1737, max=254026, avg=97575.31, stdev=39496.09 00:22:55.889 lat (usec): min=1754, max=254257, avg=98779.90, stdev=40078.92 00:22:55.889 clat percentiles (msec): 00:22:55.889 | 1.00th=[ 10], 5.00th=[ 31], 10.00th=[ 45], 20.00th=[ 66], 00:22:55.889 | 30.00th=[ 80], 40.00th=[ 89], 50.00th=[ 100], 60.00th=[ 107], 00:22:55.889 | 70.00th=[ 116], 80.00th=[ 127], 90.00th=[ 148], 95.00th=[ 167], 00:22:55.889 | 99.00th=[ 
194], 99.50th=[ 209], 99.90th=[ 241], 99.95th=[ 243], 00:22:55.889 | 99.99th=[ 255] 00:22:55.889 bw ( KiB/s): min=108544, max=248320, per=8.91%, avg=165153.05, stdev=37643.47, samples=20 00:22:55.889 iops : min= 424, max= 970, avg=645.10, stdev=147.03, samples=20 00:22:55.889 lat (msec) : 2=0.06%, 4=0.35%, 10=0.75%, 20=1.90%, 50=9.32% 00:22:55.889 lat (msec) : 100=39.11%, 250=48.49%, 500=0.02% 00:22:55.889 cpu : usr=0.30%, sys=2.29%, ctx=1564, majf=0, minf=3722 00:22:55.889 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:22:55.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:55.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:55.889 issued rwts: total=6515,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:55.889 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:55.889 job4: (groupid=0, jobs=1): err= 0: pid=1931414: Wed Apr 24 05:18:31 2024 00:22:55.889 read: IOPS=663, BW=166MiB/s (174MB/s)(1672MiB/10072msec) 00:22:55.889 slat (usec): min=9, max=62048, avg=1128.10, stdev=4060.02 00:22:55.889 clat (usec): min=837, max=240497, avg=95197.08, stdev=50256.42 00:22:55.889 lat (usec): min=891, max=288712, avg=96325.19, stdev=50866.13 00:22:55.889 clat percentiles (msec): 00:22:55.889 | 1.00th=[ 7], 5.00th=[ 31], 10.00th=[ 35], 20.00th=[ 47], 00:22:55.889 | 30.00th=[ 57], 40.00th=[ 70], 50.00th=[ 90], 60.00th=[ 110], 00:22:55.889 | 70.00th=[ 127], 80.00th=[ 144], 90.00th=[ 167], 95.00th=[ 178], 00:22:55.889 | 99.00th=[ 215], 99.50th=[ 228], 99.90th=[ 234], 99.95th=[ 236], 00:22:55.889 | 99.99th=[ 241] 00:22:55.889 bw ( KiB/s): min=91136, max=385536, per=9.15%, avg=169554.40, stdev=80622.34, samples=20 00:22:55.889 iops : min= 356, max= 1506, avg=662.25, stdev=314.91, samples=20 00:22:55.889 lat (usec) : 1000=0.03% 00:22:55.889 lat (msec) : 2=0.07%, 4=0.27%, 10=1.18%, 20=1.03%, 50=20.71% 00:22:55.889 lat (msec) : 100=31.34%, 250=45.36% 00:22:55.889 cpu : usr=0.36%, 
sys=2.30%, ctx=1539, majf=0, minf=4097 00:22:55.889 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:22:55.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:55.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:55.889 issued rwts: total=6687,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:55.889 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:55.889 job5: (groupid=0, jobs=1): err= 0: pid=1931415: Wed Apr 24 05:18:31 2024 00:22:55.889 read: IOPS=583, BW=146MiB/s (153MB/s)(1470MiB/10081msec) 00:22:55.889 slat (usec): min=11, max=91044, avg=1473.08, stdev=4766.35 00:22:55.889 clat (msec): min=4, max=265, avg=108.15, stdev=44.41 00:22:55.889 lat (msec): min=4, max=265, avg=109.62, stdev=45.21 00:22:55.889 clat percentiles (msec): 00:22:55.889 | 1.00th=[ 16], 5.00th=[ 30], 10.00th=[ 53], 20.00th=[ 73], 00:22:55.889 | 30.00th=[ 87], 40.00th=[ 95], 50.00th=[ 104], 60.00th=[ 114], 00:22:55.889 | 70.00th=[ 130], 80.00th=[ 148], 90.00th=[ 176], 95.00th=[ 186], 00:22:55.889 | 99.00th=[ 207], 99.50th=[ 218], 99.90th=[ 236], 99.95th=[ 236], 00:22:55.889 | 99.99th=[ 266] 00:22:55.889 bw ( KiB/s): min=84992, max=263168, per=8.03%, avg=148926.75, stdev=50229.95, samples=20 00:22:55.889 iops : min= 332, max= 1028, avg=581.70, stdev=196.22, samples=20 00:22:55.889 lat (msec) : 10=0.24%, 20=1.56%, 50=7.62%, 100=36.03%, 250=54.51% 00:22:55.889 lat (msec) : 500=0.03% 00:22:55.889 cpu : usr=0.41%, sys=2.07%, ctx=1364, majf=0, minf=4097 00:22:55.889 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:22:55.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:55.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:55.889 issued rwts: total=5881,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:55.889 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:55.889 job6: (groupid=0, jobs=1): err= 0: 
pid=1931416: Wed Apr 24 05:18:31 2024 00:22:55.889 read: IOPS=664, BW=166MiB/s (174MB/s)(1673MiB/10076msec) 00:22:55.889 slat (usec): min=8, max=126967, avg=807.73, stdev=4115.44 00:22:55.889 clat (msec): min=2, max=241, avg=95.48, stdev=49.51 00:22:55.889 lat (msec): min=2, max=269, avg=96.28, stdev=50.05 00:22:55.889 clat percentiles (msec): 00:22:55.889 | 1.00th=[ 5], 5.00th=[ 18], 10.00th=[ 32], 20.00th=[ 51], 00:22:55.889 | 30.00th=[ 68], 40.00th=[ 80], 50.00th=[ 91], 60.00th=[ 104], 00:22:55.889 | 70.00th=[ 118], 80.00th=[ 140], 90.00th=[ 169], 95.00th=[ 186], 00:22:55.889 | 99.00th=[ 215], 99.50th=[ 220], 99.90th=[ 226], 99.95th=[ 230], 00:22:55.889 | 99.99th=[ 243] 00:22:55.889 bw ( KiB/s): min=79360, max=270848, per=9.15%, avg=169683.50, stdev=42837.64, samples=20 00:22:55.889 iops : min= 310, max= 1058, avg=662.80, stdev=167.32, samples=20 00:22:55.889 lat (msec) : 4=0.39%, 10=2.05%, 20=3.30%, 50=13.82%, 100=37.58% 00:22:55.889 lat (msec) : 250=42.86% 00:22:55.889 cpu : usr=0.37%, sys=1.93%, ctx=1821, majf=0, minf=4097 00:22:55.889 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:22:55.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:55.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:55.889 issued rwts: total=6692,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:55.889 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:55.889 job7: (groupid=0, jobs=1): err= 0: pid=1931417: Wed Apr 24 05:18:31 2024 00:22:55.889 read: IOPS=599, BW=150MiB/s (157MB/s)(1507MiB/10058msec) 00:22:55.889 slat (usec): min=9, max=79752, avg=1091.70, stdev=4059.69 00:22:55.889 clat (usec): min=1250, max=239286, avg=105634.12, stdev=44658.37 00:22:55.889 lat (usec): min=1321, max=242721, avg=106725.82, stdev=45270.03 00:22:55.889 clat percentiles (msec): 00:22:55.889 | 1.00th=[ 10], 5.00th=[ 32], 10.00th=[ 49], 20.00th=[ 65], 00:22:55.889 | 30.00th=[ 80], 40.00th=[ 96], 50.00th=[ 
107], 60.00th=[ 117], 00:22:55.889 | 70.00th=[ 129], 80.00th=[ 142], 90.00th=[ 167], 95.00th=[ 182], 00:22:55.889 | 99.00th=[ 207], 99.50th=[ 218], 99.90th=[ 236], 99.95th=[ 236], 00:22:55.889 | 99.99th=[ 241] 00:22:55.889 bw ( KiB/s): min=80384, max=272384, per=8.23%, avg=152651.80, stdev=50970.34, samples=20 00:22:55.889 iops : min= 314, max= 1064, avg=596.25, stdev=199.00, samples=20 00:22:55.889 lat (msec) : 2=0.15%, 4=0.35%, 10=0.58%, 20=1.76%, 50=7.98% 00:22:55.889 lat (msec) : 100=32.29%, 250=56.89% 00:22:55.890 cpu : usr=0.38%, sys=1.89%, ctx=1556, majf=0, minf=4097 00:22:55.890 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:22:55.890 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:55.890 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:55.890 issued rwts: total=6027,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:55.890 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:55.890 job8: (groupid=0, jobs=1): err= 0: pid=1931420: Wed Apr 24 05:18:31 2024 00:22:55.890 read: IOPS=602, BW=151MiB/s (158MB/s)(1517MiB/10067msec) 00:22:55.890 slat (usec): min=9, max=127110, avg=1158.90, stdev=4800.38 00:22:55.890 clat (usec): min=1011, max=265523, avg=104920.27, stdev=44022.36 00:22:55.890 lat (usec): min=1039, max=278101, avg=106079.17, stdev=44645.13 00:22:55.890 clat percentiles (msec): 00:22:55.890 | 1.00th=[ 8], 5.00th=[ 27], 10.00th=[ 43], 20.00th=[ 72], 00:22:55.890 | 30.00th=[ 84], 40.00th=[ 96], 50.00th=[ 106], 60.00th=[ 116], 00:22:55.890 | 70.00th=[ 126], 80.00th=[ 138], 90.00th=[ 163], 95.00th=[ 182], 00:22:55.890 | 99.00th=[ 218], 99.50th=[ 239], 99.90th=[ 262], 99.95th=[ 264], 00:22:55.890 | 99.99th=[ 266] 00:22:55.890 bw ( KiB/s): min=78336, max=286208, per=8.29%, avg=153735.75, stdev=49502.85, samples=20 00:22:55.890 iops : min= 306, max= 1118, avg=600.50, stdev=193.35, samples=20 00:22:55.890 lat (msec) : 2=0.05%, 4=0.02%, 10=1.68%, 20=1.83%, 50=7.60% 
00:22:55.890 lat (msec) : 100=33.17%, 250=55.46%, 500=0.20% 00:22:55.890 cpu : usr=0.36%, sys=2.01%, ctx=1590, majf=0, minf=4097 00:22:55.890 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:22:55.890 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:55.890 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:55.890 issued rwts: total=6069,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:55.890 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:55.890 job9: (groupid=0, jobs=1): err= 0: pid=1931421: Wed Apr 24 05:18:31 2024 00:22:55.890 read: IOPS=810, BW=203MiB/s (213MB/s)(2044MiB/10082msec) 00:22:55.890 slat (usec): min=13, max=45380, avg=1147.91, stdev=3293.29 00:22:55.890 clat (msec): min=12, max=201, avg=77.72, stdev=36.73 00:22:55.890 lat (msec): min=14, max=206, avg=78.87, stdev=37.21 00:22:55.890 clat percentiles (msec): 00:22:55.890 | 1.00th=[ 27], 5.00th=[ 31], 10.00th=[ 33], 20.00th=[ 43], 00:22:55.890 | 30.00th=[ 53], 40.00th=[ 64], 50.00th=[ 75], 60.00th=[ 85], 00:22:55.890 | 70.00th=[ 95], 80.00th=[ 107], 90.00th=[ 128], 95.00th=[ 146], 00:22:55.890 | 99.00th=[ 182], 99.50th=[ 186], 99.90th=[ 194], 99.95th=[ 199], 00:22:55.890 | 99.99th=[ 203] 00:22:55.890 bw ( KiB/s): min=97280, max=411648, per=11.20%, avg=207628.85, stdev=85329.24, samples=20 00:22:55.890 iops : min= 380, max= 1608, avg=811.05, stdev=333.32, samples=20 00:22:55.890 lat (msec) : 20=0.28%, 50=26.80%, 100=47.74%, 250=25.18% 00:22:55.890 cpu : usr=0.55%, sys=2.87%, ctx=1678, majf=0, minf=4097 00:22:55.890 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:22:55.890 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:55.890 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:55.890 issued rwts: total=8174,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:55.890 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:55.890 
job10: (groupid=0, jobs=1): err= 0: pid=1931422: Wed Apr 24 05:18:31 2024 00:22:55.890 read: IOPS=581, BW=145MiB/s (152MB/s)(1460MiB/10042msec) 00:22:55.890 slat (usec): min=12, max=64890, avg=1581.67, stdev=4478.05 00:22:55.890 clat (msec): min=18, max=245, avg=108.42, stdev=37.43 00:22:55.890 lat (msec): min=18, max=255, avg=110.00, stdev=38.12 00:22:55.890 clat percentiles (msec): 00:22:55.890 | 1.00th=[ 35], 5.00th=[ 49], 10.00th=[ 66], 20.00th=[ 81], 00:22:55.890 | 30.00th=[ 88], 40.00th=[ 95], 50.00th=[ 104], 60.00th=[ 112], 00:22:55.890 | 70.00th=[ 124], 80.00th=[ 138], 90.00th=[ 161], 95.00th=[ 180], 00:22:55.890 | 99.00th=[ 209], 99.50th=[ 218], 99.90th=[ 222], 99.95th=[ 245], 00:22:55.890 | 99.99th=[ 245] 00:22:55.890 bw ( KiB/s): min=82944, max=232448, per=7.97%, avg=147837.40, stdev=41293.99, samples=20 00:22:55.890 iops : min= 324, max= 908, avg=577.45, stdev=161.29, samples=20 00:22:55.890 lat (msec) : 20=0.02%, 50=5.57%, 100=39.71%, 250=54.71% 00:22:55.890 cpu : usr=0.37%, sys=2.18%, ctx=1272, majf=0, minf=4097 00:22:55.890 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:22:55.890 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:55.890 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:55.890 issued rwts: total=5838,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:55.890 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:55.890 00:22:55.890 Run status group 0 (all jobs): 00:22:55.890 READ: bw=1811MiB/s (1898MB/s), 145MiB/s-203MiB/s (152MB/s-213MB/s), io=17.8GiB (19.1GB), run=10042-10082msec 00:22:55.890 00:22:55.890 Disk stats (read/write): 00:22:55.890 nvme0n1: ios=14167/0, merge=0/0, ticks=1237861/0, in_queue=1237861, util=96.91% 00:22:55.890 nvme10n1: ios=13752/0, merge=0/0, ticks=1241245/0, in_queue=1241245, util=97.14% 00:22:55.890 nvme1n1: ios=13452/0, merge=0/0, ticks=1239397/0, in_queue=1239397, util=97.42% 00:22:55.890 nvme2n1: ios=12785/0, 
merge=0/0, ticks=1227174/0, in_queue=1227174, util=97.60% 00:22:55.890 nvme3n1: ios=13088/0, merge=0/0, ticks=1232388/0, in_queue=1232388, util=97.67% 00:22:55.890 nvme4n1: ios=11464/0, merge=0/0, ticks=1232472/0, in_queue=1232472, util=98.05% 00:22:55.890 nvme5n1: ios=13129/0, merge=0/0, ticks=1240118/0, in_queue=1240118, util=98.24% 00:22:55.890 nvme6n1: ios=11813/0, merge=0/0, ticks=1230215/0, in_queue=1230215, util=98.36% 00:22:55.890 nvme7n1: ios=11877/0, merge=0/0, ticks=1230580/0, in_queue=1230580, util=98.83% 00:22:55.890 nvme8n1: ios=16083/0, merge=0/0, ticks=1230154/0, in_queue=1230154, util=99.08% 00:22:55.890 nvme9n1: ios=11445/0, merge=0/0, ticks=1230403/0, in_queue=1230403, util=99.23% 00:22:55.890 05:18:31 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:22:55.890 [global] 00:22:55.890 thread=1 00:22:55.890 invalidate=1 00:22:55.890 rw=randwrite 00:22:55.890 time_based=1 00:22:55.890 runtime=10 00:22:55.890 ioengine=libaio 00:22:55.890 direct=1 00:22:55.890 bs=262144 00:22:55.890 iodepth=64 00:22:55.890 norandommap=1 00:22:55.890 numjobs=1 00:22:55.890 00:22:55.890 [job0] 00:22:55.890 filename=/dev/nvme0n1 00:22:55.890 [job1] 00:22:55.890 filename=/dev/nvme10n1 00:22:55.890 [job2] 00:22:55.890 filename=/dev/nvme1n1 00:22:55.890 [job3] 00:22:55.890 filename=/dev/nvme2n1 00:22:55.890 [job4] 00:22:55.890 filename=/dev/nvme3n1 00:22:55.890 [job5] 00:22:55.890 filename=/dev/nvme4n1 00:22:55.890 [job6] 00:22:55.890 filename=/dev/nvme5n1 00:22:55.890 [job7] 00:22:55.890 filename=/dev/nvme6n1 00:22:55.890 [job8] 00:22:55.890 filename=/dev/nvme7n1 00:22:55.890 [job9] 00:22:55.890 filename=/dev/nvme8n1 00:22:55.890 [job10] 00:22:55.890 filename=/dev/nvme9n1 00:22:55.890 Could not set queue depth (nvme0n1) 00:22:55.890 Could not set queue depth (nvme10n1) 00:22:55.890 Could not set queue depth (nvme1n1) 00:22:55.890 Could not set queue depth (nvme2n1) 
00:22:55.890 Could not set queue depth (nvme3n1) 00:22:55.890 Could not set queue depth (nvme4n1) 00:22:55.890 Could not set queue depth (nvme5n1) 00:22:55.890 Could not set queue depth (nvme6n1) 00:22:55.890 Could not set queue depth (nvme7n1) 00:22:55.890 Could not set queue depth (nvme8n1) 00:22:55.890 Could not set queue depth (nvme9n1) 00:22:55.890 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:55.890 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:55.890 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:55.890 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:55.890 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:55.890 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:55.890 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:55.890 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:55.890 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:55.890 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:55.890 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:55.890 fio-3.35 00:22:55.890 Starting 11 threads 00:23:05.885 00:23:05.885 job0: (groupid=0, jobs=1): err= 0: pid=1932587: Wed Apr 24 05:18:42 2024 00:23:05.885 write: IOPS=836, BW=209MiB/s (219MB/s)(2111MiB/10101msec); 0 zone resets 
00:23:05.885 slat (usec): min=15, max=65759, avg=897.30, stdev=2431.75 00:23:05.885 clat (usec): min=1149, max=383666, avg=75584.43, stdev=53641.73 00:23:05.885 lat (usec): min=1254, max=383699, avg=76481.73, stdev=54124.51 00:23:05.885 clat percentiles (msec): 00:23:05.885 | 1.00th=[ 5], 5.00th=[ 9], 10.00th=[ 21], 20.00th=[ 43], 00:23:05.885 | 30.00th=[ 45], 40.00th=[ 47], 50.00th=[ 51], 60.00th=[ 64], 00:23:05.885 | 70.00th=[ 105], 80.00th=[ 124], 90.00th=[ 150], 95.00th=[ 161], 00:23:05.885 | 99.00th=[ 255], 99.50th=[ 309], 99.90th=[ 359], 99.95th=[ 372], 00:23:05.885 | 99.99th=[ 384] 00:23:05.885 bw ( KiB/s): min=100352, max=396800, per=15.64%, avg=214551.55, stdev=96177.19, samples=20 00:23:05.885 iops : min= 392, max= 1550, avg=838.00, stdev=375.77, samples=20 00:23:05.885 lat (msec) : 2=0.08%, 4=0.65%, 10=4.94%, 20=4.10%, 50=40.54% 00:23:05.885 lat (msec) : 100=18.07%, 250=30.57%, 500=1.04% 00:23:05.885 cpu : usr=2.10%, sys=2.49%, ctx=4082, majf=0, minf=1 00:23:05.885 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:23:05.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:05.885 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:05.885 issued rwts: total=0,8445,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:05.885 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:05.885 job1: (groupid=0, jobs=1): err= 0: pid=1932588: Wed Apr 24 05:18:42 2024 00:23:05.885 write: IOPS=326, BW=81.7MiB/s (85.7MB/s)(835MiB/10217msec); 0 zone resets 00:23:05.885 slat (usec): min=26, max=114219, avg=2882.59, stdev=6198.25 00:23:05.885 clat (msec): min=4, max=520, avg=192.70, stdev=75.69 00:23:05.885 lat (msec): min=4, max=520, avg=195.59, stdev=76.63 00:23:05.885 clat percentiles (msec): 00:23:05.885 | 1.00th=[ 37], 5.00th=[ 80], 10.00th=[ 93], 20.00th=[ 142], 00:23:05.885 | 30.00th=[ 161], 40.00th=[ 171], 50.00th=[ 180], 60.00th=[ 192], 00:23:05.885 | 70.00th=[ 213], 80.00th=[ 241], 
90.00th=[ 300], 95.00th=[ 347], 00:23:05.885 | 99.00th=[ 384], 99.50th=[ 447], 99.90th=[ 506], 99.95th=[ 523], 00:23:05.885 | 99.99th=[ 523] 00:23:05.885 bw ( KiB/s): min=49152, max=144896, per=6.11%, avg=83872.55, stdev=27484.85, samples=20 00:23:05.885 iops : min= 192, max= 566, avg=327.60, stdev=107.35, samples=20 00:23:05.885 lat (msec) : 10=0.33%, 20=0.24%, 50=1.35%, 100=8.77%, 250=70.39% 00:23:05.885 lat (msec) : 500=18.74%, 750=0.18% 00:23:05.885 cpu : usr=1.01%, sys=1.08%, ctx=1029, majf=0, minf=1 00:23:05.885 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:23:05.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:05.885 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:05.885 issued rwts: total=0,3340,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:05.885 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:05.885 job2: (groupid=0, jobs=1): err= 0: pid=1932589: Wed Apr 24 05:18:42 2024 00:23:05.885 write: IOPS=327, BW=81.8MiB/s (85.8MB/s)(827MiB/10102msec); 0 zone resets 00:23:05.885 slat (usec): min=15, max=47515, avg=2892.52, stdev=5760.85 00:23:05.885 clat (msec): min=2, max=370, avg=192.60, stdev=66.43 00:23:05.885 lat (msec): min=2, max=370, avg=195.49, stdev=67.37 00:23:05.885 clat percentiles (msec): 00:23:05.885 | 1.00th=[ 15], 5.00th=[ 82], 10.00th=[ 128], 20.00th=[ 148], 00:23:05.885 | 30.00th=[ 167], 40.00th=[ 178], 50.00th=[ 184], 60.00th=[ 197], 00:23:05.886 | 70.00th=[ 215], 80.00th=[ 241], 90.00th=[ 279], 95.00th=[ 326], 00:23:05.886 | 99.00th=[ 355], 99.50th=[ 355], 99.90th=[ 368], 99.95th=[ 372], 00:23:05.886 | 99.99th=[ 372] 00:23:05.886 bw ( KiB/s): min=47104, max=125952, per=6.05%, avg=83001.05, stdev=22540.76, samples=20 00:23:05.886 iops : min= 184, max= 492, avg=324.20, stdev=88.04, samples=20 00:23:05.886 lat (msec) : 4=0.15%, 10=0.54%, 20=0.85%, 50=1.36%, 100=4.60% 00:23:05.886 lat (msec) : 250=74.80%, 500=17.70% 00:23:05.886 cpu : 
usr=0.93%, sys=1.08%, ctx=1089, majf=0, minf=1 00:23:05.886 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:23:05.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:05.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:05.886 issued rwts: total=0,3306,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:05.886 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:05.886 job3: (groupid=0, jobs=1): err= 0: pid=1932590: Wed Apr 24 05:18:42 2024 00:23:05.886 write: IOPS=692, BW=173MiB/s (182MB/s)(1741MiB/10051msec); 0 zone resets 00:23:05.886 slat (usec): min=20, max=36975, avg=1178.75, stdev=2825.14 00:23:05.886 clat (msec): min=3, max=244, avg=91.18, stdev=53.16 00:23:05.886 lat (msec): min=3, max=244, avg=92.35, stdev=53.85 00:23:05.886 clat percentiles (msec): 00:23:05.886 | 1.00th=[ 9], 5.00th=[ 17], 10.00th=[ 34], 20.00th=[ 45], 00:23:05.886 | 30.00th=[ 47], 40.00th=[ 53], 50.00th=[ 82], 60.00th=[ 113], 00:23:05.886 | 70.00th=[ 128], 80.00th=[ 150], 90.00th=[ 165], 95.00th=[ 176], 00:23:05.886 | 99.00th=[ 197], 99.50th=[ 209], 99.90th=[ 230], 99.95th=[ 241], 00:23:05.886 | 99.99th=[ 245] 00:23:05.886 bw ( KiB/s): min=90112, max=366592, per=12.87%, avg=176573.90, stdev=94405.02, samples=20 00:23:05.886 iops : min= 352, max= 1432, avg=689.70, stdev=368.73, samples=20 00:23:05.886 lat (msec) : 4=0.07%, 10=1.81%, 20=3.76%, 50=33.09%, 100=15.27% 00:23:05.886 lat (msec) : 250=45.99% 00:23:05.886 cpu : usr=1.92%, sys=2.27%, ctx=3100, majf=0, minf=1 00:23:05.886 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:23:05.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:05.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:05.886 issued rwts: total=0,6962,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:05.886 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:05.886 job4: (groupid=0, 
jobs=1): err= 0: pid=1932591: Wed Apr 24 05:18:42 2024 00:23:05.886 write: IOPS=512, BW=128MiB/s (134MB/s)(1294MiB/10100msec); 0 zone resets 00:23:05.886 slat (usec): min=17, max=48242, avg=1289.17, stdev=3294.79 00:23:05.886 clat (usec): min=1343, max=420695, avg=123518.07, stdev=59530.46 00:23:05.886 lat (usec): min=1389, max=425025, avg=124807.24, stdev=60142.37 00:23:05.886 clat percentiles (msec): 00:23:05.886 | 1.00th=[ 9], 5.00th=[ 26], 10.00th=[ 44], 20.00th=[ 72], 00:23:05.886 | 30.00th=[ 101], 40.00th=[ 116], 50.00th=[ 130], 60.00th=[ 138], 00:23:05.886 | 70.00th=[ 150], 80.00th=[ 163], 90.00th=[ 178], 95.00th=[ 207], 00:23:05.886 | 99.00th=[ 321], 99.50th=[ 397], 99.90th=[ 414], 99.95th=[ 418], 00:23:05.886 | 99.99th=[ 422] 00:23:05.886 bw ( KiB/s): min=90805, max=204800, per=9.54%, avg=130903.20, stdev=26942.14, samples=20 00:23:05.886 iops : min= 354, max= 800, avg=511.30, stdev=105.30, samples=20 00:23:05.886 lat (msec) : 2=0.06%, 4=0.37%, 10=1.02%, 20=2.51%, 50=8.02% 00:23:05.886 lat (msec) : 100=18.14%, 250=67.12%, 500=2.76% 00:23:05.886 cpu : usr=1.52%, sys=1.77%, ctx=2761, majf=0, minf=1 00:23:05.886 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:23:05.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:05.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:05.886 issued rwts: total=0,5177,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:05.886 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:05.886 job5: (groupid=0, jobs=1): err= 0: pid=1932592: Wed Apr 24 05:18:42 2024 00:23:05.886 write: IOPS=425, BW=106MiB/s (111MB/s)(1087MiB/10219msec); 0 zone resets 00:23:05.886 slat (usec): min=16, max=68813, avg=1778.37, stdev=4813.12 00:23:05.886 clat (usec): min=1111, max=408287, avg=148610.00, stdev=96093.74 00:23:05.886 lat (usec): min=1140, max=408373, avg=150388.37, stdev=97191.21 00:23:05.886 clat percentiles (msec): 00:23:05.886 | 1.00th=[ 3], 5.00th=[ 
6], 10.00th=[ 12], 20.00th=[ 33], 00:23:05.886 | 30.00th=[ 94], 40.00th=[ 124], 50.00th=[ 157], 60.00th=[ 180], 00:23:05.886 | 70.00th=[ 201], 80.00th=[ 228], 90.00th=[ 275], 95.00th=[ 309], 00:23:05.886 | 99.00th=[ 384], 99.50th=[ 397], 99.90th=[ 405], 99.95th=[ 409], 00:23:05.886 | 99.99th=[ 409] 00:23:05.886 bw ( KiB/s): min=47104, max=205312, per=7.99%, avg=109635.40, stdev=48306.30, samples=20 00:23:05.886 iops : min= 184, max= 802, avg=428.20, stdev=188.68, samples=20 00:23:05.886 lat (msec) : 2=0.41%, 4=2.65%, 10=5.48%, 20=7.82%, 50=5.29% 00:23:05.886 lat (msec) : 100=11.00%, 250=52.76%, 500=14.59% 00:23:05.886 cpu : usr=1.25%, sys=1.32%, ctx=2381, majf=0, minf=1 00:23:05.886 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:23:05.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:05.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:05.886 issued rwts: total=0,4346,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:05.886 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:05.886 job6: (groupid=0, jobs=1): err= 0: pid=1932593: Wed Apr 24 05:18:42 2024 00:23:05.886 write: IOPS=404, BW=101MiB/s (106MB/s)(1033MiB/10229msec); 0 zone resets 00:23:05.886 slat (usec): min=15, max=71788, avg=2216.35, stdev=5179.22 00:23:05.886 clat (msec): min=7, max=545, avg=156.09, stdev=88.36 00:23:05.886 lat (msec): min=7, max=545, avg=158.31, stdev=89.56 00:23:05.886 clat percentiles (msec): 00:23:05.886 | 1.00th=[ 27], 5.00th=[ 42], 10.00th=[ 44], 20.00th=[ 75], 00:23:05.886 | 30.00th=[ 115], 40.00th=[ 123], 50.00th=[ 142], 60.00th=[ 155], 00:23:05.886 | 70.00th=[ 182], 80.00th=[ 247], 90.00th=[ 292], 95.00th=[ 317], 00:23:05.886 | 99.00th=[ 359], 99.50th=[ 456], 99.90th=[ 531], 99.95th=[ 531], 00:23:05.886 | 99.99th=[ 550] 00:23:05.886 bw ( KiB/s): min=47104, max=265197, per=7.59%, avg=104137.15, stdev=56460.58, samples=20 00:23:05.886 iops : min= 184, max= 1035, avg=406.70, 
stdev=220.41, samples=20 00:23:05.886 lat (msec) : 10=0.02%, 20=0.63%, 50=14.71%, 100=8.52%, 250=56.64% 00:23:05.886 lat (msec) : 500=19.24%, 750=0.24% 00:23:05.886 cpu : usr=1.27%, sys=1.33%, ctx=1480, majf=0, minf=1 00:23:05.886 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:23:05.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:05.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:05.886 issued rwts: total=0,4133,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:05.886 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:05.886 job7: (groupid=0, jobs=1): err= 0: pid=1932594: Wed Apr 24 05:18:42 2024 00:23:05.886 write: IOPS=475, BW=119MiB/s (125MB/s)(1216MiB/10220msec); 0 zone resets 00:23:05.886 slat (usec): min=19, max=84891, avg=1267.50, stdev=4281.76 00:23:05.886 clat (usec): min=1492, max=451563, avg=133172.95, stdev=92741.89 00:23:05.886 lat (usec): min=1546, max=451624, avg=134440.45, stdev=93875.97 00:23:05.886 clat percentiles (msec): 00:23:05.886 | 1.00th=[ 5], 5.00th=[ 11], 10.00th=[ 18], 20.00th=[ 44], 00:23:05.886 | 30.00th=[ 82], 40.00th=[ 111], 50.00th=[ 129], 60.00th=[ 142], 00:23:05.886 | 70.00th=[ 165], 80.00th=[ 188], 90.00th=[ 271], 95.00th=[ 330], 00:23:05.886 | 99.00th=[ 409], 99.50th=[ 422], 99.90th=[ 443], 99.95th=[ 447], 00:23:05.886 | 99.99th=[ 451] 00:23:05.886 bw ( KiB/s): min=45056, max=215040, per=8.95%, avg=122828.80, stdev=48788.52, samples=20 00:23:05.886 iops : min= 176, max= 840, avg=479.80, stdev=190.58, samples=20 00:23:05.886 lat (msec) : 2=0.04%, 4=0.62%, 10=4.11%, 20=6.77%, 50=10.16% 00:23:05.886 lat (msec) : 100=14.38%, 250=52.39%, 500=11.54% 00:23:05.886 cpu : usr=1.28%, sys=1.79%, ctx=3386, majf=0, minf=1 00:23:05.886 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:23:05.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:05.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:05.886 issued rwts: total=0,4862,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:05.886 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:05.886 job8: (groupid=0, jobs=1): err= 0: pid=1932607: Wed Apr 24 05:18:42 2024 00:23:05.886 write: IOPS=324, BW=81.0MiB/s (85.0MB/s)(829MiB/10223msec); 0 zone resets 00:23:05.886 slat (usec): min=24, max=33693, avg=3004.31, stdev=5646.18 00:23:05.886 clat (msec): min=10, max=490, avg=194.31, stdev=64.72 00:23:05.886 lat (msec): min=10, max=490, avg=197.31, stdev=65.45 00:23:05.886 clat percentiles (msec): 00:23:05.886 | 1.00th=[ 39], 5.00th=[ 90], 10.00th=[ 128], 20.00th=[ 144], 00:23:05.886 | 30.00th=[ 163], 40.00th=[ 176], 50.00th=[ 184], 60.00th=[ 197], 00:23:05.886 | 70.00th=[ 222], 80.00th=[ 247], 90.00th=[ 279], 95.00th=[ 317], 00:23:05.886 | 99.00th=[ 338], 99.50th=[ 414], 99.90th=[ 472], 99.95th=[ 489], 00:23:05.886 | 99.99th=[ 489] 00:23:05.886 bw ( KiB/s): min=51200, max=154624, per=6.07%, avg=83225.60, stdev=25019.70, samples=20 00:23:05.886 iops : min= 200, max= 604, avg=325.10, stdev=97.73, samples=20 00:23:05.886 lat (msec) : 20=0.48%, 50=0.75%, 100=5.28%, 250=74.23%, 500=19.25% 00:23:05.886 cpu : usr=0.92%, sys=1.09%, ctx=892, majf=0, minf=1 00:23:05.886 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:23:05.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:05.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:05.886 issued rwts: total=0,3314,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:05.886 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:05.886 job9: (groupid=0, jobs=1): err= 0: pid=1932613: Wed Apr 24 05:18:42 2024 00:23:05.886 write: IOPS=494, BW=124MiB/s (130MB/s)(1264MiB/10226msec); 0 zone resets 00:23:05.886 slat (usec): min=15, max=132859, avg=1357.58, stdev=4808.04 00:23:05.886 clat (usec): min=1029, max=544703, avg=128051.65, stdev=90995.22 
00:23:05.886 lat (usec): min=1100, max=569332, avg=129409.23, stdev=92046.28 00:23:05.886 clat percentiles (msec): 00:23:05.886 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 12], 20.00th=[ 36], 00:23:05.886 | 30.00th=[ 75], 40.00th=[ 113], 50.00th=[ 136], 60.00th=[ 155], 00:23:05.886 | 70.00th=[ 165], 80.00th=[ 178], 90.00th=[ 213], 95.00th=[ 330], 00:23:05.886 | 99.00th=[ 414], 99.50th=[ 430], 99.90th=[ 531], 99.95th=[ 542], 00:23:05.886 | 99.99th=[ 542] 00:23:05.886 bw ( KiB/s): min=37376, max=356864, per=9.31%, avg=127755.05, stdev=66616.96, samples=20 00:23:05.886 iops : min= 146, max= 1394, avg=499.00, stdev=260.20, samples=20 00:23:05.886 lat (msec) : 2=0.61%, 4=1.80%, 10=6.05%, 20=5.76%, 50=12.17% 00:23:05.886 lat (msec) : 100=10.39%, 250=55.84%, 500=7.22%, 750=0.16% 00:23:05.887 cpu : usr=1.31%, sys=1.69%, ctx=3191, majf=0, minf=1 00:23:05.887 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:23:05.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:05.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:05.887 issued rwts: total=0,5054,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:05.887 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:05.887 job10: (groupid=0, jobs=1): err= 0: pid=1932614: Wed Apr 24 05:18:42 2024 00:23:05.887 write: IOPS=576, BW=144MiB/s (151MB/s)(1468MiB/10185msec); 0 zone resets 00:23:05.887 slat (usec): min=14, max=142800, avg=1304.81, stdev=3850.67 00:23:05.887 clat (usec): min=1152, max=450085, avg=109649.59, stdev=73734.07 00:23:05.887 lat (usec): min=1170, max=452726, avg=110954.39, stdev=74476.16 00:23:05.887 clat percentiles (msec): 00:23:05.887 | 1.00th=[ 4], 5.00th=[ 10], 10.00th=[ 16], 20.00th=[ 36], 00:23:05.887 | 30.00th=[ 60], 40.00th=[ 97], 50.00th=[ 113], 60.00th=[ 129], 00:23:05.887 | 70.00th=[ 140], 80.00th=[ 159], 90.00th=[ 190], 95.00th=[ 255], 00:23:05.887 | 99.00th=[ 338], 99.50th=[ 401], 99.90th=[ 443], 99.95th=[ 447], 
00:23:05.887 | 99.99th=[ 451] 00:23:05.887 bw ( KiB/s): min=85504, max=375808, per=10.84%, avg=148676.25, stdev=68923.78, samples=20 00:23:05.887 iops : min= 334, max= 1468, avg=580.75, stdev=269.24, samples=20 00:23:05.887 lat (msec) : 2=0.15%, 4=0.90%, 10=4.50%, 20=6.76%, 50=12.59% 00:23:05.887 lat (msec) : 100=16.45%, 250=53.36%, 500=5.28% 00:23:05.887 cpu : usr=1.81%, sys=1.94%, ctx=3198, majf=0, minf=1 00:23:05.887 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:23:05.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:05.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:05.887 issued rwts: total=0,5871,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:05.887 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:05.887 00:23:05.887 Run status group 0 (all jobs): 00:23:05.887 WRITE: bw=1340MiB/s (1405MB/s), 81.0MiB/s-209MiB/s (85.0MB/s-219MB/s), io=13.4GiB (14.4GB), run=10051-10229msec 00:23:05.887 00:23:05.887 Disk stats (read/write): 00:23:05.887 nvme0n1: ios=47/16695, merge=0/0, ticks=1087/1213003, in_queue=1214090, util=98.64% 00:23:05.887 nvme10n1: ios=51/6649, merge=0/0, ticks=2067/1224082, in_queue=1226149, util=99.11% 00:23:05.887 nvme1n1: ios=49/6395, merge=0/0, ticks=46/1205146, in_queue=1205192, util=97.75% 00:23:05.887 nvme2n1: ios=50/13689, merge=0/0, ticks=84/1216868, in_queue=1216952, util=98.11% 00:23:05.887 nvme3n1: ios=50/10159, merge=0/0, ticks=40/1221474, in_queue=1221514, util=97.95% 00:23:05.887 nvme4n1: ios=49/8659, merge=0/0, ticks=77/1238818, in_queue=1238895, util=98.73% 00:23:05.887 nvme5n1: ios=50/8221, merge=0/0, ticks=295/1231203, in_queue=1231498, util=98.97% 00:23:05.887 nvme6n1: ios=55/9684, merge=0/0, ticks=592/1245103, in_queue=1245695, util=100.00% 00:23:05.887 nvme7n1: ios=46/6585, merge=0/0, ticks=681/1228808, in_queue=1229489, util=100.00% 00:23:05.887 nvme8n1: ios=0/10068, merge=0/0, ticks=0/1244397, in_queue=1244397, util=99.02% 
00:23:05.887 nvme9n1: ios=0/11733, merge=0/0, ticks=0/1241951, in_queue=1241951, util=99.14% 00:23:05.887 05:18:42 -- target/multiconnection.sh@36 -- # sync 00:23:05.887 05:18:42 -- target/multiconnection.sh@37 -- # seq 1 11 00:23:05.887 05:18:42 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:05.887 05:18:42 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:05.887 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:05.887 05:18:42 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:23:05.887 05:18:42 -- common/autotest_common.sh@1205 -- # local i=0 00:23:05.887 05:18:42 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:23:05.887 05:18:42 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:23:05.887 05:18:42 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:23:05.887 05:18:42 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK1 00:23:05.887 05:18:42 -- common/autotest_common.sh@1217 -- # return 0 00:23:05.887 05:18:42 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:05.887 05:18:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.887 05:18:42 -- common/autotest_common.sh@10 -- # set +x 00:23:05.887 05:18:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.887 05:18:42 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:05.887 05:18:42 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:23:05.887 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:23:05.887 05:18:42 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:23:05.887 05:18:42 -- common/autotest_common.sh@1205 -- # local i=0 00:23:05.887 05:18:42 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:23:05.887 05:18:42 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:23:05.887 05:18:42 -- 
common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:23:05.887 05:18:42 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK2 00:23:05.887 05:18:42 -- common/autotest_common.sh@1217 -- # return 0 00:23:05.887 05:18:42 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:05.887 05:18:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.887 05:18:42 -- common/autotest_common.sh@10 -- # set +x 00:23:05.887 05:18:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.887 05:18:42 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:05.887 05:18:42 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:23:05.887 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:23:05.887 05:18:43 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:23:05.887 05:18:43 -- common/autotest_common.sh@1205 -- # local i=0 00:23:05.887 05:18:43 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:23:05.887 05:18:43 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:23:05.887 05:18:43 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:23:05.887 05:18:43 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK3 00:23:05.887 05:18:43 -- common/autotest_common.sh@1217 -- # return 0 00:23:05.887 05:18:43 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:23:05.887 05:18:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.887 05:18:43 -- common/autotest_common.sh@10 -- # set +x 00:23:05.887 05:18:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.887 05:18:43 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:05.887 05:18:43 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:23:06.146 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:23:06.146 05:18:43 -- 
target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:23:06.146 05:18:43 -- common/autotest_common.sh@1205 -- # local i=0 00:23:06.146 05:18:43 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:23:06.146 05:18:43 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:23:06.146 05:18:43 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:23:06.146 05:18:43 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK4 00:23:06.146 05:18:43 -- common/autotest_common.sh@1217 -- # return 0 00:23:06.146 05:18:43 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:23:06.146 05:18:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.146 05:18:43 -- common/autotest_common.sh@10 -- # set +x 00:23:06.146 05:18:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:06.146 05:18:43 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:06.146 05:18:43 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:23:06.405 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:23:06.405 05:18:43 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:23:06.405 05:18:43 -- common/autotest_common.sh@1205 -- # local i=0 00:23:06.405 05:18:43 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:23:06.405 05:18:43 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:23:06.405 05:18:43 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:23:06.405 05:18:43 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK5 00:23:06.405 05:18:43 -- common/autotest_common.sh@1217 -- # return 0 00:23:06.405 05:18:43 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:23:06.405 05:18:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.405 05:18:43 -- common/autotest_common.sh@10 -- # set +x 00:23:06.405 05:18:43 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:23:06.405 05:18:43 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:06.405 05:18:43 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:23:06.664 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:23:06.664 05:18:43 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:23:06.664 05:18:43 -- common/autotest_common.sh@1205 -- # local i=0 00:23:06.664 05:18:43 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:23:06.664 05:18:43 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:23:06.664 05:18:43 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:23:06.664 05:18:43 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK6 00:23:06.664 05:18:43 -- common/autotest_common.sh@1217 -- # return 0 00:23:06.664 05:18:43 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:23:06.664 05:18:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.664 05:18:43 -- common/autotest_common.sh@10 -- # set +x 00:23:06.664 05:18:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:06.664 05:18:43 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:06.664 05:18:43 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:23:06.664 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:23:06.664 05:18:43 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:23:06.664 05:18:43 -- common/autotest_common.sh@1205 -- # local i=0 00:23:06.664 05:18:43 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:23:06.664 05:18:43 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:23:06.664 05:18:43 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:23:06.664 05:18:43 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK7 00:23:06.664 05:18:43 -- common/autotest_common.sh@1217 -- # return 0 00:23:06.664 
05:18:43 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:23:06.664 05:18:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.664 05:18:43 -- common/autotest_common.sh@10 -- # set +x 00:23:06.664 05:18:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:06.664 05:18:43 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:06.664 05:18:43 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:23:06.923 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:23:06.923 05:18:44 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:23:06.923 05:18:44 -- common/autotest_common.sh@1205 -- # local i=0 00:23:06.923 05:18:44 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:23:06.923 05:18:44 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:23:06.923 05:18:44 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:23:06.923 05:18:44 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK8 00:23:06.923 05:18:44 -- common/autotest_common.sh@1217 -- # return 0 00:23:06.923 05:18:44 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:23:06.923 05:18:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.923 05:18:44 -- common/autotest_common.sh@10 -- # set +x 00:23:06.923 05:18:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:06.923 05:18:44 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:06.923 05:18:44 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:23:07.184 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:23:07.184 05:18:44 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:23:07.184 05:18:44 -- common/autotest_common.sh@1205 -- # local i=0 00:23:07.184 05:18:44 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:23:07.184 
05:18:44 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:23:07.184 05:18:44 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:23:07.184 05:18:44 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK9 00:23:07.184 05:18:44 -- common/autotest_common.sh@1217 -- # return 0 00:23:07.184 05:18:44 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:23:07.184 05:18:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:07.184 05:18:44 -- common/autotest_common.sh@10 -- # set +x 00:23:07.184 05:18:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:07.184 05:18:44 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:07.184 05:18:44 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:23:07.184 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:23:07.184 05:18:44 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:23:07.184 05:18:44 -- common/autotest_common.sh@1205 -- # local i=0 00:23:07.184 05:18:44 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:23:07.184 05:18:44 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:23:07.184 05:18:44 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:23:07.184 05:18:44 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK10 00:23:07.184 05:18:44 -- common/autotest_common.sh@1217 -- # return 0 00:23:07.184 05:18:44 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:23:07.184 05:18:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:07.184 05:18:44 -- common/autotest_common.sh@10 -- # set +x 00:23:07.184 05:18:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:07.184 05:18:44 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:07.184 05:18:44 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:23:07.184 
NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:23:07.184 05:18:44 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:23:07.184 05:18:44 -- common/autotest_common.sh@1205 -- # local i=0 00:23:07.184 05:18:44 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:23:07.184 05:18:44 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:23:07.184 05:18:44 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:23:07.184 05:18:44 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK11 00:23:07.184 05:18:44 -- common/autotest_common.sh@1217 -- # return 0 00:23:07.184 05:18:44 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:23:07.184 05:18:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:07.184 05:18:44 -- common/autotest_common.sh@10 -- # set +x 00:23:07.184 05:18:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:07.184 05:18:44 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:23:07.184 05:18:44 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:23:07.184 05:18:44 -- target/multiconnection.sh@47 -- # nvmftestfini 00:23:07.184 05:18:44 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:07.184 05:18:44 -- nvmf/common.sh@117 -- # sync 00:23:07.184 05:18:44 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:07.184 05:18:44 -- nvmf/common.sh@120 -- # set +e 00:23:07.184 05:18:44 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:07.184 05:18:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:07.184 rmmod nvme_tcp 00:23:07.444 rmmod nvme_fabrics 00:23:07.444 rmmod nvme_keyring 00:23:07.444 05:18:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:07.444 05:18:44 -- nvmf/common.sh@124 -- # set -e 00:23:07.444 05:18:44 -- nvmf/common.sh@125 -- # return 0 00:23:07.444 05:18:44 -- nvmf/common.sh@478 -- # '[' -n 1926528 ']' 00:23:07.444 05:18:44 -- nvmf/common.sh@479 -- # killprocess 1926528 
00:23:07.444 05:18:44 -- common/autotest_common.sh@936 -- # '[' -z 1926528 ']' 00:23:07.444 05:18:44 -- common/autotest_common.sh@940 -- # kill -0 1926528 00:23:07.444 05:18:44 -- common/autotest_common.sh@941 -- # uname 00:23:07.444 05:18:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:07.444 05:18:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1926528 00:23:07.444 05:18:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:07.444 05:18:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:07.444 05:18:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1926528' 00:23:07.444 killing process with pid 1926528 00:23:07.444 05:18:44 -- common/autotest_common.sh@955 -- # kill 1926528 00:23:07.444 05:18:44 -- common/autotest_common.sh@960 -- # wait 1926528 00:23:08.012 05:18:45 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:08.012 05:18:45 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:08.012 05:18:45 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:08.012 05:18:45 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:08.012 05:18:45 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:08.012 05:18:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:08.012 05:18:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:08.012 05:18:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.913 05:18:47 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:09.913 00:23:09.913 real 1m0.428s 00:23:09.913 user 3m23.072s 00:23:09.913 sys 0m23.711s 00:23:09.913 05:18:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:09.913 05:18:47 -- common/autotest_common.sh@10 -- # set +x 00:23:09.913 ************************************ 00:23:09.913 END TEST nvmf_multiconnection 00:23:09.913 ************************************ 00:23:09.913 05:18:47 -- nvmf/nvmf.sh@67 -- # run_test 
nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:23:09.913 05:18:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:09.913 05:18:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:09.913 05:18:47 -- common/autotest_common.sh@10 -- # set +x 00:23:10.172 ************************************ 00:23:10.172 START TEST nvmf_initiator_timeout 00:23:10.172 ************************************ 00:23:10.172 05:18:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:23:10.172 * Looking for test storage... 00:23:10.172 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:10.172 05:18:47 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:10.172 05:18:47 -- nvmf/common.sh@7 -- # uname -s 00:23:10.172 05:18:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:10.172 05:18:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:10.172 05:18:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:10.172 05:18:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:10.172 05:18:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:10.172 05:18:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:10.172 05:18:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:10.172 05:18:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:10.172 05:18:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:10.172 05:18:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:10.172 05:18:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:10.172 05:18:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:10.172 05:18:47 -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:10.172 05:18:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:10.172 05:18:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:10.172 05:18:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:10.172 05:18:47 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:10.172 05:18:47 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:10.172 05:18:47 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:10.172 05:18:47 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:10.172 05:18:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.172 05:18:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.172 05:18:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.172 05:18:47 -- paths/export.sh@5 -- # export PATH 00:23:10.172 05:18:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.172 05:18:47 -- nvmf/common.sh@47 -- # : 0 00:23:10.172 05:18:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:10.172 05:18:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:10.172 05:18:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:10.172 05:18:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:10.172 05:18:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:10.172 05:18:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:10.172 05:18:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:10.172 05:18:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:10.172 05:18:47 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:10.172 05:18:47 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:10.172 05:18:47 -- 
target/initiator_timeout.sh@14 -- # nvmftestinit 00:23:10.172 05:18:47 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:10.172 05:18:47 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:10.172 05:18:47 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:10.172 05:18:47 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:10.172 05:18:47 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:10.172 05:18:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:10.172 05:18:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:10.172 05:18:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:10.172 05:18:47 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:10.172 05:18:47 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:10.172 05:18:47 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:10.172 05:18:47 -- common/autotest_common.sh@10 -- # set +x 00:23:12.078 05:18:49 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:12.078 05:18:49 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:12.078 05:18:49 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:12.078 05:18:49 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:12.078 05:18:49 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:12.078 05:18:49 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:12.078 05:18:49 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:12.078 05:18:49 -- nvmf/common.sh@295 -- # net_devs=() 00:23:12.078 05:18:49 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:12.078 05:18:49 -- nvmf/common.sh@296 -- # e810=() 00:23:12.078 05:18:49 -- nvmf/common.sh@296 -- # local -ga e810 00:23:12.078 05:18:49 -- nvmf/common.sh@297 -- # x722=() 00:23:12.078 05:18:49 -- nvmf/common.sh@297 -- # local -ga x722 00:23:12.078 05:18:49 -- nvmf/common.sh@298 -- # mlx=() 00:23:12.078 05:18:49 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:12.078 05:18:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:23:12.078 05:18:49 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:12.078 05:18:49 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:12.078 05:18:49 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:12.078 05:18:49 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:12.078 05:18:49 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:12.078 05:18:49 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:12.078 05:18:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:12.078 05:18:49 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:12.078 05:18:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:12.078 05:18:49 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:12.078 05:18:49 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:12.078 05:18:49 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:12.078 05:18:49 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:12.078 05:18:49 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:12.078 05:18:49 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:12.078 05:18:49 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:12.078 05:18:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:12.078 05:18:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:12.078 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:12.078 05:18:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:12.078 05:18:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:12.078 05:18:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:12.078 05:18:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:12.078 05:18:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:12.078 05:18:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
00:23:12.078 05:18:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:12.078 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:12.078 05:18:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:12.078 05:18:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:12.078 05:18:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:12.078 05:18:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:12.078 05:18:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:12.078 05:18:49 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:12.078 05:18:49 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:12.078 05:18:49 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:12.078 05:18:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:12.078 05:18:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:12.078 05:18:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:12.078 05:18:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:12.078 05:18:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:12.078 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:12.078 05:18:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:12.078 05:18:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:12.078 05:18:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:12.078 05:18:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:12.078 05:18:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:12.078 05:18:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:12.078 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:12.078 05:18:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:12.078 05:18:49 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:12.078 05:18:49 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:12.078 05:18:49 -- 
nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:12.078 05:18:49 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:12.078 05:18:49 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:12.078 05:18:49 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:12.078 05:18:49 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:12.078 05:18:49 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:12.078 05:18:49 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:12.078 05:18:49 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:12.078 05:18:49 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:12.078 05:18:49 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:12.078 05:18:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:12.078 05:18:49 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:12.078 05:18:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:12.078 05:18:49 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:12.078 05:18:49 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:12.078 05:18:49 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:12.078 05:18:49 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:12.078 05:18:49 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:12.078 05:18:49 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:12.078 05:18:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:12.078 05:18:49 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:12.078 05:18:49 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:12.078 05:18:49 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:12.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:12.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:23:12.078 00:23:12.078 --- 10.0.0.2 ping statistics --- 00:23:12.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:12.078 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:23:12.078 05:18:49 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:12.336 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:12.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:23:12.336 00:23:12.336 --- 10.0.0.1 ping statistics --- 00:23:12.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:12.336 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:23:12.336 05:18:49 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:12.336 05:18:49 -- nvmf/common.sh@411 -- # return 0 00:23:12.336 05:18:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:12.336 05:18:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:12.336 05:18:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:12.336 05:18:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:12.336 05:18:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:12.336 05:18:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:12.336 05:18:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:12.336 05:18:49 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:23:12.336 05:18:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:12.336 05:18:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:12.336 05:18:49 -- common/autotest_common.sh@10 -- # set +x 00:23:12.336 05:18:49 -- nvmf/common.sh@470 -- # nvmfpid=1935945 00:23:12.336 05:18:49 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:12.336 05:18:49 -- nvmf/common.sh@471 -- # waitforlisten 1935945 00:23:12.336 05:18:49 -- 
common/autotest_common.sh@817 -- # '[' -z 1935945 ']' 00:23:12.336 05:18:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:12.336 05:18:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:12.336 05:18:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:12.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:12.336 05:18:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:12.336 05:18:49 -- common/autotest_common.sh@10 -- # set +x 00:23:12.336 [2024-04-24 05:18:49.418706] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:23:12.336 [2024-04-24 05:18:49.418781] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:12.336 EAL: No free 2048 kB hugepages reported on node 1 00:23:12.336 [2024-04-24 05:18:49.459413] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:12.336 [2024-04-24 05:18:49.485547] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:12.336 [2024-04-24 05:18:49.569062] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:12.336 [2024-04-24 05:18:49.569117] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:12.336 [2024-04-24 05:18:49.569148] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:12.336 [2024-04-24 05:18:49.569160] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:12.336 [2024-04-24 05:18:49.569170] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:12.336 [2024-04-24 05:18:49.569302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.336 [2024-04-24 05:18:49.569325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:12.336 [2024-04-24 05:18:49.569384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:12.336 [2024-04-24 05:18:49.569387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:12.595 05:18:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:12.595 05:18:49 -- common/autotest_common.sh@850 -- # return 0 00:23:12.595 05:18:49 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:12.595 05:18:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:12.595 05:18:49 -- common/autotest_common.sh@10 -- # set +x 00:23:12.595 05:18:49 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:12.595 05:18:49 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:12.595 05:18:49 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:12.595 05:18:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:12.595 05:18:49 -- common/autotest_common.sh@10 -- # set +x 00:23:12.595 Malloc0 00:23:12.595 05:18:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:12.595 05:18:49 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:23:12.595 05:18:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:12.595 05:18:49 -- common/autotest_common.sh@10 -- # set +x 00:23:12.595 Delay0 00:23:12.595 05:18:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:12.595 05:18:49 -- target/initiator_timeout.sh@24 -- # rpc_cmd 
nvmf_create_transport -t tcp -o -u 8192 00:23:12.595 05:18:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:12.595 05:18:49 -- common/autotest_common.sh@10 -- # set +x 00:23:12.595 [2024-04-24 05:18:49.734722] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:12.595 05:18:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:12.595 05:18:49 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:12.595 05:18:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:12.595 05:18:49 -- common/autotest_common.sh@10 -- # set +x 00:23:12.595 05:18:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:12.595 05:18:49 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:12.595 05:18:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:12.595 05:18:49 -- common/autotest_common.sh@10 -- # set +x 00:23:12.595 05:18:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:12.595 05:18:49 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:12.595 05:18:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:12.595 05:18:49 -- common/autotest_common.sh@10 -- # set +x 00:23:12.595 [2024-04-24 05:18:49.762972] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:12.595 05:18:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:12.595 05:18:49 -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:13.533 05:18:50 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:23:13.533 05:18:50 -- common/autotest_common.sh@1184 -- # local i=0 00:23:13.533 
05:18:50 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:23:13.533 05:18:50 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:23:13.533 05:18:50 -- common/autotest_common.sh@1191 -- # sleep 2 00:23:15.433 05:18:52 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:23:15.433 05:18:52 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:23:15.433 05:18:52 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:23:15.433 05:18:52 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:23:15.433 05:18:52 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:23:15.433 05:18:52 -- common/autotest_common.sh@1194 -- # return 0 00:23:15.433 05:18:52 -- target/initiator_timeout.sh@35 -- # fio_pid=1936370 00:23:15.433 05:18:52 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:23:15.433 05:18:52 -- target/initiator_timeout.sh@37 -- # sleep 3 00:23:15.433 [global] 00:23:15.433 thread=1 00:23:15.433 invalidate=1 00:23:15.433 rw=write 00:23:15.433 time_based=1 00:23:15.433 runtime=60 00:23:15.433 ioengine=libaio 00:23:15.433 direct=1 00:23:15.433 bs=4096 00:23:15.433 iodepth=1 00:23:15.433 norandommap=0 00:23:15.433 numjobs=1 00:23:15.433 00:23:15.433 verify_dump=1 00:23:15.433 verify_backlog=512 00:23:15.433 verify_state_save=0 00:23:15.433 do_verify=1 00:23:15.433 verify=crc32c-intel 00:23:15.433 [job0] 00:23:15.433 filename=/dev/nvme0n1 00:23:15.433 Could not set queue depth (nvme0n1) 00:23:15.433 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:15.433 fio-3.35 00:23:15.433 Starting 1 thread 00:23:18.720 05:18:55 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:23:18.720 05:18:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.720 05:18:55 -- 
common/autotest_common.sh@10 -- # set +x 00:23:18.720 true 00:23:18.720 05:18:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.720 05:18:55 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:23:18.720 05:18:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.720 05:18:55 -- common/autotest_common.sh@10 -- # set +x 00:23:18.720 true 00:23:18.720 05:18:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.720 05:18:55 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:23:18.720 05:18:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.720 05:18:55 -- common/autotest_common.sh@10 -- # set +x 00:23:18.720 true 00:23:18.720 05:18:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.720 05:18:55 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:23:18.720 05:18:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.720 05:18:55 -- common/autotest_common.sh@10 -- # set +x 00:23:18.720 true 00:23:18.720 05:18:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.720 05:18:55 -- target/initiator_timeout.sh@45 -- # sleep 3 00:23:21.254 05:18:58 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:23:21.254 05:18:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:21.254 05:18:58 -- common/autotest_common.sh@10 -- # set +x 00:23:21.254 true 00:23:21.254 05:18:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:21.254 05:18:58 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:23:21.254 05:18:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:21.254 05:18:58 -- common/autotest_common.sh@10 -- # set +x 00:23:21.254 true 00:23:21.254 05:18:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:21.254 05:18:58 -- 
target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:23:21.254 05:18:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:21.254 05:18:58 -- common/autotest_common.sh@10 -- # set +x 00:23:21.254 true 00:23:21.254 05:18:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:21.254 05:18:58 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:23:21.254 05:18:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:21.254 05:18:58 -- common/autotest_common.sh@10 -- # set +x 00:23:21.254 true 00:23:21.254 05:18:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:21.254 05:18:58 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:23:21.254 05:18:58 -- target/initiator_timeout.sh@54 -- # wait 1936370 00:24:17.507 00:24:17.507 job0: (groupid=0, jobs=1): err= 0: pid=1936443: Wed Apr 24 05:19:52 2024 00:24:17.507 read: IOPS=83, BW=336KiB/s (344kB/s)(19.7MiB/60002msec) 00:24:17.507 slat (nsec): min=6185, max=29989, avg=8310.40, stdev=3026.69 00:24:17.507 clat (usec): min=283, max=40929k, avg=11641.79, stdev=576914.03 00:24:17.507 lat (usec): min=290, max=40929k, avg=11650.10, stdev=576914.16 00:24:17.507 clat percentiles (usec): 00:24:17.507 | 1.00th=[ 293], 5.00th=[ 302], 10.00th=[ 306], 00:24:17.507 | 20.00th=[ 310], 30.00th=[ 314], 40.00th=[ 318], 00:24:17.507 | 50.00th=[ 322], 60.00th=[ 326], 70.00th=[ 334], 00:24:17.507 | 80.00th=[ 355], 90.00th=[ 515], 95.00th=[ 41157], 00:24:17.507 | 99.00th=[ 41157], 99.50th=[ 41157], 99.90th=[ 42206], 00:24:17.507 | 99.95th=[ 43779], 99.99th=[17112761] 00:24:17.507 write: IOPS=85, BW=341KiB/s (350kB/s)(20.0MiB/60002msec); 0 zone resets 00:24:17.507 slat (usec): min=6, max=25880, avg=14.73, stdev=361.56 00:24:17.507 clat (usec): min=202, max=481, avg=244.05, stdev=41.11 00:24:17.507 lat (usec): min=211, max=26362, avg=258.78, stdev=367.28 00:24:17.507 clat percentiles (usec): 00:24:17.507 | 1.00th=[ 210], 5.00th=[ 215], 
10.00th=[ 219], 20.00th=[ 223], 00:24:17.507 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 231], 60.00th=[ 235], 00:24:17.507 | 70.00th=[ 239], 80.00th=[ 245], 90.00th=[ 326], 95.00th=[ 363], 00:24:17.507 | 99.00th=[ 375], 99.50th=[ 379], 99.90th=[ 388], 99.95th=[ 412], 00:24:17.507 | 99.99th=[ 482] 00:24:17.507 bw ( KiB/s): min= 4096, max= 8192, per=100.00%, avg=5851.43, stdev=1421.96, samples=7 00:24:17.507 iops : min= 1024, max= 2048, avg=1462.86, stdev=355.49, samples=7 00:24:17.507 lat (usec) : 250=42.32%, 500=52.26%, 750=1.55% 00:24:17.507 lat (msec) : 50=3.87%, >=2000=0.01% 00:24:17.507 cpu : usr=0.11%, sys=0.19%, ctx=10157, majf=0, minf=2 00:24:17.507 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:17.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:17.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:17.507 issued rwts: total=5034,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:17.507 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:17.507 00:24:17.507 Run status group 0 (all jobs): 00:24:17.507 READ: bw=336KiB/s (344kB/s), 336KiB/s-336KiB/s (344kB/s-344kB/s), io=19.7MiB (20.6MB), run=60002-60002msec 00:24:17.507 WRITE: bw=341KiB/s (350kB/s), 341KiB/s-341KiB/s (350kB/s-350kB/s), io=20.0MiB (21.0MB), run=60002-60002msec 00:24:17.507 00:24:17.507 Disk stats (read/write): 00:24:17.507 nvme0n1: ios=5083/5120, merge=0/0, ticks=18860/1226, in_queue=20086, util=99.65% 00:24:17.507 05:19:52 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:17.507 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:17.507 05:19:52 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:17.507 05:19:52 -- common/autotest_common.sh@1205 -- # local i=0 00:24:17.507 05:19:52 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:24:17.507 05:19:52 -- common/autotest_common.sh@1206 
-- # grep -q -w SPDKISFASTANDAWESOME 00:24:17.507 05:19:52 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:24:17.507 05:19:52 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:17.507 05:19:52 -- common/autotest_common.sh@1217 -- # return 0 00:24:17.507 05:19:52 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:24:17.507 05:19:52 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:24:17.507 nvmf hotplug test: fio successful as expected 00:24:17.507 05:19:52 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:17.507 05:19:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.507 05:19:52 -- common/autotest_common.sh@10 -- # set +x 00:24:17.507 05:19:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.507 05:19:52 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:24:17.507 05:19:52 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:24:17.507 05:19:52 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:24:17.507 05:19:52 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:17.507 05:19:52 -- nvmf/common.sh@117 -- # sync 00:24:17.507 05:19:52 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:17.507 05:19:52 -- nvmf/common.sh@120 -- # set +e 00:24:17.507 05:19:52 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:17.507 05:19:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:17.507 rmmod nvme_tcp 00:24:17.507 rmmod nvme_fabrics 00:24:17.507 rmmod nvme_keyring 00:24:17.507 05:19:53 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:17.507 05:19:53 -- nvmf/common.sh@124 -- # set -e 00:24:17.508 05:19:53 -- nvmf/common.sh@125 -- # return 0 00:24:17.508 05:19:53 -- nvmf/common.sh@478 -- # '[' -n 1935945 ']' 00:24:17.508 05:19:53 -- nvmf/common.sh@479 -- # killprocess 1935945 00:24:17.508 05:19:53 -- common/autotest_common.sh@936 -- # '[' -z 1935945 
']' 00:24:17.508 05:19:53 -- common/autotest_common.sh@940 -- # kill -0 1935945 00:24:17.508 05:19:53 -- common/autotest_common.sh@941 -- # uname 00:24:17.508 05:19:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:17.508 05:19:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1935945 00:24:17.508 05:19:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:17.508 05:19:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:17.508 05:19:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1935945' 00:24:17.508 killing process with pid 1935945 00:24:17.508 05:19:53 -- common/autotest_common.sh@955 -- # kill 1935945 00:24:17.508 05:19:53 -- common/autotest_common.sh@960 -- # wait 1935945 00:24:17.508 05:19:53 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:17.508 05:19:53 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:17.508 05:19:53 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:17.508 05:19:53 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:17.508 05:19:53 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:17.508 05:19:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.508 05:19:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:17.508 05:19:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.448 05:19:55 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:18.448 00:24:18.448 real 1m8.127s 00:24:18.448 user 4m10.813s 00:24:18.448 sys 0m6.708s 00:24:18.448 05:19:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:18.448 05:19:55 -- common/autotest_common.sh@10 -- # set +x 00:24:18.448 ************************************ 00:24:18.448 END TEST nvmf_initiator_timeout 00:24:18.448 ************************************ 00:24:18.448 05:19:55 -- nvmf/nvmf.sh@70 -- # [[ phy == phy ]] 00:24:18.448 05:19:55 -- nvmf/nvmf.sh@71 -- # '[' tcp = tcp ']' 00:24:18.448 
05:19:55 -- nvmf/nvmf.sh@72 -- # gather_supported_nvmf_pci_devs 00:24:18.448 05:19:55 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:18.448 05:19:55 -- common/autotest_common.sh@10 -- # set +x 00:24:20.351 05:19:57 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:20.351 05:19:57 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:20.351 05:19:57 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:20.351 05:19:57 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:20.351 05:19:57 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:20.351 05:19:57 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:20.351 05:19:57 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:20.351 05:19:57 -- nvmf/common.sh@295 -- # net_devs=() 00:24:20.351 05:19:57 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:20.351 05:19:57 -- nvmf/common.sh@296 -- # e810=() 00:24:20.351 05:19:57 -- nvmf/common.sh@296 -- # local -ga e810 00:24:20.351 05:19:57 -- nvmf/common.sh@297 -- # x722=() 00:24:20.351 05:19:57 -- nvmf/common.sh@297 -- # local -ga x722 00:24:20.351 05:19:57 -- nvmf/common.sh@298 -- # mlx=() 00:24:20.351 05:19:57 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:20.351 05:19:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:20.351 05:19:57 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:20.351 05:19:57 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:20.351 05:19:57 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:20.351 05:19:57 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:20.351 05:19:57 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:20.351 05:19:57 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:20.351 05:19:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:20.351 05:19:57 -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:20.351 05:19:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:20.351 05:19:57 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:20.351 05:19:57 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:20.351 05:19:57 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:20.351 05:19:57 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:20.351 05:19:57 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:20.351 05:19:57 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:20.351 05:19:57 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:20.351 05:19:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:20.351 05:19:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:20.351 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:20.351 05:19:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:20.351 05:19:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:20.351 05:19:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:20.351 05:19:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:20.351 05:19:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:20.351 05:19:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:20.351 05:19:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:20.351 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:20.351 05:19:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:20.351 05:19:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:20.351 05:19:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:20.351 05:19:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:20.351 05:19:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:20.351 05:19:57 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:20.351 05:19:57 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:20.351 05:19:57 -- nvmf/common.sh@372 -- # [[ tcp == 
rdma ]] 00:24:20.351 05:19:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:20.351 05:19:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.351 05:19:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:20.351 05:19:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.351 05:19:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:20.351 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:20.351 05:19:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.351 05:19:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:20.351 05:19:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.351 05:19:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:20.351 05:19:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.351 05:19:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:20.351 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:20.351 05:19:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.351 05:19:57 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:20.351 05:19:57 -- nvmf/nvmf.sh@73 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:20.351 05:19:57 -- nvmf/nvmf.sh@74 -- # (( 2 > 0 )) 00:24:20.351 05:19:57 -- nvmf/nvmf.sh@75 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:24:20.351 05:19:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:20.351 05:19:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:20.351 05:19:57 -- common/autotest_common.sh@10 -- # set +x 00:24:20.351 ************************************ 00:24:20.351 START TEST nvmf_perf_adq 00:24:20.351 ************************************ 00:24:20.351 05:19:57 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:24:20.351 * Looking for test storage... 00:24:20.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:20.351 05:19:57 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:20.351 05:19:57 -- nvmf/common.sh@7 -- # uname -s 00:24:20.351 05:19:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:20.351 05:19:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:20.351 05:19:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:20.351 05:19:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:20.351 05:19:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:20.351 05:19:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:20.351 05:19:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:20.351 05:19:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:20.351 05:19:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:20.351 05:19:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:20.351 05:19:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:20.351 05:19:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:20.351 05:19:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:20.351 05:19:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:20.351 05:19:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:20.351 05:19:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:20.351 05:19:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:20.351 05:19:57 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:20.351 05:19:57 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
00:24:20.351 05:19:57 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:20.351 05:19:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.351 05:19:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.351 05:19:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.351 05:19:57 -- paths/export.sh@5 -- # export PATH 00:24:20.351 05:19:57 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.351 05:19:57 -- nvmf/common.sh@47 -- # : 0 00:24:20.351 05:19:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:20.351 05:19:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:20.351 05:19:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:20.351 05:19:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:20.351 05:19:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:20.351 05:19:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:20.351 05:19:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:20.352 05:19:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:20.352 05:19:57 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:24:20.352 05:19:57 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:20.352 05:19:57 -- common/autotest_common.sh@10 -- # set +x 00:24:22.255 05:19:59 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:22.255 05:19:59 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:22.255 05:19:59 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:22.255 05:19:59 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:22.255 05:19:59 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:22.255 05:19:59 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:22.255 05:19:59 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:22.255 05:19:59 -- nvmf/common.sh@295 -- # net_devs=() 00:24:22.255 05:19:59 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:22.255 05:19:59 
-- nvmf/common.sh@296 -- # e810=() 00:24:22.255 05:19:59 -- nvmf/common.sh@296 -- # local -ga e810 00:24:22.255 05:19:59 -- nvmf/common.sh@297 -- # x722=() 00:24:22.255 05:19:59 -- nvmf/common.sh@297 -- # local -ga x722 00:24:22.255 05:19:59 -- nvmf/common.sh@298 -- # mlx=() 00:24:22.255 05:19:59 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:22.255 05:19:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:22.255 05:19:59 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:22.255 05:19:59 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:22.255 05:19:59 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:22.255 05:19:59 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:22.255 05:19:59 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:22.255 05:19:59 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:22.255 05:19:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:22.255 05:19:59 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:22.255 05:19:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:22.255 05:19:59 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:22.255 05:19:59 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:22.255 05:19:59 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:22.255 05:19:59 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:22.255 05:19:59 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:22.255 05:19:59 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:22.255 05:19:59 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:22.255 05:19:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:22.255 05:19:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:22.255 Found 0000:0a:00.0 (0x8086 - 0x159b) 
00:24:22.255 05:19:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:22.255 05:19:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:22.255 05:19:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:22.255 05:19:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:22.255 05:19:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:22.256 05:19:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:22.256 05:19:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:22.256 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:22.256 05:19:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:22.256 05:19:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:22.256 05:19:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:22.256 05:19:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:22.256 05:19:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:22.256 05:19:59 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:22.256 05:19:59 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:22.256 05:19:59 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:22.256 05:19:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:22.256 05:19:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.256 05:19:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:22.256 05:19:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.256 05:19:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:22.256 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:22.256 05:19:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.256 05:19:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:22.256 05:19:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.256 05:19:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:22.256 05:19:59 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.256 05:19:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:22.256 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:22.256 05:19:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.256 05:19:59 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:22.256 05:19:59 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:22.256 05:19:59 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:24:22.256 05:19:59 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:22.256 05:19:59 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:24:22.256 05:19:59 -- target/perf_adq.sh@52 -- # rmmod ice 00:24:22.821 05:20:00 -- target/perf_adq.sh@53 -- # modprobe ice 00:24:24.726 05:20:01 -- target/perf_adq.sh@54 -- # sleep 5 00:24:30.006 05:20:06 -- target/perf_adq.sh@67 -- # nvmftestinit 00:24:30.006 05:20:06 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:30.006 05:20:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:30.006 05:20:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:30.006 05:20:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:30.006 05:20:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:30.006 05:20:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.006 05:20:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:30.006 05:20:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.006 05:20:06 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:24:30.006 05:20:06 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:30.006 05:20:06 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:30.006 05:20:06 -- common/autotest_common.sh@10 -- # set +x 00:24:30.006 05:20:06 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:30.006 05:20:06 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:30.006 
05:20:06 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:30.006 05:20:06 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:30.006 05:20:06 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:30.006 05:20:06 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:30.006 05:20:06 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:30.006 05:20:06 -- nvmf/common.sh@295 -- # net_devs=() 00:24:30.006 05:20:06 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:30.006 05:20:06 -- nvmf/common.sh@296 -- # e810=() 00:24:30.006 05:20:06 -- nvmf/common.sh@296 -- # local -ga e810 00:24:30.006 05:20:06 -- nvmf/common.sh@297 -- # x722=() 00:24:30.006 05:20:06 -- nvmf/common.sh@297 -- # local -ga x722 00:24:30.006 05:20:06 -- nvmf/common.sh@298 -- # mlx=() 00:24:30.006 05:20:06 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:30.006 05:20:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:30.006 05:20:06 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:30.006 05:20:06 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:30.006 05:20:06 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:30.006 05:20:06 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:30.006 05:20:06 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:30.006 05:20:06 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:30.006 05:20:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:30.006 05:20:06 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:30.006 05:20:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:30.006 05:20:06 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:30.006 05:20:06 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:30.006 05:20:06 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:30.006 05:20:06 -- 
nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:30.006 05:20:06 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:30.006 05:20:06 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:30.006 05:20:07 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:30.006 05:20:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:30.006 05:20:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:30.006 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:30.006 05:20:07 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:30.006 05:20:07 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:30.006 05:20:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:30.006 05:20:07 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:30.006 05:20:07 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:30.006 05:20:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:30.006 05:20:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:30.006 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:30.006 05:20:07 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:30.007 05:20:07 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:30.007 05:20:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:30.007 05:20:07 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:30.007 05:20:07 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:30.007 05:20:07 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:30.007 05:20:07 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:30.007 05:20:07 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:30.007 05:20:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:30.007 05:20:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:30.007 05:20:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:30.007 05:20:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:30.007 05:20:07 -- nvmf/common.sh@389 -- # echo 'Found net 
devices under 0000:0a:00.0: cvl_0_0' 00:24:30.007 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:30.007 05:20:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:30.007 05:20:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:30.007 05:20:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:30.007 05:20:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:30.007 05:20:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:30.007 05:20:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:30.007 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:30.007 05:20:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:30.007 05:20:07 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:30.007 05:20:07 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:30.007 05:20:07 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:30.007 05:20:07 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:30.007 05:20:07 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:30.007 05:20:07 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:30.007 05:20:07 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:30.007 05:20:07 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:30.007 05:20:07 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:30.007 05:20:07 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:30.007 05:20:07 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:30.007 05:20:07 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:30.007 05:20:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:30.007 05:20:07 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:30.007 05:20:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:30.007 05:20:07 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:30.007 05:20:07 -- nvmf/common.sh@248 -- # ip 
netns add cvl_0_0_ns_spdk 00:24:30.007 05:20:07 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:30.007 05:20:07 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:30.007 05:20:07 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:30.007 05:20:07 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:30.007 05:20:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:30.007 05:20:07 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:30.007 05:20:07 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:30.007 05:20:07 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:30.007 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:30.007 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:24:30.007 00:24:30.007 --- 10.0.0.2 ping statistics --- 00:24:30.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:30.007 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:24:30.007 05:20:07 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:30.007 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:30.007 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:24:30.007 00:24:30.007 --- 10.0.0.1 ping statistics --- 00:24:30.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:30.007 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:24:30.007 05:20:07 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:30.007 05:20:07 -- nvmf/common.sh@411 -- # return 0 00:24:30.007 05:20:07 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:30.007 05:20:07 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:30.007 05:20:07 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:30.007 05:20:07 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:30.007 05:20:07 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:30.007 05:20:07 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:30.007 05:20:07 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:30.007 05:20:07 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:30.007 05:20:07 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:30.007 05:20:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:30.007 05:20:07 -- common/autotest_common.sh@10 -- # set +x 00:24:30.007 05:20:07 -- nvmf/common.sh@470 -- # nvmfpid=1947977 00:24:30.007 05:20:07 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:30.007 05:20:07 -- nvmf/common.sh@471 -- # waitforlisten 1947977 00:24:30.007 05:20:07 -- common/autotest_common.sh@817 -- # '[' -z 1947977 ']' 00:24:30.007 05:20:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:30.007 05:20:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:30.007 05:20:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:30.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:30.007 05:20:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:30.007 05:20:07 -- common/autotest_common.sh@10 -- # set +x 00:24:30.007 [2024-04-24 05:20:07.223522] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:24:30.007 [2024-04-24 05:20:07.223592] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:30.007 EAL: No free 2048 kB hugepages reported on node 1 00:24:30.007 [2024-04-24 05:20:07.262411] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:30.267 [2024-04-24 05:20:07.289258] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:30.267 [2024-04-24 05:20:07.374217] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:30.267 [2024-04-24 05:20:07.374268] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:30.267 [2024-04-24 05:20:07.374290] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:30.267 [2024-04-24 05:20:07.374302] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:30.267 [2024-04-24 05:20:07.374314] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:30.267 [2024-04-24 05:20:07.374369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:30.267 [2024-04-24 05:20:07.374393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:30.267 [2024-04-24 05:20:07.374439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:30.267 [2024-04-24 05:20:07.374441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.267 05:20:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:30.267 05:20:07 -- common/autotest_common.sh@850 -- # return 0 00:24:30.267 05:20:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:30.267 05:20:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:30.267 05:20:07 -- common/autotest_common.sh@10 -- # set +x 00:24:30.267 05:20:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:30.267 05:20:07 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:24:30.267 05:20:07 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:24:30.267 05:20:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.267 05:20:07 -- common/autotest_common.sh@10 -- # set +x 00:24:30.267 05:20:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.267 05:20:07 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:24:30.267 05:20:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.267 05:20:07 -- common/autotest_common.sh@10 -- # set +x 00:24:30.530 05:20:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.530 05:20:07 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:24:30.530 05:20:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.530 05:20:07 -- common/autotest_common.sh@10 -- # set +x 00:24:30.530 [2024-04-24 05:20:07.566553] tcp.c: 669:nvmf_tcp_create: *NOTICE*: 
*** TCP Transport Init *** 00:24:30.530 05:20:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.530 05:20:07 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:30.530 05:20:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.530 05:20:07 -- common/autotest_common.sh@10 -- # set +x 00:24:30.530 Malloc1 00:24:30.530 05:20:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.530 05:20:07 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:30.530 05:20:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.530 05:20:07 -- common/autotest_common.sh@10 -- # set +x 00:24:30.530 05:20:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.530 05:20:07 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:30.530 05:20:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.530 05:20:07 -- common/autotest_common.sh@10 -- # set +x 00:24:30.530 05:20:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.530 05:20:07 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:30.531 05:20:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.531 05:20:07 -- common/autotest_common.sh@10 -- # set +x 00:24:30.531 [2024-04-24 05:20:07.619867] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:30.531 05:20:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.531 05:20:07 -- target/perf_adq.sh@73 -- # perfpid=1948006 00:24:30.531 05:20:07 -- target/perf_adq.sh@74 -- # sleep 2 00:24:30.531 05:20:07 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
00:24:30.531 EAL: No free 2048 kB hugepages reported on node 1 00:24:32.451 05:20:09 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:24:32.451 05:20:09 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:24:32.451 05:20:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.451 05:20:09 -- common/autotest_common.sh@10 -- # set +x 00:24:32.451 05:20:09 -- target/perf_adq.sh@76 -- # wc -l 00:24:32.451 05:20:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.451 05:20:09 -- target/perf_adq.sh@76 -- # count=4 00:24:32.451 05:20:09 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:24:32.451 05:20:09 -- target/perf_adq.sh@81 -- # wait 1948006 00:24:40.565 Initializing NVMe Controllers 00:24:40.565 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:40.565 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:40.565 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:40.565 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:40.565 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:40.565 Initialization complete. Launching workers. 
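[editor's note] The `nvmf_get_stats | jq | wc -l` check above verifies that ADQ steered each I/O qpair onto its own poll group. A minimal sketch of the same jq filter against a hypothetical stats payload (the JSON structure is inferred from the jq expression in this log; the values are illustrative, not captured output):

```shell
# Hypothetical nvmf_get_stats payload: 4 poll groups, each owning one
# active I/O qpair -- the shape the jq filter in this log selects on.
stats='{"poll_groups":[{"current_io_qpairs":1},{"current_io_qpairs":1},{"current_io_qpairs":1},{"current_io_qpairs":1}]}'

# Emit one line per poll group whose current_io_qpairs == 1, then count
# the lines; the log's check expects the count to equal the core count.
count=$(echo "$stats" \
  | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
  | wc -l)
echo "$count"
```

This mirrors the log's `count=4` / `[[ 4 -ne 4 ]]` gate: with ADQ placement working, every poll group owns exactly one qpair.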
00:24:40.565 ======================================================== 00:24:40.565 Latency(us) 00:24:40.565 Device Information : IOPS MiB/s Average min max 00:24:40.565 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9835.90 38.42 6506.21 2116.45 10333.04 00:24:40.565 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10325.20 40.33 6199.36 2240.39 10276.50 00:24:40.565 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10565.10 41.27 6059.94 4216.46 7803.88 00:24:40.566 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9644.80 37.67 6637.84 5839.54 7857.21 00:24:40.566 ======================================================== 00:24:40.566 Total : 40370.99 157.70 6342.39 2116.45 10333.04 00:24:40.566 00:24:40.566 05:20:17 -- target/perf_adq.sh@82 -- # nvmftestfini 00:24:40.566 05:20:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:40.566 05:20:17 -- nvmf/common.sh@117 -- # sync 00:24:40.566 05:20:17 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:40.566 05:20:17 -- nvmf/common.sh@120 -- # set +e 00:24:40.566 05:20:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:40.566 05:20:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:40.566 rmmod nvme_tcp 00:24:40.566 rmmod nvme_fabrics 00:24:40.566 rmmod nvme_keyring 00:24:40.566 05:20:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:40.566 05:20:17 -- nvmf/common.sh@124 -- # set -e 00:24:40.566 05:20:17 -- nvmf/common.sh@125 -- # return 0 00:24:40.566 05:20:17 -- nvmf/common.sh@478 -- # '[' -n 1947977 ']' 00:24:40.566 05:20:17 -- nvmf/common.sh@479 -- # killprocess 1947977 00:24:40.566 05:20:17 -- common/autotest_common.sh@936 -- # '[' -z 1947977 ']' 00:24:40.566 05:20:17 -- common/autotest_common.sh@940 -- # kill -0 1947977 00:24:40.566 05:20:17 -- common/autotest_common.sh@941 -- # uname 00:24:40.566 05:20:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:40.566 05:20:17 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1947977 00:24:40.825 05:20:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:40.825 05:20:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:40.825 05:20:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1947977' 00:24:40.825 killing process with pid 1947977 00:24:40.825 05:20:17 -- common/autotest_common.sh@955 -- # kill 1947977 00:24:40.825 05:20:17 -- common/autotest_common.sh@960 -- # wait 1947977 00:24:41.086 05:20:18 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:41.086 05:20:18 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:41.086 05:20:18 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:41.086 05:20:18 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:41.086 05:20:18 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:41.086 05:20:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:41.086 05:20:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:41.086 05:20:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:42.994 05:20:20 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:42.994 05:20:20 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:24:42.994 05:20:20 -- target/perf_adq.sh@52 -- # rmmod ice 00:24:43.561 05:20:20 -- target/perf_adq.sh@53 -- # modprobe ice 00:24:45.462 05:20:22 -- target/perf_adq.sh@54 -- # sleep 5 00:24:50.735 05:20:27 -- target/perf_adq.sh@87 -- # nvmftestinit 00:24:50.735 05:20:27 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:50.735 05:20:27 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:50.735 05:20:27 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:50.735 05:20:27 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:50.735 05:20:27 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:50.735 05:20:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.735 
05:20:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:50.735 05:20:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:50.735 05:20:27 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:24:50.735 05:20:27 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:50.735 05:20:27 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:50.735 05:20:27 -- common/autotest_common.sh@10 -- # set +x 00:24:50.735 05:20:27 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:50.735 05:20:27 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:50.735 05:20:27 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:50.735 05:20:27 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:50.735 05:20:27 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:50.735 05:20:27 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:50.735 05:20:27 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:50.735 05:20:27 -- nvmf/common.sh@295 -- # net_devs=() 00:24:50.735 05:20:27 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:50.735 05:20:27 -- nvmf/common.sh@296 -- # e810=() 00:24:50.735 05:20:27 -- nvmf/common.sh@296 -- # local -ga e810 00:24:50.735 05:20:27 -- nvmf/common.sh@297 -- # x722=() 00:24:50.735 05:20:27 -- nvmf/common.sh@297 -- # local -ga x722 00:24:50.735 05:20:27 -- nvmf/common.sh@298 -- # mlx=() 00:24:50.735 05:20:27 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:50.735 05:20:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:50.735 05:20:27 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:50.735 05:20:27 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:50.735 05:20:27 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:50.735 05:20:27 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:50.735 05:20:27 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:50.735 05:20:27 -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:50.735 05:20:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:50.735 05:20:27 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:50.735 05:20:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:50.735 05:20:27 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:50.735 05:20:27 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:50.735 05:20:27 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:50.735 05:20:27 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:50.735 05:20:27 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:50.735 05:20:27 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:50.735 05:20:27 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:50.735 05:20:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:50.735 05:20:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:50.735 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:50.735 05:20:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:50.735 05:20:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:50.735 05:20:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:50.735 05:20:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:50.735 05:20:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:50.735 05:20:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:50.735 05:20:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:50.735 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:50.735 05:20:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:50.735 05:20:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:50.735 05:20:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:50.735 05:20:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:50.735 05:20:27 -- nvmf/common.sh@352 -- # 
[[ tcp == rdma ]] 00:24:50.735 05:20:27 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:50.735 05:20:27 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:50.735 05:20:27 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:50.735 05:20:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:50.735 05:20:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:50.735 05:20:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:50.735 05:20:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:50.735 05:20:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:50.735 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:50.735 05:20:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:50.735 05:20:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:50.735 05:20:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:50.735 05:20:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:50.735 05:20:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:50.735 05:20:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:50.735 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:50.735 05:20:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:50.735 05:20:27 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:50.735 05:20:27 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:50.735 05:20:27 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:50.735 05:20:27 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:50.735 05:20:27 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:50.735 05:20:27 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:50.735 05:20:27 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:50.735 05:20:27 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:50.735 05:20:27 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:50.735 05:20:27 -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:50.735 05:20:27 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:50.735 05:20:27 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:50.735 05:20:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:50.735 05:20:27 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:50.735 05:20:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:50.735 05:20:27 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:50.735 05:20:27 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:50.735 05:20:27 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:50.735 05:20:27 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:50.735 05:20:27 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:50.735 05:20:27 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:50.735 05:20:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:50.735 05:20:27 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:50.735 05:20:27 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:50.735 05:20:27 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:50.735 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:50.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:24:50.735 00:24:50.735 --- 10.0.0.2 ping statistics --- 00:24:50.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.735 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:24:50.735 05:20:27 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:50.735 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:50.735 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:24:50.735 00:24:50.735 --- 10.0.0.1 ping statistics --- 00:24:50.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.735 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:24:50.735 05:20:27 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:50.735 05:20:27 -- nvmf/common.sh@411 -- # return 0 00:24:50.736 05:20:27 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:50.736 05:20:27 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:50.736 05:20:27 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:50.736 05:20:27 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:50.736 05:20:27 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:50.736 05:20:27 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:50.736 05:20:27 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:50.736 05:20:27 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:24:50.736 05:20:27 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:24:50.736 05:20:27 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:24:50.736 05:20:27 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:24:50.736 net.core.busy_poll = 1 00:24:50.736 05:20:27 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:24:50.736 net.core.busy_read = 1 00:24:50.736 05:20:27 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:24:50.736 05:20:27 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:24:50.736 05:20:27 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:24:50.736 05:20:27 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev 
cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:24:50.736 05:20:27 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:24:50.994 05:20:28 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:50.994 05:20:28 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:50.994 05:20:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:50.994 05:20:28 -- common/autotest_common.sh@10 -- # set +x 00:24:50.994 05:20:28 -- nvmf/common.sh@470 -- # nvmfpid=1950624 00:24:50.994 05:20:28 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:50.994 05:20:28 -- nvmf/common.sh@471 -- # waitforlisten 1950624 00:24:50.994 05:20:28 -- common/autotest_common.sh@817 -- # '[' -z 1950624 ']' 00:24:50.994 05:20:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:50.994 05:20:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:50.994 05:20:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:50.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:50.994 05:20:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:50.994 05:20:28 -- common/autotest_common.sh@10 -- # set +x 00:24:50.994 [2024-04-24 05:20:28.073853] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 
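[editor's note] The ethtool/sysctl/tc sequence traced above is the ADQ driver configuration for the target interface. Collected into one place as a sketch (interface name `cvl_0_0` and the `cvl_0_0_ns_spdk` namespace are as used in this run; this is a root- and E810-hardware-dependent config fragment, not a portable script):

```shell
# ADQ setup on the ice/E810 target interface, as run in this log.
NS="ip netns exec cvl_0_0_ns_spdk"
IF=cvl_0_0

$NS ethtool --offload $IF hw-tc-offload on               # enable TC hardware offload
$NS ethtool --set-priv-flags $IF channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1                           # busy-poll sockets instead of sleeping
sysctl -w net.core.busy_read=1

# Two traffic classes: TC0 -> queues 0-1 (default), TC1 -> queues 2-3 (ADQ)
$NS tc qdisc add dev $IF root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
$NS tc qdisc add dev $IF ingress
# Steer NVMe/TCP traffic (TCP port 4420 to 10.0.0.2) into TC1, hardware-only
$NS tc filter add dev $IF protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
```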
00:24:50.994 [2024-04-24 05:20:28.073927] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:50.994 EAL: No free 2048 kB hugepages reported on node 1 00:24:50.994 [2024-04-24 05:20:28.112479] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:50.994 [2024-04-24 05:20:28.141029] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:50.994 [2024-04-24 05:20:28.228788] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:50.994 [2024-04-24 05:20:28.228843] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:50.994 [2024-04-24 05:20:28.228872] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:50.994 [2024-04-24 05:20:28.228884] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:50.994 [2024-04-24 05:20:28.228894] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:50.994 [2024-04-24 05:20:28.228975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:50.994 [2024-04-24 05:20:28.229045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:50.994 [2024-04-24 05:20:28.229108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:50.994 [2024-04-24 05:20:28.229111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.253 05:20:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:51.253 05:20:28 -- common/autotest_common.sh@850 -- # return 0 00:24:51.253 05:20:28 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:51.253 05:20:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:51.253 05:20:28 -- common/autotest_common.sh@10 -- # set +x 00:24:51.253 05:20:28 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:51.253 05:20:28 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:24:51.253 05:20:28 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:24:51.253 05:20:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.253 05:20:28 -- common/autotest_common.sh@10 -- # set +x 00:24:51.253 05:20:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.253 05:20:28 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:24:51.253 05:20:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.253 05:20:28 -- common/autotest_common.sh@10 -- # set +x 00:24:51.253 05:20:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.253 05:20:28 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:24:51.253 05:20:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.253 05:20:28 -- common/autotest_common.sh@10 -- # set +x 00:24:51.253 [2024-04-24 05:20:28.435570] tcp.c: 669:nvmf_tcp_create: *NOTICE*: 
*** TCP Transport Init *** 00:24:51.253 05:20:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.253 05:20:28 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:51.253 05:20:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.253 05:20:28 -- common/autotest_common.sh@10 -- # set +x 00:24:51.253 Malloc1 00:24:51.253 05:20:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.253 05:20:28 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:51.253 05:20:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.253 05:20:28 -- common/autotest_common.sh@10 -- # set +x 00:24:51.253 05:20:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.253 05:20:28 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:51.253 05:20:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.253 05:20:28 -- common/autotest_common.sh@10 -- # set +x 00:24:51.253 05:20:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.253 05:20:28 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:51.253 05:20:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.253 05:20:28 -- common/autotest_common.sh@10 -- # set +x 00:24:51.253 [2024-04-24 05:20:28.488739] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:51.253 05:20:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.253 05:20:28 -- target/perf_adq.sh@94 -- # perfpid=1950765 00:24:51.253 05:20:28 -- target/perf_adq.sh@95 -- # sleep 2 00:24:51.253 05:20:28 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
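[editor's note] The `rpc_cmd` calls traced above configure the target for ADQ before framework initialization completes (possible because `nvmf_tgt` was started with `--wait-for-rpc`). As a standalone sketch, using `scripts/rpc.py` as the RPC client (an assumption; the log uses the autotest `rpc_cmd` wrapper) -- this is a config fragment that requires a running target:

```shell
# Target-side RPC sequence from this log.
RPC=./scripts/rpc.py   # assumed path to the SPDK RPC client

$RPC sock_impl_set_options --enable-placement-id 1 \
     --enable-zerocopy-send-server -i posix      # placement-id 1 enables ADQ-aware qpair grouping
$RPC framework_start_init                        # finish deferred framework init
$RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
$RPC bdev_malloc_create 64 512 -b Malloc1        # 64 MiB ram disk, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```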
00:24:51.253 EAL: No free 2048 kB hugepages reported on node 1 00:24:53.833 05:20:30 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:24:53.833 05:20:30 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:24:53.833 05:20:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.833 05:20:30 -- target/perf_adq.sh@97 -- # wc -l 00:24:53.833 05:20:30 -- common/autotest_common.sh@10 -- # set +x 00:24:53.833 05:20:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.833 05:20:30 -- target/perf_adq.sh@97 -- # count=2 00:24:53.833 05:20:30 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:24:53.833 05:20:30 -- target/perf_adq.sh@103 -- # wait 1950765 00:25:01.938 Initializing NVMe Controllers 00:25:01.938 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:01.938 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:25:01.938 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:25:01.938 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:25:01.938 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:25:01.938 Initialization complete. Launching workers. 
00:25:01.938 ======================================================== 00:25:01.938 Latency(us) 00:25:01.938 Device Information : IOPS MiB/s Average min max 00:25:01.938 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4755.50 18.58 13461.11 1670.72 63221.16 00:25:01.938 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13302.20 51.96 4811.97 1397.10 7235.15 00:25:01.938 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4413.60 17.24 14506.14 1741.06 59594.62 00:25:01.938 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4193.00 16.38 15270.13 2731.99 61052.14 00:25:01.938 ======================================================== 00:25:01.938 Total : 26664.29 104.16 9603.70 1397.10 63221.16 00:25:01.939 00:25:01.939 05:20:38 -- target/perf_adq.sh@104 -- # nvmftestfini 00:25:01.939 05:20:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:01.939 05:20:38 -- nvmf/common.sh@117 -- # sync 00:25:01.939 05:20:38 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:01.939 05:20:38 -- nvmf/common.sh@120 -- # set +e 00:25:01.939 05:20:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:01.939 05:20:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:01.939 rmmod nvme_tcp 00:25:01.939 rmmod nvme_fabrics 00:25:01.939 rmmod nvme_keyring 00:25:01.939 05:20:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:01.939 05:20:38 -- nvmf/common.sh@124 -- # set -e 00:25:01.939 05:20:38 -- nvmf/common.sh@125 -- # return 0 00:25:01.939 05:20:38 -- nvmf/common.sh@478 -- # '[' -n 1950624 ']' 00:25:01.939 05:20:38 -- nvmf/common.sh@479 -- # killprocess 1950624 00:25:01.939 05:20:38 -- common/autotest_common.sh@936 -- # '[' -z 1950624 ']' 00:25:01.939 05:20:38 -- common/autotest_common.sh@940 -- # kill -0 1950624 00:25:01.939 05:20:38 -- common/autotest_common.sh@941 -- # uname 00:25:01.939 05:20:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:01.939 05:20:38 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1950624 00:25:01.939 05:20:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:01.939 05:20:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:01.939 05:20:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1950624' 00:25:01.939 killing process with pid 1950624 00:25:01.939 05:20:38 -- common/autotest_common.sh@955 -- # kill 1950624 00:25:01.939 05:20:38 -- common/autotest_common.sh@960 -- # wait 1950624 00:25:01.939 05:20:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:01.939 05:20:38 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:01.939 05:20:38 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:01.939 05:20:38 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:01.939 05:20:38 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:01.939 05:20:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:01.939 05:20:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:01.939 05:20:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.842 05:20:41 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:03.842 05:20:41 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:25:03.842 00:25:03.842 real 0m43.594s 00:25:03.842 user 2m39.939s 00:25:03.842 sys 0m9.016s 00:25:03.842 05:20:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:03.842 05:20:41 -- common/autotest_common.sh@10 -- # set +x 00:25:03.842 ************************************ 00:25:03.842 END TEST nvmf_perf_adq 00:25:03.842 ************************************ 00:25:03.842 05:20:41 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:25:03.842 05:20:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:03.842 05:20:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:25:03.842 05:20:41 -- common/autotest_common.sh@10 -- # set +x 00:25:04.100 ************************************ 00:25:04.100 START TEST nvmf_shutdown 00:25:04.100 ************************************ 00:25:04.100 05:20:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:25:04.100 * Looking for test storage... 00:25:04.100 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:04.100 05:20:41 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:04.100 05:20:41 -- nvmf/common.sh@7 -- # uname -s 00:25:04.100 05:20:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:04.100 05:20:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:04.100 05:20:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:04.100 05:20:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:04.100 05:20:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:04.100 05:20:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:04.100 05:20:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:04.100 05:20:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:04.100 05:20:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:04.100 05:20:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:04.100 05:20:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:04.100 05:20:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:04.100 05:20:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:04.100 05:20:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:04.100 05:20:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:04.100 05:20:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:04.100 05:20:41 -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:04.100 05:20:41 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:04.100 05:20:41 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:04.100 05:20:41 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:04.100 05:20:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.100 05:20:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.100 05:20:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.100 05:20:41 -- paths/export.sh@5 -- # export PATH 00:25:04.100 05:20:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.100 05:20:41 -- nvmf/common.sh@47 -- # : 0 00:25:04.100 05:20:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:04.100 05:20:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:04.100 05:20:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:04.100 05:20:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:04.100 05:20:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:04.100 05:20:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:04.100 05:20:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:04.100 05:20:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:04.100 05:20:41 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:04.100 05:20:41 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:04.101 05:20:41 -- target/shutdown.sh@147 
-- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:25:04.101 05:20:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:04.101 05:20:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:04.101 05:20:41 -- common/autotest_common.sh@10 -- # set +x 00:25:04.101 ************************************ 00:25:04.101 START TEST nvmf_shutdown_tc1 00:25:04.101 ************************************ 00:25:04.101 05:20:41 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc1 00:25:04.101 05:20:41 -- target/shutdown.sh@74 -- # starttarget 00:25:04.101 05:20:41 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:04.101 05:20:41 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:04.101 05:20:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:04.101 05:20:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:04.101 05:20:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:04.101 05:20:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:04.101 05:20:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:04.101 05:20:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:04.101 05:20:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:04.101 05:20:41 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:04.101 05:20:41 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:04.101 05:20:41 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:04.101 05:20:41 -- common/autotest_common.sh@10 -- # set +x 00:25:06.003 05:20:43 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:06.003 05:20:43 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:06.003 05:20:43 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:06.003 05:20:43 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:06.003 05:20:43 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:06.003 05:20:43 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:06.003 05:20:43 -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:25:06.003 05:20:43 -- nvmf/common.sh@295 -- # net_devs=() 00:25:06.003 05:20:43 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:06.003 05:20:43 -- nvmf/common.sh@296 -- # e810=() 00:25:06.003 05:20:43 -- nvmf/common.sh@296 -- # local -ga e810 00:25:06.003 05:20:43 -- nvmf/common.sh@297 -- # x722=() 00:25:06.003 05:20:43 -- nvmf/common.sh@297 -- # local -ga x722 00:25:06.003 05:20:43 -- nvmf/common.sh@298 -- # mlx=() 00:25:06.003 05:20:43 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:06.003 05:20:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:06.003 05:20:43 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:06.003 05:20:43 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:06.003 05:20:43 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:06.003 05:20:43 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:06.003 05:20:43 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:06.003 05:20:43 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:06.003 05:20:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:06.003 05:20:43 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:06.003 05:20:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:06.003 05:20:43 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:06.003 05:20:43 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:06.003 05:20:43 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:06.003 05:20:43 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:06.003 05:20:43 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:06.003 05:20:43 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:06.003 05:20:43 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:06.003 05:20:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
00:25:06.003 05:20:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:06.003 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:06.003 05:20:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:06.003 05:20:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:06.003 05:20:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:06.003 05:20:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:06.003 05:20:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:06.003 05:20:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:06.003 05:20:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:06.003 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:06.003 05:20:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:06.003 05:20:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:06.003 05:20:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:06.003 05:20:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:06.003 05:20:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:06.003 05:20:43 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:06.003 05:20:43 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:06.003 05:20:43 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:06.003 05:20:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:06.003 05:20:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:06.003 05:20:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:06.003 05:20:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:06.003 05:20:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:06.003 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:06.003 05:20:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:06.003 05:20:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:06.003 05:20:43 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:06.003 05:20:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:06.003 05:20:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:06.003 05:20:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:06.003 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:06.003 05:20:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:06.003 05:20:43 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:06.003 05:20:43 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:06.003 05:20:43 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:06.003 05:20:43 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:06.003 05:20:43 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:06.003 05:20:43 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:06.003 05:20:43 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:06.003 05:20:43 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:06.003 05:20:43 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:06.003 05:20:43 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:06.003 05:20:43 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:06.003 05:20:43 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:06.003 05:20:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:06.003 05:20:43 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:06.003 05:20:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:06.003 05:20:43 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:06.003 05:20:43 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:06.003 05:20:43 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:06.262 05:20:43 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:06.262 05:20:43 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:25:06.262 05:20:43 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:06.262 05:20:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:06.262 05:20:43 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:06.262 05:20:43 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:06.262 05:20:43 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:06.262 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:06.262 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:25:06.262 00:25:06.262 --- 10.0.0.2 ping statistics --- 00:25:06.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:06.262 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:25:06.262 05:20:43 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:06.262 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:06.262 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:25:06.262 00:25:06.262 --- 10.0.0.1 ping statistics --- 00:25:06.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:06.262 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:25:06.262 05:20:43 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:06.262 05:20:43 -- nvmf/common.sh@411 -- # return 0 00:25:06.262 05:20:43 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:06.262 05:20:43 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:06.262 05:20:43 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:06.262 05:20:43 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:06.262 05:20:43 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:06.262 05:20:43 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:06.262 05:20:43 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:06.262 05:20:43 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:25:06.262 05:20:43 -- nvmf/common.sh@468 -- # 
timing_enter start_nvmf_tgt 00:25:06.262 05:20:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:06.262 05:20:43 -- common/autotest_common.sh@10 -- # set +x 00:25:06.262 05:20:43 -- nvmf/common.sh@470 -- # nvmfpid=1953939 00:25:06.262 05:20:43 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:06.262 05:20:43 -- nvmf/common.sh@471 -- # waitforlisten 1953939 00:25:06.262 05:20:43 -- common/autotest_common.sh@817 -- # '[' -z 1953939 ']' 00:25:06.262 05:20:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:06.262 05:20:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:06.262 05:20:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:06.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:06.262 05:20:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:06.262 05:20:43 -- common/autotest_common.sh@10 -- # set +x 00:25:06.262 [2024-04-24 05:20:43.449388] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:25:06.262 [2024-04-24 05:20:43.449468] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:06.262 EAL: No free 2048 kB hugepages reported on node 1 00:25:06.262 [2024-04-24 05:20:43.486495] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:06.262 [2024-04-24 05:20:43.518587] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:06.520 [2024-04-24 05:20:43.608378] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:06.521 [2024-04-24 05:20:43.608440] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:06.521 [2024-04-24 05:20:43.608457] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:06.521 [2024-04-24 05:20:43.608471] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:06.521 [2024-04-24 05:20:43.608483] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:06.521 [2024-04-24 05:20:43.608571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:06.521 [2024-04-24 05:20:43.608749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:06.521 [2024-04-24 05:20:43.608820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:06.521 [2024-04-24 05:20:43.608823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:06.521 05:20:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:06.521 05:20:43 -- common/autotest_common.sh@850 -- # return 0 00:25:06.521 05:20:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:06.521 05:20:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:06.521 05:20:43 -- common/autotest_common.sh@10 -- # set +x 00:25:06.521 05:20:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:06.521 05:20:43 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:06.521 05:20:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.521 05:20:43 -- common/autotest_common.sh@10 -- # set +x 00:25:06.521 [2024-04-24 05:20:43.766500] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:06.521 05:20:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.521 05:20:43 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:06.521 05:20:43 -- 
target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:06.521 05:20:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:06.521 05:20:43 -- common/autotest_common.sh@10 -- # set +x 00:25:06.521 05:20:43 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:06.521 05:20:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:06.521 05:20:43 -- target/shutdown.sh@28 -- # cat 00:25:06.521 05:20:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:06.521 05:20:43 -- target/shutdown.sh@28 -- # cat 00:25:06.521 05:20:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:06.521 05:20:43 -- target/shutdown.sh@28 -- # cat 00:25:06.521 05:20:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:06.521 05:20:43 -- target/shutdown.sh@28 -- # cat 00:25:06.779 05:20:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:06.779 05:20:43 -- target/shutdown.sh@28 -- # cat 00:25:06.779 05:20:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:06.779 05:20:43 -- target/shutdown.sh@28 -- # cat 00:25:06.779 05:20:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:06.779 05:20:43 -- target/shutdown.sh@28 -- # cat 00:25:06.779 05:20:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:06.779 05:20:43 -- target/shutdown.sh@28 -- # cat 00:25:06.779 05:20:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:06.779 05:20:43 -- target/shutdown.sh@28 -- # cat 00:25:06.779 05:20:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:06.779 05:20:43 -- target/shutdown.sh@28 -- # cat 00:25:06.779 05:20:43 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:06.779 05:20:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.779 05:20:43 -- common/autotest_common.sh@10 -- # set +x 00:25:06.779 Malloc1 00:25:06.779 [2024-04-24 05:20:43.856224] 
tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:06.779 Malloc2 00:25:06.779 Malloc3 00:25:06.779 Malloc4 00:25:06.779 Malloc5 00:25:07.037 Malloc6 00:25:07.037 Malloc7 00:25:07.037 Malloc8 00:25:07.037 Malloc9 00:25:07.037 Malloc10 00:25:07.037 05:20:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:07.037 05:20:44 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:25:07.037 05:20:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:07.037 05:20:44 -- common/autotest_common.sh@10 -- # set +x 00:25:07.296 05:20:44 -- target/shutdown.sh@78 -- # perfpid=1954119 00:25:07.296 05:20:44 -- target/shutdown.sh@79 -- # waitforlisten 1954119 /var/tmp/bdevperf.sock 00:25:07.296 05:20:44 -- common/autotest_common.sh@817 -- # '[' -z 1954119 ']' 00:25:07.296 05:20:44 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:25:07.296 05:20:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:07.296 05:20:44 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:07.296 05:20:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:07.296 05:20:44 -- nvmf/common.sh@521 -- # config=() 00:25:07.296 05:20:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:07.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:25:07.296 05:20:44 -- nvmf/common.sh@521 -- # local subsystem config 00:25:07.296 05:20:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:07.296 05:20:44 -- common/autotest_common.sh@10 -- # set +x 00:25:07.296 05:20:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:07.296 05:20:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:07.296 { 00:25:07.296 "params": { 00:25:07.296 "name": "Nvme$subsystem", 00:25:07.296 "trtype": "$TEST_TRANSPORT", 00:25:07.296 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.296 "adrfam": "ipv4", 00:25:07.296 "trsvcid": "$NVMF_PORT", 00:25:07.296 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.296 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.296 "hdgst": ${hdgst:-false}, 00:25:07.296 "ddgst": ${ddgst:-false} 00:25:07.296 }, 00:25:07.296 "method": "bdev_nvme_attach_controller" 00:25:07.296 } 00:25:07.296 EOF 00:25:07.296 )") 00:25:07.296 05:20:44 -- nvmf/common.sh@543 -- # cat 00:25:07.296 05:20:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:07.296 05:20:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:07.296 { 00:25:07.296 "params": { 00:25:07.296 "name": "Nvme$subsystem", 00:25:07.296 "trtype": "$TEST_TRANSPORT", 00:25:07.296 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.296 "adrfam": "ipv4", 00:25:07.296 "trsvcid": "$NVMF_PORT", 00:25:07.296 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.296 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.296 "hdgst": ${hdgst:-false}, 00:25:07.296 "ddgst": ${ddgst:-false} 00:25:07.296 }, 00:25:07.296 "method": "bdev_nvme_attach_controller" 00:25:07.296 } 00:25:07.296 EOF 00:25:07.296 )") 00:25:07.296 05:20:44 -- nvmf/common.sh@543 -- # cat 00:25:07.296 05:20:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:07.297 05:20:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:07.297 { 00:25:07.297 "params": { 00:25:07.297 "name": "Nvme$subsystem", 00:25:07.297 "trtype": "$TEST_TRANSPORT", 
00:25:07.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.297 "adrfam": "ipv4", 00:25:07.297 "trsvcid": "$NVMF_PORT", 00:25:07.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.297 "hdgst": ${hdgst:-false}, 00:25:07.297 "ddgst": ${ddgst:-false} 00:25:07.297 }, 00:25:07.297 "method": "bdev_nvme_attach_controller" 00:25:07.297 } 00:25:07.297 EOF 00:25:07.297 )") 00:25:07.297 05:20:44 -- nvmf/common.sh@543 -- # cat 00:25:07.297 05:20:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:07.297 05:20:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:07.297 { 00:25:07.297 "params": { 00:25:07.297 "name": "Nvme$subsystem", 00:25:07.297 "trtype": "$TEST_TRANSPORT", 00:25:07.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.297 "adrfam": "ipv4", 00:25:07.297 "trsvcid": "$NVMF_PORT", 00:25:07.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.297 "hdgst": ${hdgst:-false}, 00:25:07.297 "ddgst": ${ddgst:-false} 00:25:07.297 }, 00:25:07.297 "method": "bdev_nvme_attach_controller" 00:25:07.297 } 00:25:07.297 EOF 00:25:07.297 )") 00:25:07.297 05:20:44 -- nvmf/common.sh@543 -- # cat 00:25:07.297 05:20:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:07.297 05:20:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:07.297 { 00:25:07.297 "params": { 00:25:07.297 "name": "Nvme$subsystem", 00:25:07.297 "trtype": "$TEST_TRANSPORT", 00:25:07.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.297 "adrfam": "ipv4", 00:25:07.297 "trsvcid": "$NVMF_PORT", 00:25:07.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.297 "hdgst": ${hdgst:-false}, 00:25:07.297 "ddgst": ${ddgst:-false} 00:25:07.297 }, 00:25:07.297 "method": "bdev_nvme_attach_controller" 00:25:07.297 } 00:25:07.297 EOF 00:25:07.297 )") 00:25:07.297 05:20:44 -- nvmf/common.sh@543 -- 
# cat 00:25:07.297 05:20:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:07.297 05:20:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:07.297 { 00:25:07.297 "params": { 00:25:07.297 "name": "Nvme$subsystem", 00:25:07.297 "trtype": "$TEST_TRANSPORT", 00:25:07.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.297 "adrfam": "ipv4", 00:25:07.297 "trsvcid": "$NVMF_PORT", 00:25:07.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.297 "hdgst": ${hdgst:-false}, 00:25:07.297 "ddgst": ${ddgst:-false} 00:25:07.297 }, 00:25:07.297 "method": "bdev_nvme_attach_controller" 00:25:07.297 } 00:25:07.297 EOF 00:25:07.297 )") 00:25:07.297 05:20:44 -- nvmf/common.sh@543 -- # cat 00:25:07.297 05:20:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:07.297 05:20:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:07.297 { 00:25:07.297 "params": { 00:25:07.297 "name": "Nvme$subsystem", 00:25:07.297 "trtype": "$TEST_TRANSPORT", 00:25:07.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.297 "adrfam": "ipv4", 00:25:07.297 "trsvcid": "$NVMF_PORT", 00:25:07.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.297 "hdgst": ${hdgst:-false}, 00:25:07.297 "ddgst": ${ddgst:-false} 00:25:07.297 }, 00:25:07.297 "method": "bdev_nvme_attach_controller" 00:25:07.297 } 00:25:07.297 EOF 00:25:07.297 )") 00:25:07.297 05:20:44 -- nvmf/common.sh@543 -- # cat 00:25:07.297 05:20:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:07.297 05:20:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:07.297 { 00:25:07.297 "params": { 00:25:07.297 "name": "Nvme$subsystem", 00:25:07.297 "trtype": "$TEST_TRANSPORT", 00:25:07.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.297 "adrfam": "ipv4", 00:25:07.297 "trsvcid": "$NVMF_PORT", 00:25:07.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.297 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:25:07.297 "hdgst": ${hdgst:-false}, 00:25:07.297 "ddgst": ${ddgst:-false} 00:25:07.297 }, 00:25:07.297 "method": "bdev_nvme_attach_controller" 00:25:07.297 } 00:25:07.297 EOF 00:25:07.297 )") 00:25:07.297 05:20:44 -- nvmf/common.sh@543 -- # cat 00:25:07.297 05:20:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:07.297 05:20:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:07.297 { 00:25:07.297 "params": { 00:25:07.297 "name": "Nvme$subsystem", 00:25:07.297 "trtype": "$TEST_TRANSPORT", 00:25:07.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.297 "adrfam": "ipv4", 00:25:07.297 "trsvcid": "$NVMF_PORT", 00:25:07.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.297 "hdgst": ${hdgst:-false}, 00:25:07.297 "ddgst": ${ddgst:-false} 00:25:07.297 }, 00:25:07.297 "method": "bdev_nvme_attach_controller" 00:25:07.297 } 00:25:07.297 EOF 00:25:07.297 )") 00:25:07.297 05:20:44 -- nvmf/common.sh@543 -- # cat 00:25:07.297 05:20:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:07.297 05:20:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:07.297 { 00:25:07.297 "params": { 00:25:07.297 "name": "Nvme$subsystem", 00:25:07.297 "trtype": "$TEST_TRANSPORT", 00:25:07.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.297 "adrfam": "ipv4", 00:25:07.297 "trsvcid": "$NVMF_PORT", 00:25:07.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.297 "hdgst": ${hdgst:-false}, 00:25:07.297 "ddgst": ${ddgst:-false} 00:25:07.297 }, 00:25:07.297 "method": "bdev_nvme_attach_controller" 00:25:07.297 } 00:25:07.297 EOF 00:25:07.297 )") 00:25:07.297 05:20:44 -- nvmf/common.sh@543 -- # cat 00:25:07.297 05:20:44 -- nvmf/common.sh@545 -- # jq . 
00:25:07.297 05:20:44 -- nvmf/common.sh@546 -- # IFS=, 00:25:07.297 05:20:44 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:25:07.297 "params": { 00:25:07.297 "name": "Nvme1", 00:25:07.297 "trtype": "tcp", 00:25:07.297 "traddr": "10.0.0.2", 00:25:07.297 "adrfam": "ipv4", 00:25:07.297 "trsvcid": "4420", 00:25:07.297 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:07.297 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:07.297 "hdgst": false, 00:25:07.297 "ddgst": false 00:25:07.297 }, 00:25:07.297 "method": "bdev_nvme_attach_controller" 00:25:07.297 },{ 00:25:07.297 "params": { 00:25:07.297 "name": "Nvme2", 00:25:07.297 "trtype": "tcp", 00:25:07.297 "traddr": "10.0.0.2", 00:25:07.297 "adrfam": "ipv4", 00:25:07.297 "trsvcid": "4420", 00:25:07.297 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:07.297 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:07.297 "hdgst": false, 00:25:07.297 "ddgst": false 00:25:07.297 }, 00:25:07.297 "method": "bdev_nvme_attach_controller" 00:25:07.297 },{ 00:25:07.297 "params": { 00:25:07.297 "name": "Nvme3", 00:25:07.297 "trtype": "tcp", 00:25:07.297 "traddr": "10.0.0.2", 00:25:07.297 "adrfam": "ipv4", 00:25:07.297 "trsvcid": "4420", 00:25:07.297 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:07.297 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:07.297 "hdgst": false, 00:25:07.297 "ddgst": false 00:25:07.297 }, 00:25:07.297 "method": "bdev_nvme_attach_controller" 00:25:07.297 },{ 00:25:07.297 "params": { 00:25:07.297 "name": "Nvme4", 00:25:07.297 "trtype": "tcp", 00:25:07.297 "traddr": "10.0.0.2", 00:25:07.298 "adrfam": "ipv4", 00:25:07.298 "trsvcid": "4420", 00:25:07.298 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:07.298 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:07.298 "hdgst": false, 00:25:07.298 "ddgst": false 00:25:07.298 }, 00:25:07.298 "method": "bdev_nvme_attach_controller" 00:25:07.298 },{ 00:25:07.298 "params": { 00:25:07.298 "name": "Nvme5", 00:25:07.298 "trtype": "tcp", 00:25:07.298 "traddr": "10.0.0.2", 00:25:07.298 "adrfam": "ipv4", 
00:25:07.298 "trsvcid": "4420", 00:25:07.298 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:07.298 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:07.298 "hdgst": false, 00:25:07.298 "ddgst": false 00:25:07.298 }, 00:25:07.298 "method": "bdev_nvme_attach_controller" 00:25:07.298 },{ 00:25:07.298 "params": { 00:25:07.298 "name": "Nvme6", 00:25:07.298 "trtype": "tcp", 00:25:07.298 "traddr": "10.0.0.2", 00:25:07.298 "adrfam": "ipv4", 00:25:07.298 "trsvcid": "4420", 00:25:07.298 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:07.298 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:07.298 "hdgst": false, 00:25:07.298 "ddgst": false 00:25:07.298 }, 00:25:07.298 "method": "bdev_nvme_attach_controller" 00:25:07.298 },{ 00:25:07.298 "params": { 00:25:07.298 "name": "Nvme7", 00:25:07.298 "trtype": "tcp", 00:25:07.298 "traddr": "10.0.0.2", 00:25:07.298 "adrfam": "ipv4", 00:25:07.298 "trsvcid": "4420", 00:25:07.298 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:07.298 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:07.298 "hdgst": false, 00:25:07.298 "ddgst": false 00:25:07.298 }, 00:25:07.298 "method": "bdev_nvme_attach_controller" 00:25:07.298 },{ 00:25:07.298 "params": { 00:25:07.298 "name": "Nvme8", 00:25:07.298 "trtype": "tcp", 00:25:07.298 "traddr": "10.0.0.2", 00:25:07.298 "adrfam": "ipv4", 00:25:07.298 "trsvcid": "4420", 00:25:07.298 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:07.298 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:07.298 "hdgst": false, 00:25:07.298 "ddgst": false 00:25:07.298 }, 00:25:07.298 "method": "bdev_nvme_attach_controller" 00:25:07.298 },{ 00:25:07.298 "params": { 00:25:07.298 "name": "Nvme9", 00:25:07.298 "trtype": "tcp", 00:25:07.298 "traddr": "10.0.0.2", 00:25:07.298 "adrfam": "ipv4", 00:25:07.298 "trsvcid": "4420", 00:25:07.298 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:07.298 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:07.298 "hdgst": false, 00:25:07.298 "ddgst": false 00:25:07.298 }, 00:25:07.298 "method": "bdev_nvme_attach_controller" 
00:25:07.298 },{ 00:25:07.298 "params": { 00:25:07.298 "name": "Nvme10", 00:25:07.298 "trtype": "tcp", 00:25:07.298 "traddr": "10.0.0.2", 00:25:07.298 "adrfam": "ipv4", 00:25:07.298 "trsvcid": "4420", 00:25:07.298 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:07.298 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:07.298 "hdgst": false, 00:25:07.298 "ddgst": false 00:25:07.298 }, 00:25:07.298 "method": "bdev_nvme_attach_controller" 00:25:07.298 }' 00:25:07.298 [2024-04-24 05:20:44.357526] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:25:07.298 [2024-04-24 05:20:44.357607] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:25:07.298 EAL: No free 2048 kB hugepages reported on node 1 00:25:07.298 [2024-04-24 05:20:44.393604] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:25:07.298 [2024-04-24 05:20:44.422841] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:07.298 [2024-04-24 05:20:44.507827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.197 05:20:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:09.197 05:20:46 -- common/autotest_common.sh@850 -- # return 0 00:25:09.197 05:20:46 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:09.197 05:20:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:09.197 05:20:46 -- common/autotest_common.sh@10 -- # set +x 00:25:09.197 05:20:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:09.197 05:20:46 -- target/shutdown.sh@83 -- # kill -9 1954119 00:25:09.197 05:20:46 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:25:09.197 05:20:46 -- target/shutdown.sh@87 -- # sleep 1 00:25:10.131 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1954119 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:25:10.131 05:20:47 -- target/shutdown.sh@88 -- # kill -0 1953939 00:25:10.131 05:20:47 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:25:10.131 05:20:47 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:10.131 05:20:47 -- nvmf/common.sh@521 -- # config=() 00:25:10.131 05:20:47 -- nvmf/common.sh@521 -- # local subsystem config 00:25:10.131 05:20:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:10.131 05:20:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:10.131 { 00:25:10.131 "params": { 00:25:10.131 "name": "Nvme$subsystem", 00:25:10.131 "trtype": "$TEST_TRANSPORT", 00:25:10.131 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:10.131 "adrfam": "ipv4", 00:25:10.131 "trsvcid": "$NVMF_PORT", 00:25:10.131 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:25:10.131 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:10.131 "hdgst": ${hdgst:-false}, 00:25:10.131 "ddgst": ${ddgst:-false} 00:25:10.131 }, 00:25:10.131 "method": "bdev_nvme_attach_controller" 00:25:10.131 } 00:25:10.131 EOF 00:25:10.131 )") 00:25:10.131 05:20:47 -- nvmf/common.sh@543 -- # cat 00:25:10.131 05:20:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:10.131 05:20:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:10.131 { 00:25:10.131 "params": { 00:25:10.131 "name": "Nvme$subsystem", 00:25:10.131 "trtype": "$TEST_TRANSPORT", 00:25:10.131 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:10.131 "adrfam": "ipv4", 00:25:10.131 "trsvcid": "$NVMF_PORT", 00:25:10.131 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:10.131 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:10.131 "hdgst": ${hdgst:-false}, 00:25:10.131 "ddgst": ${ddgst:-false} 00:25:10.131 }, 00:25:10.131 "method": "bdev_nvme_attach_controller" 00:25:10.131 } 00:25:10.131 EOF 00:25:10.131 )") 00:25:10.131 05:20:47 -- nvmf/common.sh@543 -- # cat 00:25:10.131 05:20:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:10.131 05:20:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:10.131 { 00:25:10.131 "params": { 00:25:10.131 "name": "Nvme$subsystem", 00:25:10.131 "trtype": "$TEST_TRANSPORT", 00:25:10.131 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:10.131 "adrfam": "ipv4", 00:25:10.131 "trsvcid": "$NVMF_PORT", 00:25:10.131 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:10.131 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:10.131 "hdgst": ${hdgst:-false}, 00:25:10.131 "ddgst": ${ddgst:-false} 00:25:10.131 }, 00:25:10.131 "method": "bdev_nvme_attach_controller" 00:25:10.131 } 00:25:10.131 EOF 00:25:10.131 )") 00:25:10.131 05:20:47 -- nvmf/common.sh@543 -- # cat 00:25:10.131 05:20:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:10.131 05:20:47 -- nvmf/common.sh@543 -- # 
config+=("$(cat <<-EOF 00:25:10.131 { 00:25:10.131 "params": { 00:25:10.131 "name": "Nvme$subsystem", 00:25:10.131 "trtype": "$TEST_TRANSPORT", 00:25:10.131 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:10.131 "adrfam": "ipv4", 00:25:10.131 "trsvcid": "$NVMF_PORT", 00:25:10.131 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:10.131 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:10.131 "hdgst": ${hdgst:-false}, 00:25:10.131 "ddgst": ${ddgst:-false} 00:25:10.131 }, 00:25:10.131 "method": "bdev_nvme_attach_controller" 00:25:10.131 } 00:25:10.131 EOF 00:25:10.131 )") 00:25:10.131 05:20:47 -- nvmf/common.sh@543 -- # cat 00:25:10.131 05:20:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:10.131 05:20:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:10.131 { 00:25:10.131 "params": { 00:25:10.131 "name": "Nvme$subsystem", 00:25:10.131 "trtype": "$TEST_TRANSPORT", 00:25:10.131 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:10.131 "adrfam": "ipv4", 00:25:10.131 "trsvcid": "$NVMF_PORT", 00:25:10.131 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:10.131 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:10.131 "hdgst": ${hdgst:-false}, 00:25:10.131 "ddgst": ${ddgst:-false} 00:25:10.131 }, 00:25:10.131 "method": "bdev_nvme_attach_controller" 00:25:10.131 } 00:25:10.131 EOF 00:25:10.131 )") 00:25:10.131 05:20:47 -- nvmf/common.sh@543 -- # cat 00:25:10.131 05:20:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:10.131 05:20:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:10.131 { 00:25:10.131 "params": { 00:25:10.131 "name": "Nvme$subsystem", 00:25:10.131 "trtype": "$TEST_TRANSPORT", 00:25:10.131 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:10.131 "adrfam": "ipv4", 00:25:10.131 "trsvcid": "$NVMF_PORT", 00:25:10.131 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:10.131 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:10.131 "hdgst": ${hdgst:-false}, 00:25:10.131 "ddgst": ${ddgst:-false} 00:25:10.131 }, 
00:25:10.131 "method": "bdev_nvme_attach_controller" 00:25:10.131 } 00:25:10.131 EOF 00:25:10.131 )") 00:25:10.131 05:20:47 -- nvmf/common.sh@543 -- # cat 00:25:10.131 05:20:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:10.131 05:20:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:10.131 { 00:25:10.131 "params": { 00:25:10.131 "name": "Nvme$subsystem", 00:25:10.131 "trtype": "$TEST_TRANSPORT", 00:25:10.131 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:10.131 "adrfam": "ipv4", 00:25:10.131 "trsvcid": "$NVMF_PORT", 00:25:10.131 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:10.131 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:10.131 "hdgst": ${hdgst:-false}, 00:25:10.131 "ddgst": ${ddgst:-false} 00:25:10.131 }, 00:25:10.131 "method": "bdev_nvme_attach_controller" 00:25:10.131 } 00:25:10.131 EOF 00:25:10.131 )") 00:25:10.131 05:20:47 -- nvmf/common.sh@543 -- # cat 00:25:10.131 05:20:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:10.131 05:20:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:10.131 { 00:25:10.131 "params": { 00:25:10.131 "name": "Nvme$subsystem", 00:25:10.131 "trtype": "$TEST_TRANSPORT", 00:25:10.131 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:10.131 "adrfam": "ipv4", 00:25:10.131 "trsvcid": "$NVMF_PORT", 00:25:10.131 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:10.131 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:10.131 "hdgst": ${hdgst:-false}, 00:25:10.131 "ddgst": ${ddgst:-false} 00:25:10.131 }, 00:25:10.131 "method": "bdev_nvme_attach_controller" 00:25:10.131 } 00:25:10.131 EOF 00:25:10.131 )") 00:25:10.131 05:20:47 -- nvmf/common.sh@543 -- # cat 00:25:10.131 05:20:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:10.131 05:20:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:10.131 { 00:25:10.131 "params": { 00:25:10.131 "name": "Nvme$subsystem", 00:25:10.131 "trtype": "$TEST_TRANSPORT", 00:25:10.131 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:25:10.131 "adrfam": "ipv4", 00:25:10.131 "trsvcid": "$NVMF_PORT", 00:25:10.131 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:10.131 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:10.131 "hdgst": ${hdgst:-false}, 00:25:10.131 "ddgst": ${ddgst:-false} 00:25:10.131 }, 00:25:10.131 "method": "bdev_nvme_attach_controller" 00:25:10.131 } 00:25:10.131 EOF 00:25:10.131 )") 00:25:10.131 05:20:47 -- nvmf/common.sh@543 -- # cat 00:25:10.131 05:20:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:10.131 05:20:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:10.131 { 00:25:10.131 "params": { 00:25:10.131 "name": "Nvme$subsystem", 00:25:10.131 "trtype": "$TEST_TRANSPORT", 00:25:10.131 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:10.131 "adrfam": "ipv4", 00:25:10.132 "trsvcid": "$NVMF_PORT", 00:25:10.132 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:10.132 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:10.132 "hdgst": ${hdgst:-false}, 00:25:10.132 "ddgst": ${ddgst:-false} 00:25:10.132 }, 00:25:10.132 "method": "bdev_nvme_attach_controller" 00:25:10.132 } 00:25:10.132 EOF 00:25:10.132 )") 00:25:10.132 05:20:47 -- nvmf/common.sh@543 -- # cat 00:25:10.132 05:20:47 -- nvmf/common.sh@545 -- # jq . 
00:25:10.132 05:20:47 -- nvmf/common.sh@546 -- # IFS=, 00:25:10.132 05:20:47 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:25:10.132 "params": { 00:25:10.132 "name": "Nvme1", 00:25:10.132 "trtype": "tcp", 00:25:10.132 "traddr": "10.0.0.2", 00:25:10.132 "adrfam": "ipv4", 00:25:10.132 "trsvcid": "4420", 00:25:10.132 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:10.132 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:10.132 "hdgst": false, 00:25:10.132 "ddgst": false 00:25:10.132 }, 00:25:10.132 "method": "bdev_nvme_attach_controller" 00:25:10.132 },{ 00:25:10.132 "params": { 00:25:10.132 "name": "Nvme2", 00:25:10.132 "trtype": "tcp", 00:25:10.132 "traddr": "10.0.0.2", 00:25:10.132 "adrfam": "ipv4", 00:25:10.132 "trsvcid": "4420", 00:25:10.132 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:10.132 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:10.132 "hdgst": false, 00:25:10.132 "ddgst": false 00:25:10.132 }, 00:25:10.132 "method": "bdev_nvme_attach_controller" 00:25:10.132 },{ 00:25:10.132 "params": { 00:25:10.132 "name": "Nvme3", 00:25:10.132 "trtype": "tcp", 00:25:10.132 "traddr": "10.0.0.2", 00:25:10.132 "adrfam": "ipv4", 00:25:10.132 "trsvcid": "4420", 00:25:10.132 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:10.132 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:10.132 "hdgst": false, 00:25:10.132 "ddgst": false 00:25:10.132 }, 00:25:10.132 "method": "bdev_nvme_attach_controller" 00:25:10.132 },{ 00:25:10.132 "params": { 00:25:10.132 "name": "Nvme4", 00:25:10.132 "trtype": "tcp", 00:25:10.132 "traddr": "10.0.0.2", 00:25:10.132 "adrfam": "ipv4", 00:25:10.132 "trsvcid": "4420", 00:25:10.132 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:10.132 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:10.132 "hdgst": false, 00:25:10.132 "ddgst": false 00:25:10.132 }, 00:25:10.132 "method": "bdev_nvme_attach_controller" 00:25:10.132 },{ 00:25:10.132 "params": { 00:25:10.132 "name": "Nvme5", 00:25:10.132 "trtype": "tcp", 00:25:10.132 "traddr": "10.0.0.2", 00:25:10.132 "adrfam": "ipv4", 
00:25:10.132 "trsvcid": "4420", 00:25:10.132 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:10.132 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:10.132 "hdgst": false, 00:25:10.132 "ddgst": false 00:25:10.132 }, 00:25:10.132 "method": "bdev_nvme_attach_controller" 00:25:10.132 },{ 00:25:10.132 "params": { 00:25:10.132 "name": "Nvme6", 00:25:10.132 "trtype": "tcp", 00:25:10.132 "traddr": "10.0.0.2", 00:25:10.132 "adrfam": "ipv4", 00:25:10.132 "trsvcid": "4420", 00:25:10.132 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:10.132 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:10.132 "hdgst": false, 00:25:10.132 "ddgst": false 00:25:10.132 }, 00:25:10.132 "method": "bdev_nvme_attach_controller" 00:25:10.132 },{ 00:25:10.132 "params": { 00:25:10.132 "name": "Nvme7", 00:25:10.132 "trtype": "tcp", 00:25:10.132 "traddr": "10.0.0.2", 00:25:10.132 "adrfam": "ipv4", 00:25:10.132 "trsvcid": "4420", 00:25:10.132 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:10.132 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:10.132 "hdgst": false, 00:25:10.132 "ddgst": false 00:25:10.132 }, 00:25:10.132 "method": "bdev_nvme_attach_controller" 00:25:10.132 },{ 00:25:10.132 "params": { 00:25:10.132 "name": "Nvme8", 00:25:10.132 "trtype": "tcp", 00:25:10.132 "traddr": "10.0.0.2", 00:25:10.132 "adrfam": "ipv4", 00:25:10.132 "trsvcid": "4420", 00:25:10.132 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:10.132 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:10.132 "hdgst": false, 00:25:10.132 "ddgst": false 00:25:10.132 }, 00:25:10.132 "method": "bdev_nvme_attach_controller" 00:25:10.132 },{ 00:25:10.132 "params": { 00:25:10.132 "name": "Nvme9", 00:25:10.132 "trtype": "tcp", 00:25:10.132 "traddr": "10.0.0.2", 00:25:10.132 "adrfam": "ipv4", 00:25:10.132 "trsvcid": "4420", 00:25:10.132 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:10.132 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:10.132 "hdgst": false, 00:25:10.132 "ddgst": false 00:25:10.132 }, 00:25:10.132 "method": "bdev_nvme_attach_controller" 
00:25:10.132 },{ 00:25:10.132 "params": { 00:25:10.132 "name": "Nvme10", 00:25:10.132 "trtype": "tcp", 00:25:10.132 "traddr": "10.0.0.2", 00:25:10.132 "adrfam": "ipv4", 00:25:10.132 "trsvcid": "4420", 00:25:10.132 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:10.132 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:10.132 "hdgst": false, 00:25:10.132 "ddgst": false 00:25:10.132 }, 00:25:10.132 "method": "bdev_nvme_attach_controller" 00:25:10.132 }' 00:25:10.132 [2024-04-24 05:20:47.369028] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:25:10.132 [2024-04-24 05:20:47.369122] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1954417 ] 00:25:10.390 EAL: No free 2048 kB hugepages reported on node 1 00:25:10.390 [2024-04-24 05:20:47.408415] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:10.390 [2024-04-24 05:20:47.437404] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.390 [2024-04-24 05:20:47.524807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:12.291 Running I/O for 1 seconds... 
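The joined JSON above is produced by the helper traced at nvmf/common.sh@521-547: one heredoc fragment per subsystem is appended to a bash array, then the fragments are joined with `IFS=,` and printed for bdevperf. A minimal, self-contained sketch of that shell pattern (values mirror this run; this is an illustration, not SPDK's actual `gen_nvmf_target_json`):

```shell
#!/usr/bin/env bash
# Sketch of the config-array pattern from the trace above. Variable values
# mirror this run (tcp / 10.0.0.2 / 4420); three subsystems instead of ten.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2 3; do
    # One JSON fragment per subsystem, one array element each.
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done

# Join the fragments with commas, as the 'IFS=, ... printf' step in the
# trace does, yielding the "},{"-separated blocks seen in the log.
IFS=,
printf '%s\n' "${config[*]}"
```

In the test itself this output is fed to bdevperf through `--json`, as the `Killed ... --json <(gen_nvmf_target_json "${num_subsystems[@]}")` line from shutdown.sh shows.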
00:25:13.231 
00:25:13.231 Latency(us) 
00:25:13.231 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:25:13.231 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:25:13.231 Verification LBA range: start 0x0 length 0x400 
00:25:13.231 Nvme1n1 : 1.15 222.68 13.92 0.00 0.00 284488.06 23010.42 253211.69 
00:25:13.231 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:25:13.231 Verification LBA range: start 0x0 length 0x400 
00:25:13.231 Nvme2n1 : 1.09 238.56 14.91 0.00 0.00 258091.04 10971.21 253211.69 
00:25:13.231 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:25:13.231 Verification LBA range: start 0x0 length 0x400 
00:25:13.231 Nvme3n1 : 1.09 240.53 15.03 0.00 0.00 252103.95 5922.51 256318.58 
00:25:13.231 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:25:13.231 Verification LBA range: start 0x0 length 0x400 
00:25:13.231 Nvme4n1 : 1.12 228.37 14.27 0.00 0.00 263768.56 19709.35 257872.02 
00:25:13.231 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:25:13.231 Verification LBA range: start 0x0 length 0x400 
00:25:13.231 Nvme5n1 : 1.14 223.64 13.98 0.00 0.00 265099.57 22524.97 268746.15 
00:25:13.231 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:25:13.231 Verification LBA range: start 0x0 length 0x400 
00:25:13.231 Nvme6n1 : 1.14 225.38 14.09 0.00 0.00 258397.11 19029.71 254765.13 
00:25:13.231 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:25:13.231 Verification LBA range: start 0x0 length 0x400 
00:25:13.231 Nvme7n1 : 1.19 269.62 16.85 0.00 0.00 213267.49 16796.63 256318.58 
00:25:13.231 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:25:13.231 Verification LBA range: start 0x0 length 0x400 
00:25:13.231 Nvme8n1 : 1.15 222.06 13.88 0.00 0.00 253733.55 23398.78 257872.02 
00:25:13.231 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:25:13.231 Verification LBA range: start 0x0 length 0x400 
00:25:13.231 Nvme9n1 : 1.18 220.06 13.75 0.00 0.00 252283.21 2621.44 285834.05 
00:25:13.231 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:25:13.231 Verification LBA range: start 0x0 length 0x400 
00:25:13.232 Nvme10n1 : 1.20 267.28 16.71 0.00 0.00 204688.12 9466.31 267192.70 
00:25:13.232 =================================================================================================================== 
00:25:13.232 Total : 2358.18 147.39 0.00 0.00 248641.16 2621.44 285834.05 
00:25:13.489 05:20:50 -- target/shutdown.sh@94 -- # stoptarget 
00:25:13.489 05:20:50 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 
00:25:13.489 05:20:50 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:25:13.489 05:20:50 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 
00:25:13.489 05:20:50 -- target/shutdown.sh@45 -- # nvmftestfini 
00:25:13.489 05:20:50 -- nvmf/common.sh@477 -- # nvmfcleanup 
00:25:13.489 05:20:50 -- nvmf/common.sh@117 -- # sync 
00:25:13.489 05:20:50 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 
00:25:13.489 05:20:50 -- nvmf/common.sh@120 -- # set +e 
00:25:13.489 05:20:50 -- nvmf/common.sh@121 -- # for i in {1..20} 
00:25:13.489 05:20:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:25:13.489 rmmod nvme_tcp 
00:25:13.489 rmmod nvme_fabrics 
00:25:13.489 rmmod nvme_keyring 
00:25:13.489 05:20:50 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 
00:25:13.489 05:20:50 -- nvmf/common.sh@124 -- # set -e 
00:25:13.489 05:20:50 -- nvmf/common.sh@125 -- # return 0 
00:25:13.489 05:20:50 -- nvmf/common.sh@478 -- # '[' -n 1953939 ']' 
00:25:13.489 05:20:50 -- nvmf/common.sh@479 -- # killprocess 1953939 
00:25:13.489 05:20:50 -- common/autotest_common.sh@936 -- # '[' -z 1953939 ']' 
00:25:13.489 05:20:50 -- 
common/autotest_common.sh@940 -- # kill -0 1953939 00:25:13.489 05:20:50 -- common/autotest_common.sh@941 -- # uname 00:25:13.489 05:20:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:13.490 05:20:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1953939 00:25:13.490 05:20:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:13.490 05:20:50 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:13.490 05:20:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1953939' 00:25:13.490 killing process with pid 1953939 00:25:13.490 05:20:50 -- common/autotest_common.sh@955 -- # kill 1953939 00:25:13.490 05:20:50 -- common/autotest_common.sh@960 -- # wait 1953939 00:25:14.055 05:20:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:14.055 05:20:51 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:14.055 05:20:51 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:14.055 05:20:51 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:14.055 05:20:51 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:14.055 05:20:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:14.055 05:20:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:14.055 05:20:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:16.584 05:20:53 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:16.584 00:25:16.584 real 0m11.981s 00:25:16.584 user 0m35.206s 00:25:16.584 sys 0m3.163s 00:25:16.584 05:20:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:16.584 05:20:53 -- common/autotest_common.sh@10 -- # set +x 00:25:16.584 ************************************ 00:25:16.584 END TEST nvmf_shutdown_tc1 00:25:16.584 ************************************ 00:25:16.584 05:20:53 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:25:16.584 05:20:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 
00:25:16.584 05:20:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:16.584 05:20:53 -- common/autotest_common.sh@10 -- # set +x 00:25:16.584 ************************************ 00:25:16.584 START TEST nvmf_shutdown_tc2 00:25:16.584 ************************************ 00:25:16.584 05:20:53 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc2 00:25:16.584 05:20:53 -- target/shutdown.sh@99 -- # starttarget 00:25:16.584 05:20:53 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:16.584 05:20:53 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:16.584 05:20:53 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:16.584 05:20:53 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:16.584 05:20:53 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:16.584 05:20:53 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:16.584 05:20:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:16.584 05:20:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:16.584 05:20:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:16.584 05:20:53 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:16.584 05:20:53 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:16.584 05:20:53 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:16.584 05:20:53 -- common/autotest_common.sh@10 -- # set +x 00:25:16.584 05:20:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:16.584 05:20:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:16.584 05:20:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:16.584 05:20:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:16.584 05:20:53 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:16.584 05:20:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:16.584 05:20:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:16.584 05:20:53 -- nvmf/common.sh@295 -- # net_devs=() 00:25:16.584 05:20:53 -- nvmf/common.sh@295 -- # local -ga net_devs 
00:25:16.584 05:20:53 -- nvmf/common.sh@296 -- # e810=() 00:25:16.584 05:20:53 -- nvmf/common.sh@296 -- # local -ga e810 00:25:16.584 05:20:53 -- nvmf/common.sh@297 -- # x722=() 00:25:16.584 05:20:53 -- nvmf/common.sh@297 -- # local -ga x722 00:25:16.584 05:20:53 -- nvmf/common.sh@298 -- # mlx=() 00:25:16.584 05:20:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:16.584 05:20:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:16.584 05:20:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:16.584 05:20:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:16.584 05:20:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:16.584 05:20:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:16.584 05:20:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:16.584 05:20:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:16.584 05:20:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:16.584 05:20:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:16.584 05:20:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:16.584 05:20:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:16.584 05:20:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:16.584 05:20:53 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:16.584 05:20:53 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:16.584 05:20:53 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:16.584 05:20:53 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:16.584 05:20:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:16.584 05:20:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:16.584 05:20:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:16.584 Found 0000:0a:00.0 (0x8086 
- 0x159b) 00:25:16.584 05:20:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:16.584 05:20:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:16.584 05:20:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:16.584 05:20:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:16.584 05:20:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:16.584 05:20:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:16.584 05:20:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:16.584 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:16.584 05:20:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:16.584 05:20:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:16.584 05:20:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:16.584 05:20:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:16.584 05:20:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:16.584 05:20:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:16.584 05:20:53 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:16.584 05:20:53 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:16.584 05:20:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:16.584 05:20:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:16.584 05:20:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:16.584 05:20:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:16.584 05:20:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:16.584 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:16.584 05:20:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:16.584 05:20:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:16.584 05:20:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:16.584 05:20:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:16.584 05:20:53 -- 
nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:16.584 05:20:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:16.584 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:16.584 05:20:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:16.584 05:20:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:16.584 05:20:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:16.584 05:20:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:16.584 05:20:53 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:16.584 05:20:53 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:16.584 05:20:53 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:16.584 05:20:53 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:16.584 05:20:53 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:16.584 05:20:53 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:16.584 05:20:53 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:16.584 05:20:53 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:16.584 05:20:53 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:16.584 05:20:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:16.584 05:20:53 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:16.584 05:20:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:16.584 05:20:53 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:16.584 05:20:53 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:16.584 05:20:53 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:16.584 05:20:53 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:16.584 05:20:53 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:16.584 05:20:53 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:16.584 05:20:53 -- nvmf/common.sh@260 -- # ip 
netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:16.584 05:20:53 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:16.584 05:20:53 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:16.584 05:20:53 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:16.584 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:16.584 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:25:16.584 00:25:16.584 --- 10.0.0.2 ping statistics --- 00:25:16.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:16.584 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:25:16.584 05:20:53 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:16.584 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:16.584 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:25:16.584 00:25:16.584 --- 10.0.0.1 ping statistics --- 00:25:16.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:16.584 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:25:16.585 05:20:53 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:16.585 05:20:53 -- nvmf/common.sh@411 -- # return 0 00:25:16.585 05:20:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:16.585 05:20:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:16.585 05:20:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:16.585 05:20:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:16.585 05:20:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:16.585 05:20:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:16.585 05:20:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:16.585 05:20:53 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:25:16.585 05:20:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:16.585 05:20:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:16.585 05:20:53 -- 
common/autotest_common.sh@10 -- # set +x 00:25:16.585 05:20:53 -- nvmf/common.sh@470 -- # nvmfpid=1955313 00:25:16.585 05:20:53 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:16.585 05:20:53 -- nvmf/common.sh@471 -- # waitforlisten 1955313 00:25:16.585 05:20:53 -- common/autotest_common.sh@817 -- # '[' -z 1955313 ']' 00:25:16.585 05:20:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:16.585 05:20:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:16.585 05:20:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:16.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:16.585 05:20:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:16.585 05:20:53 -- common/autotest_common.sh@10 -- # set +x 00:25:16.585 [2024-04-24 05:20:53.636381] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:25:16.585 [2024-04-24 05:20:53.636454] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:16.585 EAL: No free 2048 kB hugepages reported on node 1 00:25:16.585 [2024-04-24 05:20:53.673336] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:16.585 [2024-04-24 05:20:53.700225] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:16.585 [2024-04-24 05:20:53.783510] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
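The nvmf_tcp_init trace above (nvmf/common.sh@229-268) splits one physical NIC pair across network namespaces so a single host can act as both target (10.0.0.2 on cvl_0_0 inside cvl_0_0_ns_spdk) and initiator (10.0.0.1 on cvl_0_1 in the default namespace). A dry-run sketch of that wiring, with the interface names from this rig; the real commands need root, so by default they are only echoed:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace wiring from the trace above. Set DRYRUN=0
# (as root, with the right interfaces present) to actually apply it.
NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0      # gets 10.0.0.2 inside the namespace
INITIATOR_IF=cvl_0_1   # keeps 10.0.0.1 in the default namespace

run() {
    # Echo the command in dry-run mode, execute it otherwise.
    if [ "${DRYRUN:-1}" = 1 ]; then echo "+ $*"; else "$@"; fi
}

run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
# Connectivity checks, mirroring the two pings in the log.
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

With the target's port isolated in its own namespace, traffic between 10.0.0.1 and 10.0.0.2 traverses the wire between the two physical ports instead of being short-circuited by the local routing table.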
00:25:16.585 [2024-04-24 05:20:53.783566] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:16.585 [2024-04-24 05:20:53.783588] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:16.585 [2024-04-24 05:20:53.783599] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:16.585 [2024-04-24 05:20:53.783609] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:16.585 [2024-04-24 05:20:53.783708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:16.585 [2024-04-24 05:20:53.783771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:16.585 [2024-04-24 05:20:53.783840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:16.585 [2024-04-24 05:20:53.783842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:16.843 05:20:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:16.843 05:20:53 -- common/autotest_common.sh@850 -- # return 0 00:25:16.843 05:20:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:16.843 05:20:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:16.843 05:20:53 -- common/autotest_common.sh@10 -- # set +x 00:25:16.843 05:20:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:16.843 05:20:53 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:16.843 05:20:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:16.843 05:20:53 -- common/autotest_common.sh@10 -- # set +x 00:25:16.843 [2024-04-24 05:20:53.922278] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:16.843 05:20:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:16.843 05:20:53 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:16.843 05:20:53 -- 
target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:16.843 05:20:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:16.843 05:20:53 -- common/autotest_common.sh@10 -- # set +x 00:25:16.843 05:20:53 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:16.843 05:20:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:16.843 05:20:53 -- target/shutdown.sh@28 -- # cat 00:25:16.843 05:20:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:16.843 05:20:53 -- target/shutdown.sh@28 -- # cat 00:25:16.843 05:20:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:16.843 05:20:53 -- target/shutdown.sh@28 -- # cat 00:25:16.843 05:20:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:16.843 05:20:53 -- target/shutdown.sh@28 -- # cat 00:25:16.843 05:20:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:16.843 05:20:53 -- target/shutdown.sh@28 -- # cat 00:25:16.843 05:20:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:16.843 05:20:53 -- target/shutdown.sh@28 -- # cat 00:25:16.843 05:20:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:16.843 05:20:53 -- target/shutdown.sh@28 -- # cat 00:25:16.843 05:20:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:16.843 05:20:53 -- target/shutdown.sh@28 -- # cat 00:25:16.843 05:20:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:16.843 05:20:53 -- target/shutdown.sh@28 -- # cat 00:25:16.843 05:20:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:16.843 05:20:53 -- target/shutdown.sh@28 -- # cat 00:25:16.843 05:20:53 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:16.843 05:20:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:16.843 05:20:53 -- common/autotest_common.sh@10 -- # set +x 00:25:16.843 Malloc1 00:25:16.843 [2024-04-24 05:20:53.999312] 
tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:16.843 Malloc2 00:25:16.843 Malloc3 00:25:17.101 Malloc4 00:25:17.101 Malloc5 00:25:17.101 Malloc6 00:25:17.101 Malloc7 00:25:17.101 Malloc8 00:25:17.359 Malloc9 00:25:17.359 Malloc10 00:25:17.359 05:20:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:17.359 05:20:54 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:25:17.359 05:20:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:17.359 05:20:54 -- common/autotest_common.sh@10 -- # set +x 00:25:17.359 05:20:54 -- target/shutdown.sh@103 -- # perfpid=1955489 00:25:17.359 05:20:54 -- target/shutdown.sh@104 -- # waitforlisten 1955489 /var/tmp/bdevperf.sock 00:25:17.359 05:20:54 -- common/autotest_common.sh@817 -- # '[' -z 1955489 ']' 00:25:17.359 05:20:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:17.360 05:20:54 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:17.360 05:20:54 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:17.360 05:20:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:17.360 05:20:54 -- nvmf/common.sh@521 -- # config=() 00:25:17.360 05:20:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:17.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:25:17.360 05:20:54 -- nvmf/common.sh@521 -- # local subsystem config 00:25:17.360 05:20:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:17.360 05:20:54 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:17.360 05:20:54 -- common/autotest_common.sh@10 -- # set +x 00:25:17.360 05:20:54 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:17.360 { 00:25:17.360 "params": { 00:25:17.360 "name": "Nvme$subsystem", 00:25:17.360 "trtype": "$TEST_TRANSPORT", 00:25:17.360 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:17.360 "adrfam": "ipv4", 00:25:17.360 "trsvcid": "$NVMF_PORT", 00:25:17.360 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:17.360 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:17.360 "hdgst": ${hdgst:-false}, 00:25:17.360 "ddgst": ${ddgst:-false} 00:25:17.360 }, 00:25:17.360 "method": "bdev_nvme_attach_controller" 00:25:17.360 } 00:25:17.360 EOF 00:25:17.360 )") 00:25:17.360 05:20:54 -- nvmf/common.sh@543 -- # cat 00:25:17.360 05:20:54 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:17.360 05:20:54 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:17.360 { 00:25:17.360 "params": { 00:25:17.360 "name": "Nvme$subsystem", 00:25:17.360 "trtype": "$TEST_TRANSPORT", 00:25:17.360 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:17.360 "adrfam": "ipv4", 00:25:17.360 "trsvcid": "$NVMF_PORT", 00:25:17.360 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:17.360 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:17.360 "hdgst": ${hdgst:-false}, 00:25:17.360 "ddgst": ${ddgst:-false} 00:25:17.360 }, 00:25:17.360 "method": "bdev_nvme_attach_controller" 00:25:17.360 } 00:25:17.360 EOF 00:25:17.360 )") 00:25:17.360 05:20:54 -- nvmf/common.sh@543 -- # cat 00:25:17.360 05:20:54 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:17.360 05:20:54 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:17.360 { 00:25:17.360 "params": { 00:25:17.360 "name": "Nvme$subsystem", 00:25:17.360 "trtype": "$TEST_TRANSPORT", 
00:25:17.360 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:17.360 "adrfam": "ipv4", 00:25:17.360 "trsvcid": "$NVMF_PORT", 00:25:17.360 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:17.360 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:17.360 "hdgst": ${hdgst:-false}, 00:25:17.360 "ddgst": ${ddgst:-false} 00:25:17.360 }, 00:25:17.360 "method": "bdev_nvme_attach_controller" 00:25:17.360 } 00:25:17.360 EOF 00:25:17.360 )") 00:25:17.360 05:20:54 -- nvmf/common.sh@543 -- # cat 00:25:17.360 05:20:54 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:17.360 05:20:54 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:17.360 { 00:25:17.360 "params": { 00:25:17.360 "name": "Nvme$subsystem", 00:25:17.360 "trtype": "$TEST_TRANSPORT", 00:25:17.360 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:17.360 "adrfam": "ipv4", 00:25:17.360 "trsvcid": "$NVMF_PORT", 00:25:17.360 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:17.360 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:17.360 "hdgst": ${hdgst:-false}, 00:25:17.360 "ddgst": ${ddgst:-false} 00:25:17.360 }, 00:25:17.360 "method": "bdev_nvme_attach_controller" 00:25:17.360 } 00:25:17.360 EOF 00:25:17.360 )") 00:25:17.360 05:20:54 -- nvmf/common.sh@543 -- # cat 00:25:17.360 05:20:54 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:17.360 05:20:54 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:17.360 { 00:25:17.360 "params": { 00:25:17.360 "name": "Nvme$subsystem", 00:25:17.360 "trtype": "$TEST_TRANSPORT", 00:25:17.360 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:17.360 "adrfam": "ipv4", 00:25:17.360 "trsvcid": "$NVMF_PORT", 00:25:17.360 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:17.360 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:17.360 "hdgst": ${hdgst:-false}, 00:25:17.360 "ddgst": ${ddgst:-false} 00:25:17.360 }, 00:25:17.360 "method": "bdev_nvme_attach_controller" 00:25:17.360 } 00:25:17.360 EOF 00:25:17.360 )") 00:25:17.360 05:20:54 -- nvmf/common.sh@543 -- 
# cat 00:25:17.360 05:20:54 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:17.360 05:20:54 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:17.360 { 00:25:17.360 "params": { 00:25:17.360 "name": "Nvme$subsystem", 00:25:17.360 "trtype": "$TEST_TRANSPORT", 00:25:17.360 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:17.360 "adrfam": "ipv4", 00:25:17.360 "trsvcid": "$NVMF_PORT", 00:25:17.360 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:17.360 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:17.360 "hdgst": ${hdgst:-false}, 00:25:17.360 "ddgst": ${ddgst:-false} 00:25:17.360 }, 00:25:17.360 "method": "bdev_nvme_attach_controller" 00:25:17.360 } 00:25:17.360 EOF 00:25:17.360 )") 00:25:17.360 05:20:54 -- nvmf/common.sh@543 -- # cat 00:25:17.360 05:20:54 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:17.360 05:20:54 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:17.360 { 00:25:17.360 "params": { 00:25:17.360 "name": "Nvme$subsystem", 00:25:17.360 "trtype": "$TEST_TRANSPORT", 00:25:17.360 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:17.360 "adrfam": "ipv4", 00:25:17.360 "trsvcid": "$NVMF_PORT", 00:25:17.360 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:17.360 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:17.360 "hdgst": ${hdgst:-false}, 00:25:17.360 "ddgst": ${ddgst:-false} 00:25:17.360 }, 00:25:17.360 "method": "bdev_nvme_attach_controller" 00:25:17.360 } 00:25:17.360 EOF 00:25:17.360 )") 00:25:17.360 05:20:54 -- nvmf/common.sh@543 -- # cat 00:25:17.360 05:20:54 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:17.360 05:20:54 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:17.360 { 00:25:17.360 "params": { 00:25:17.360 "name": "Nvme$subsystem", 00:25:17.360 "trtype": "$TEST_TRANSPORT", 00:25:17.360 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:17.360 "adrfam": "ipv4", 00:25:17.360 "trsvcid": "$NVMF_PORT", 00:25:17.360 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:17.360 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:25:17.360 "hdgst": ${hdgst:-false}, 00:25:17.360 "ddgst": ${ddgst:-false} 00:25:17.360 }, 00:25:17.360 "method": "bdev_nvme_attach_controller" 00:25:17.360 } 00:25:17.360 EOF 00:25:17.360 )") 00:25:17.360 05:20:54 -- nvmf/common.sh@543 -- # cat 00:25:17.360 05:20:54 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:17.360 05:20:54 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:17.360 { 00:25:17.360 "params": { 00:25:17.360 "name": "Nvme$subsystem", 00:25:17.360 "trtype": "$TEST_TRANSPORT", 00:25:17.360 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:17.360 "adrfam": "ipv4", 00:25:17.360 "trsvcid": "$NVMF_PORT", 00:25:17.360 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:17.360 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:17.360 "hdgst": ${hdgst:-false}, 00:25:17.360 "ddgst": ${ddgst:-false} 00:25:17.360 }, 00:25:17.360 "method": "bdev_nvme_attach_controller" 00:25:17.360 } 00:25:17.360 EOF 00:25:17.360 )") 00:25:17.360 05:20:54 -- nvmf/common.sh@543 -- # cat 00:25:17.360 05:20:54 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:17.360 05:20:54 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:17.360 { 00:25:17.360 "params": { 00:25:17.360 "name": "Nvme$subsystem", 00:25:17.360 "trtype": "$TEST_TRANSPORT", 00:25:17.360 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:17.360 "adrfam": "ipv4", 00:25:17.360 "trsvcid": "$NVMF_PORT", 00:25:17.360 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:17.360 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:17.360 "hdgst": ${hdgst:-false}, 00:25:17.360 "ddgst": ${ddgst:-false} 00:25:17.360 }, 00:25:17.360 "method": "bdev_nvme_attach_controller" 00:25:17.360 } 00:25:17.360 EOF 00:25:17.360 )") 00:25:17.360 05:20:54 -- nvmf/common.sh@543 -- # cat 00:25:17.360 05:20:54 -- nvmf/common.sh@545 -- # jq . 
00:25:17.360 05:20:54 -- nvmf/common.sh@546 -- # IFS=, 00:25:17.360 05:20:54 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:25:17.360 "params": { 00:25:17.361 "name": "Nvme1", 00:25:17.361 "trtype": "tcp", 00:25:17.361 "traddr": "10.0.0.2", 00:25:17.361 "adrfam": "ipv4", 00:25:17.361 "trsvcid": "4420", 00:25:17.361 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:17.361 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:17.361 "hdgst": false, 00:25:17.361 "ddgst": false 00:25:17.361 }, 00:25:17.361 "method": "bdev_nvme_attach_controller" 00:25:17.361 },{ 00:25:17.361 "params": { 00:25:17.361 "name": "Nvme2", 00:25:17.361 "trtype": "tcp", 00:25:17.361 "traddr": "10.0.0.2", 00:25:17.361 "adrfam": "ipv4", 00:25:17.361 "trsvcid": "4420", 00:25:17.361 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:17.361 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:17.361 "hdgst": false, 00:25:17.361 "ddgst": false 00:25:17.361 }, 00:25:17.361 "method": "bdev_nvme_attach_controller" 00:25:17.361 },{ 00:25:17.361 "params": { 00:25:17.361 "name": "Nvme3", 00:25:17.361 "trtype": "tcp", 00:25:17.361 "traddr": "10.0.0.2", 00:25:17.361 "adrfam": "ipv4", 00:25:17.361 "trsvcid": "4420", 00:25:17.361 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:17.361 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:17.361 "hdgst": false, 00:25:17.361 "ddgst": false 00:25:17.361 }, 00:25:17.361 "method": "bdev_nvme_attach_controller" 00:25:17.361 },{ 00:25:17.361 "params": { 00:25:17.361 "name": "Nvme4", 00:25:17.361 "trtype": "tcp", 00:25:17.361 "traddr": "10.0.0.2", 00:25:17.361 "adrfam": "ipv4", 00:25:17.361 "trsvcid": "4420", 00:25:17.361 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:17.361 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:17.361 "hdgst": false, 00:25:17.361 "ddgst": false 00:25:17.361 }, 00:25:17.361 "method": "bdev_nvme_attach_controller" 00:25:17.361 },{ 00:25:17.361 "params": { 00:25:17.361 "name": "Nvme5", 00:25:17.361 "trtype": "tcp", 00:25:17.361 "traddr": "10.0.0.2", 00:25:17.361 "adrfam": "ipv4", 
00:25:17.361 "trsvcid": "4420", 00:25:17.361 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:17.361 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:17.361 "hdgst": false, 00:25:17.361 "ddgst": false 00:25:17.361 }, 00:25:17.361 "method": "bdev_nvme_attach_controller" 00:25:17.361 },{ 00:25:17.361 "params": { 00:25:17.361 "name": "Nvme6", 00:25:17.361 "trtype": "tcp", 00:25:17.361 "traddr": "10.0.0.2", 00:25:17.361 "adrfam": "ipv4", 00:25:17.361 "trsvcid": "4420", 00:25:17.361 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:17.361 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:17.361 "hdgst": false, 00:25:17.361 "ddgst": false 00:25:17.361 }, 00:25:17.361 "method": "bdev_nvme_attach_controller" 00:25:17.361 },{ 00:25:17.361 "params": { 00:25:17.361 "name": "Nvme7", 00:25:17.361 "trtype": "tcp", 00:25:17.361 "traddr": "10.0.0.2", 00:25:17.361 "adrfam": "ipv4", 00:25:17.361 "trsvcid": "4420", 00:25:17.361 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:17.361 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:17.361 "hdgst": false, 00:25:17.361 "ddgst": false 00:25:17.361 }, 00:25:17.361 "method": "bdev_nvme_attach_controller" 00:25:17.361 },{ 00:25:17.361 "params": { 00:25:17.361 "name": "Nvme8", 00:25:17.361 "trtype": "tcp", 00:25:17.361 "traddr": "10.0.0.2", 00:25:17.361 "adrfam": "ipv4", 00:25:17.361 "trsvcid": "4420", 00:25:17.361 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:17.361 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:17.361 "hdgst": false, 00:25:17.361 "ddgst": false 00:25:17.361 }, 00:25:17.361 "method": "bdev_nvme_attach_controller" 00:25:17.361 },{ 00:25:17.361 "params": { 00:25:17.361 "name": "Nvme9", 00:25:17.361 "trtype": "tcp", 00:25:17.361 "traddr": "10.0.0.2", 00:25:17.361 "adrfam": "ipv4", 00:25:17.361 "trsvcid": "4420", 00:25:17.361 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:17.361 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:17.361 "hdgst": false, 00:25:17.361 "ddgst": false 00:25:17.361 }, 00:25:17.361 "method": "bdev_nvme_attach_controller" 
00:25:17.361 },{ 00:25:17.361 "params": { 00:25:17.361 "name": "Nvme10", 00:25:17.361 "trtype": "tcp", 00:25:17.361 "traddr": "10.0.0.2", 00:25:17.361 "adrfam": "ipv4", 00:25:17.361 "trsvcid": "4420", 00:25:17.361 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:17.361 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:17.361 "hdgst": false, 00:25:17.361 "ddgst": false 00:25:17.361 }, 00:25:17.361 "method": "bdev_nvme_attach_controller" 00:25:17.361 }' 00:25:17.361 [2024-04-24 05:20:54.511306] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:25:17.361 [2024-04-24 05:20:54.511393] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1955489 ] 00:25:17.361 EAL: No free 2048 kB hugepages reported on node 1 00:25:17.361 [2024-04-24 05:20:54.547042] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:17.361 [2024-04-24 05:20:54.576283] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:17.619 [2024-04-24 05:20:54.662084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:18.992 Running I/O for 10 seconds... 
00:25:19.249 05:20:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:19.249 05:20:56 -- common/autotest_common.sh@850 -- # return 0 00:25:19.250 05:20:56 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:19.250 05:20:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.250 05:20:56 -- common/autotest_common.sh@10 -- # set +x 00:25:19.250 05:20:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.250 05:20:56 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:19.250 05:20:56 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:19.250 05:20:56 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:25:19.250 05:20:56 -- target/shutdown.sh@57 -- # local ret=1 00:25:19.250 05:20:56 -- target/shutdown.sh@58 -- # local i 00:25:19.250 05:20:56 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:25:19.250 05:20:56 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:19.250 05:20:56 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:19.250 05:20:56 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:19.250 05:20:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.250 05:20:56 -- common/autotest_common.sh@10 -- # set +x 00:25:19.507 05:20:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.507 05:20:56 -- target/shutdown.sh@60 -- # read_io_count=10 00:25:19.507 05:20:56 -- target/shutdown.sh@63 -- # '[' 10 -ge 100 ']' 00:25:19.507 05:20:56 -- target/shutdown.sh@67 -- # sleep 0.25 00:25:19.765 05:20:56 -- target/shutdown.sh@59 -- # (( i-- )) 00:25:19.765 05:20:56 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:19.765 05:20:56 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:19.765 05:20:56 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:19.765 05:20:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.765 
05:20:56 -- common/autotest_common.sh@10 -- # set +x 00:25:19.765 05:20:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.765 05:20:56 -- target/shutdown.sh@60 -- # read_io_count=131 00:25:19.765 05:20:56 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:25:19.766 05:20:56 -- target/shutdown.sh@64 -- # ret=0 00:25:19.766 05:20:56 -- target/shutdown.sh@65 -- # break 00:25:19.766 05:20:56 -- target/shutdown.sh@69 -- # return 0 00:25:19.766 05:20:56 -- target/shutdown.sh@110 -- # killprocess 1955489 00:25:19.766 05:20:56 -- common/autotest_common.sh@936 -- # '[' -z 1955489 ']' 00:25:19.766 05:20:56 -- common/autotest_common.sh@940 -- # kill -0 1955489 00:25:19.766 05:20:56 -- common/autotest_common.sh@941 -- # uname 00:25:19.766 05:20:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:19.766 05:20:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1955489 00:25:19.766 05:20:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:19.766 05:20:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:19.766 05:20:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1955489' 00:25:19.766 killing process with pid 1955489 00:25:19.766 05:20:56 -- common/autotest_common.sh@955 -- # kill 1955489 00:25:19.766 05:20:56 -- common/autotest_common.sh@960 -- # wait 1955489 00:25:19.766 Received shutdown signal, test time was about 0.719067 seconds 00:25:19.766 00:25:19.766 Latency(us) 00:25:19.766 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:19.766 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:19.766 Verification LBA range: start 0x0 length 0x400 00:25:19.766 Nvme1n1 : 0.71 269.90 16.87 0.00 0.00 233522.82 19806.44 256318.58 00:25:19.766 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:19.766 Verification LBA range: start 0x0 length 0x400 00:25:19.766 Nvme2n1 : 0.68 189.30 11.83 0.00 0.00 
323324.59 78837.38 211268.65 00:25:19.766 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:19.766 Verification LBA range: start 0x0 length 0x400 00:25:19.766 Nvme3n1 : 0.70 272.82 17.05 0.00 0.00 217863.21 17670.45 253211.69 00:25:19.766 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:19.766 Verification LBA range: start 0x0 length 0x400 00:25:19.766 Nvme4n1 : 0.70 182.15 11.38 0.00 0.00 318532.27 72235.24 270299.59 00:25:19.766 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:19.766 Verification LBA range: start 0x0 length 0x400 00:25:19.766 Nvme5n1 : 0.72 267.32 16.71 0.00 0.00 211208.98 24563.86 254765.13 00:25:19.766 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:19.766 Verification LBA range: start 0x0 length 0x400 00:25:19.766 Nvme6n1 : 0.70 183.92 11.50 0.00 0.00 296746.86 27767.85 285834.05 00:25:19.766 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:19.766 Verification LBA range: start 0x0 length 0x400 00:25:19.766 Nvme7n1 : 0.68 281.28 17.58 0.00 0.00 187284.80 14660.65 254765.13 00:25:19.766 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:19.766 Verification LBA range: start 0x0 length 0x400 00:25:19.766 Nvme8n1 : 0.68 186.92 11.68 0.00 0.00 272197.40 66409.81 211268.65 00:25:19.766 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:19.766 Verification LBA range: start 0x0 length 0x400 00:25:19.766 Nvme9n1 : 0.72 246.07 15.38 0.00 0.00 199040.78 18350.08 229910.00 00:25:19.766 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:19.766 Verification LBA range: start 0x0 length 0x400 00:25:19.766 Nvme10n1 : 0.69 185.25 11.58 0.00 0.00 258361.27 32234.00 246997.90 00:25:19.766 =================================================================================================================== 00:25:19.766 Total : 
2264.94 141.56 0.00 0.00 243851.57 14660.65 285834.05 00:25:20.024 05:20:57 -- target/shutdown.sh@113 -- # sleep 1 00:25:20.954 05:20:58 -- target/shutdown.sh@114 -- # kill -0 1955313 00:25:20.954 05:20:58 -- target/shutdown.sh@116 -- # stoptarget 00:25:20.954 05:20:58 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:25:20.954 05:20:58 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:20.954 05:20:58 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:20.954 05:20:58 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:20.954 05:20:58 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:20.954 05:20:58 -- nvmf/common.sh@117 -- # sync 00:25:20.954 05:20:58 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:20.954 05:20:58 -- nvmf/common.sh@120 -- # set +e 00:25:20.954 05:20:58 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:20.954 05:20:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:20.954 rmmod nvme_tcp 00:25:20.954 rmmod nvme_fabrics 00:25:20.954 rmmod nvme_keyring 00:25:21.212 05:20:58 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:21.212 05:20:58 -- nvmf/common.sh@124 -- # set -e 00:25:21.212 05:20:58 -- nvmf/common.sh@125 -- # return 0 00:25:21.212 05:20:58 -- nvmf/common.sh@478 -- # '[' -n 1955313 ']' 00:25:21.212 05:20:58 -- nvmf/common.sh@479 -- # killprocess 1955313 00:25:21.212 05:20:58 -- common/autotest_common.sh@936 -- # '[' -z 1955313 ']' 00:25:21.212 05:20:58 -- common/autotest_common.sh@940 -- # kill -0 1955313 00:25:21.212 05:20:58 -- common/autotest_common.sh@941 -- # uname 00:25:21.212 05:20:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:21.212 05:20:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1955313 00:25:21.212 05:20:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:21.212 05:20:58 -- 
common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:21.212 05:20:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1955313' 00:25:21.212 killing process with pid 1955313 00:25:21.212 05:20:58 -- common/autotest_common.sh@955 -- # kill 1955313 00:25:21.212 05:20:58 -- common/autotest_common.sh@960 -- # wait 1955313 00:25:21.778 05:20:58 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:21.778 05:20:58 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:21.778 05:20:58 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:21.778 05:20:58 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:21.778 05:20:58 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:21.778 05:20:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:21.778 05:20:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:21.778 05:20:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:23.678 05:21:00 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:23.678 00:25:23.678 real 0m7.387s 00:25:23.678 user 0m21.738s 00:25:23.678 sys 0m1.415s 00:25:23.678 05:21:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:23.678 05:21:00 -- common/autotest_common.sh@10 -- # set +x 00:25:23.678 ************************************ 00:25:23.678 END TEST nvmf_shutdown_tc2 00:25:23.678 ************************************ 00:25:23.678 05:21:00 -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:25:23.678 05:21:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:23.678 05:21:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:23.678 05:21:00 -- common/autotest_common.sh@10 -- # set +x 00:25:23.678 ************************************ 00:25:23.678 START TEST nvmf_shutdown_tc3 00:25:23.678 ************************************ 00:25:23.678 05:21:00 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc3 00:25:23.678 05:21:00 -- 
target/shutdown.sh@121 -- # starttarget 00:25:23.678 05:21:00 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:23.678 05:21:00 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:23.678 05:21:00 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:23.678 05:21:00 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:23.678 05:21:00 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:23.678 05:21:00 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:23.678 05:21:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:23.678 05:21:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:23.678 05:21:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:23.678 05:21:00 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:23.678 05:21:00 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:23.678 05:21:00 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:23.678 05:21:00 -- common/autotest_common.sh@10 -- # set +x 00:25:23.678 05:21:00 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:23.678 05:21:00 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:23.678 05:21:00 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:23.678 05:21:00 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:23.678 05:21:00 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:23.678 05:21:00 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:23.678 05:21:00 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:23.678 05:21:00 -- nvmf/common.sh@295 -- # net_devs=() 00:25:23.678 05:21:00 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:23.678 05:21:00 -- nvmf/common.sh@296 -- # e810=() 00:25:23.678 05:21:00 -- nvmf/common.sh@296 -- # local -ga e810 00:25:23.678 05:21:00 -- nvmf/common.sh@297 -- # x722=() 00:25:23.678 05:21:00 -- nvmf/common.sh@297 -- # local -ga x722 00:25:23.678 05:21:00 -- nvmf/common.sh@298 -- # mlx=() 00:25:23.678 05:21:00 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:23.678 05:21:00 -- 
nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:23.678 05:21:00 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:23.678 05:21:00 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:23.678 05:21:00 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:23.678 05:21:00 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:23.678 05:21:00 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:23.678 05:21:00 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:23.678 05:21:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:23.678 05:21:00 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:23.678 05:21:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:23.678 05:21:00 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:23.678 05:21:00 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:23.678 05:21:00 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:23.678 05:21:00 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:23.678 05:21:00 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:23.678 05:21:00 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:23.678 05:21:00 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:23.678 05:21:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:23.678 05:21:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:23.678 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:23.678 05:21:00 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:23.678 05:21:00 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:23.678 05:21:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:23.678 05:21:00 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:23.678 05:21:00 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:23.678 
05:21:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:23.678 05:21:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:23.678 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:23.678 05:21:00 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:23.678 05:21:00 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:23.678 05:21:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:23.678 05:21:00 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:23.678 05:21:00 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:23.678 05:21:00 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:23.678 05:21:00 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:23.678 05:21:00 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:23.678 05:21:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:23.678 05:21:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:23.678 05:21:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:23.679 05:21:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:23.679 05:21:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:23.679 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:23.679 05:21:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:23.679 05:21:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:23.679 05:21:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:23.679 05:21:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:23.679 05:21:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:23.679 05:21:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:23.679 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:23.679 05:21:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:23.679 05:21:00 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:23.679 05:21:00 -- 
nvmf/common.sh@403 -- # is_hw=yes 00:25:23.679 05:21:00 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:23.679 05:21:00 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:23.679 05:21:00 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:23.679 05:21:00 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:23.679 05:21:00 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:23.679 05:21:00 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:23.679 05:21:00 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:23.679 05:21:00 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:23.679 05:21:00 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:23.679 05:21:00 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:23.679 05:21:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:23.679 05:21:00 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:23.679 05:21:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:23.679 05:21:00 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:23.679 05:21:00 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:23.937 05:21:00 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:23.937 05:21:01 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:23.937 05:21:01 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:23.937 05:21:01 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:23.937 05:21:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:23.937 05:21:01 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:23.937 05:21:01 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:23.937 05:21:01 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:23.937 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:23.937 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:25:23.937 00:25:23.937 --- 10.0.0.2 ping statistics --- 00:25:23.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:23.937 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:25:23.937 05:21:01 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:23.937 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:23.937 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:25:23.937 00:25:23.937 --- 10.0.0.1 ping statistics --- 00:25:23.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:23.937 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:25:23.937 05:21:01 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:23.937 05:21:01 -- nvmf/common.sh@411 -- # return 0 00:25:23.937 05:21:01 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:23.937 05:21:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:23.937 05:21:01 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:23.937 05:21:01 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:23.937 05:21:01 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:23.937 05:21:01 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:23.937 05:21:01 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:23.937 05:21:01 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:25:23.937 05:21:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:23.937 05:21:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:23.937 05:21:01 -- common/autotest_common.sh@10 -- # set +x 00:25:23.937 05:21:01 -- nvmf/common.sh@470 -- # nvmfpid=1956457 00:25:23.937 05:21:01 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:23.937 05:21:01 -- nvmf/common.sh@471 -- # waitforlisten 
1956457 00:25:23.937 05:21:01 -- common/autotest_common.sh@817 -- # '[' -z 1956457 ']' 00:25:23.937 05:21:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:23.937 05:21:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:23.937 05:21:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:23.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:23.937 05:21:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:23.937 05:21:01 -- common/autotest_common.sh@10 -- # set +x 00:25:23.937 [2024-04-24 05:21:01.152412] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:25:23.937 [2024-04-24 05:21:01.152516] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:23.937 EAL: No free 2048 kB hugepages reported on node 1 00:25:23.937 [2024-04-24 05:21:01.193072] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:24.195 [2024-04-24 05:21:01.222585] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:24.195 [2024-04-24 05:21:01.310250] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:24.195 [2024-04-24 05:21:01.310318] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:24.195 [2024-04-24 05:21:01.310349] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:24.195 [2024-04-24 05:21:01.310361] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
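The `nvmf_tcp_init` sequence logged above (nvmf/common.sh@229-268) reduces to a short series of `ip`/`iptables` commands: flush both E810 port netdevs, move the target-side port into its own network namespace, address each side, open TCP port 4420, and ping both directions. A minimal sketch of those steps, using the interface and namespace names from this run; the `DRY_RUN` switch is an illustrative addition so the commands can be previewed without root:

```shell
# Sketch of the nvmf_tcp_init steps from nvmf/common.sh: isolate the
# target-side port in a network namespace and give each side a /24 address.
# Defaults mirror this log; DRY_RUN=1 prints commands instead of running them
# (an addition for illustration -- the real helper always executes them).
nvmf_tcp_init_sketch() {
  local target_if=${1:-cvl_0_0}
  local initiator_if=${2:-cvl_0_1}
  local ns=${3:-cvl_0_0_ns_spdk}
  run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$@"; else "$@"; fi; }

  run ip -4 addr flush "$target_if"
  run ip -4 addr flush "$initiator_if"
  run ip netns add "$ns"
  run ip link set "$target_if" netns "$ns"
  run ip addr add 10.0.0.1/24 dev "$initiator_if"
  run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
  run ip link set "$initiator_if" up
  run ip netns exec "$ns" ip link set "$target_if" up
  run ip netns exec "$ns" ip link set lo up
  run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
  run ping -c 1 10.0.0.2
  run ip netns exec "$ns" ping -c 1 10.0.0.1
}
```

With this topology in place, the target (`nvmf_tgt`) runs inside the namespace via `ip netns exec`, which is why every target-side command in the log carries that prefix.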
00:25:24.195 [2024-04-24 05:21:01.310372] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:24.195 [2024-04-24 05:21:01.310520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:24.195 [2024-04-24 05:21:01.310579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:24.195 [2024-04-24 05:21:01.310640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:24.195 [2024-04-24 05:21:01.310641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:24.195 05:21:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:24.195 05:21:01 -- common/autotest_common.sh@850 -- # return 0 00:25:24.195 05:21:01 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:24.195 05:21:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:24.195 05:21:01 -- common/autotest_common.sh@10 -- # set +x 00:25:24.195 05:21:01 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:24.195 05:21:01 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:24.195 05:21:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:24.195 05:21:01 -- common/autotest_common.sh@10 -- # set +x 00:25:24.195 [2024-04-24 05:21:01.455227] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:24.196 05:21:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:24.196 05:21:01 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:24.196 05:21:01 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:24.196 05:21:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:24.196 05:21:01 -- common/autotest_common.sh@10 -- # set +x 00:25:24.454 05:21:01 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:24.454 05:21:01 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 
00:25:24.454 05:21:01 -- target/shutdown.sh@28 -- # cat 00:25:24.454 05:21:01 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:24.454 05:21:01 -- target/shutdown.sh@28 -- # cat 00:25:24.454 05:21:01 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:24.454 05:21:01 -- target/shutdown.sh@28 -- # cat 00:25:24.454 05:21:01 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:24.454 05:21:01 -- target/shutdown.sh@28 -- # cat 00:25:24.454 05:21:01 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:24.454 05:21:01 -- target/shutdown.sh@28 -- # cat 00:25:24.454 05:21:01 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:24.454 05:21:01 -- target/shutdown.sh@28 -- # cat 00:25:24.454 05:21:01 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:24.454 05:21:01 -- target/shutdown.sh@28 -- # cat 00:25:24.454 05:21:01 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:24.454 05:21:01 -- target/shutdown.sh@28 -- # cat 00:25:24.454 05:21:01 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:24.454 05:21:01 -- target/shutdown.sh@28 -- # cat 00:25:24.454 05:21:01 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:24.454 05:21:01 -- target/shutdown.sh@28 -- # cat 00:25:24.454 05:21:01 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:24.454 05:21:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:24.454 05:21:01 -- common/autotest_common.sh@10 -- # set +x 00:25:24.454 Malloc1 00:25:24.454 [2024-04-24 05:21:01.534260] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:24.454 Malloc2 00:25:24.454 Malloc3 00:25:24.454 Malloc4 00:25:24.454 Malloc5 00:25:24.712 Malloc6 00:25:24.712 Malloc7 00:25:24.712 Malloc8 00:25:24.712 Malloc9 00:25:24.712 Malloc10 00:25:24.970 05:21:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:24.970 05:21:01 -- target/shutdown.sh@36 -- # 
timing_exit create_subsystems 00:25:24.970 05:21:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:24.970 05:21:01 -- common/autotest_common.sh@10 -- # set +x 00:25:24.970 05:21:02 -- target/shutdown.sh@125 -- # perfpid=1956576 00:25:24.970 05:21:02 -- target/shutdown.sh@126 -- # waitforlisten 1956576 /var/tmp/bdevperf.sock 00:25:24.970 05:21:02 -- common/autotest_common.sh@817 -- # '[' -z 1956576 ']' 00:25:24.970 05:21:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:24.970 05:21:02 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:24.970 05:21:02 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:24.970 05:21:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:24.970 05:21:02 -- nvmf/common.sh@521 -- # config=() 00:25:24.970 05:21:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:24.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:25:24.970 05:21:02 -- nvmf/common.sh@521 -- # local subsystem config 00:25:24.970 05:21:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:24.970 05:21:02 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:24.970 05:21:02 -- common/autotest_common.sh@10 -- # set +x 00:25:24.970 05:21:02 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:24.970 { 00:25:24.970 "params": { 00:25:24.970 "name": "Nvme$subsystem", 00:25:24.970 "trtype": "$TEST_TRANSPORT", 00:25:24.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:24.970 "adrfam": "ipv4", 00:25:24.970 "trsvcid": "$NVMF_PORT", 00:25:24.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:24.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:24.971 "hdgst": ${hdgst:-false}, 00:25:24.971 "ddgst": ${ddgst:-false} 00:25:24.971 }, 00:25:24.971 "method": "bdev_nvme_attach_controller" 00:25:24.971 } 00:25:24.971 EOF 00:25:24.971 )") 00:25:24.971 05:21:02 -- nvmf/common.sh@543 -- # cat 00:25:24.971 05:21:02 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:24.971 05:21:02 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:24.971 { 00:25:24.971 "params": { 00:25:24.971 "name": "Nvme$subsystem", 00:25:24.971 "trtype": "$TEST_TRANSPORT", 00:25:24.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:24.971 "adrfam": "ipv4", 00:25:24.971 "trsvcid": "$NVMF_PORT", 00:25:24.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:24.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:24.971 "hdgst": ${hdgst:-false}, 00:25:24.971 "ddgst": ${ddgst:-false} 00:25:24.971 }, 00:25:24.971 "method": "bdev_nvme_attach_controller" 00:25:24.971 } 00:25:24.971 EOF 00:25:24.971 )") 00:25:24.971 05:21:02 -- nvmf/common.sh@543 -- # cat 00:25:24.971 05:21:02 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:24.971 05:21:02 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:24.971 { 00:25:24.971 "params": { 00:25:24.971 "name": "Nvme$subsystem", 00:25:24.971 "trtype": "$TEST_TRANSPORT", 
00:25:24.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:24.971 "adrfam": "ipv4", 00:25:24.971 "trsvcid": "$NVMF_PORT", 00:25:24.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:24.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:24.971 "hdgst": ${hdgst:-false}, 00:25:24.971 "ddgst": ${ddgst:-false} 00:25:24.971 }, 00:25:24.971 "method": "bdev_nvme_attach_controller" 00:25:24.971 } 00:25:24.971 EOF 00:25:24.971 )") 00:25:24.971 05:21:02 -- nvmf/common.sh@543 -- # cat 00:25:24.971 05:21:02 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:24.971 05:21:02 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:24.971 { 00:25:24.971 "params": { 00:25:24.971 "name": "Nvme$subsystem", 00:25:24.971 "trtype": "$TEST_TRANSPORT", 00:25:24.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:24.971 "adrfam": "ipv4", 00:25:24.971 "trsvcid": "$NVMF_PORT", 00:25:24.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:24.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:24.971 "hdgst": ${hdgst:-false}, 00:25:24.971 "ddgst": ${ddgst:-false} 00:25:24.971 }, 00:25:24.971 "method": "bdev_nvme_attach_controller" 00:25:24.971 } 00:25:24.971 EOF 00:25:24.971 )") 00:25:24.971 05:21:02 -- nvmf/common.sh@543 -- # cat 00:25:24.971 05:21:02 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:24.971 05:21:02 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:24.971 { 00:25:24.971 "params": { 00:25:24.971 "name": "Nvme$subsystem", 00:25:24.971 "trtype": "$TEST_TRANSPORT", 00:25:24.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:24.971 "adrfam": "ipv4", 00:25:24.971 "trsvcid": "$NVMF_PORT", 00:25:24.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:24.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:24.971 "hdgst": ${hdgst:-false}, 00:25:24.971 "ddgst": ${ddgst:-false} 00:25:24.971 }, 00:25:24.971 "method": "bdev_nvme_attach_controller" 00:25:24.971 } 00:25:24.971 EOF 00:25:24.971 )") 00:25:24.971 05:21:02 -- nvmf/common.sh@543 -- 
# cat 00:25:24.971 05:21:02 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:24.971 05:21:02 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:24.971 { 00:25:24.971 "params": { 00:25:24.971 "name": "Nvme$subsystem", 00:25:24.971 "trtype": "$TEST_TRANSPORT", 00:25:24.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:24.971 "adrfam": "ipv4", 00:25:24.971 "trsvcid": "$NVMF_PORT", 00:25:24.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:24.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:24.971 "hdgst": ${hdgst:-false}, 00:25:24.971 "ddgst": ${ddgst:-false} 00:25:24.971 }, 00:25:24.971 "method": "bdev_nvme_attach_controller" 00:25:24.971 } 00:25:24.971 EOF 00:25:24.971 )") 00:25:24.971 05:21:02 -- nvmf/common.sh@543 -- # cat 00:25:24.971 05:21:02 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:24.971 05:21:02 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:24.971 { 00:25:24.971 "params": { 00:25:24.971 "name": "Nvme$subsystem", 00:25:24.971 "trtype": "$TEST_TRANSPORT", 00:25:24.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:24.971 "adrfam": "ipv4", 00:25:24.971 "trsvcid": "$NVMF_PORT", 00:25:24.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:24.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:24.971 "hdgst": ${hdgst:-false}, 00:25:24.971 "ddgst": ${ddgst:-false} 00:25:24.971 }, 00:25:24.971 "method": "bdev_nvme_attach_controller" 00:25:24.971 } 00:25:24.971 EOF 00:25:24.971 )") 00:25:24.971 05:21:02 -- nvmf/common.sh@543 -- # cat 00:25:24.971 05:21:02 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:24.971 05:21:02 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:24.971 { 00:25:24.971 "params": { 00:25:24.971 "name": "Nvme$subsystem", 00:25:24.971 "trtype": "$TEST_TRANSPORT", 00:25:24.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:24.971 "adrfam": "ipv4", 00:25:24.971 "trsvcid": "$NVMF_PORT", 00:25:24.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:24.971 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:25:24.971 "hdgst": ${hdgst:-false}, 00:25:24.971 "ddgst": ${ddgst:-false} 00:25:24.971 }, 00:25:24.971 "method": "bdev_nvme_attach_controller" 00:25:24.971 } 00:25:24.971 EOF 00:25:24.971 )") 00:25:24.971 05:21:02 -- nvmf/common.sh@543 -- # cat 00:25:24.971 05:21:02 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:24.971 05:21:02 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:24.971 { 00:25:24.971 "params": { 00:25:24.971 "name": "Nvme$subsystem", 00:25:24.971 "trtype": "$TEST_TRANSPORT", 00:25:24.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:24.971 "adrfam": "ipv4", 00:25:24.971 "trsvcid": "$NVMF_PORT", 00:25:24.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:24.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:24.971 "hdgst": ${hdgst:-false}, 00:25:24.971 "ddgst": ${ddgst:-false} 00:25:24.971 }, 00:25:24.971 "method": "bdev_nvme_attach_controller" 00:25:24.971 } 00:25:24.971 EOF 00:25:24.971 )") 00:25:24.971 05:21:02 -- nvmf/common.sh@543 -- # cat 00:25:24.971 05:21:02 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:24.971 05:21:02 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:24.971 { 00:25:24.971 "params": { 00:25:24.971 "name": "Nvme$subsystem", 00:25:24.971 "trtype": "$TEST_TRANSPORT", 00:25:24.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:24.971 "adrfam": "ipv4", 00:25:24.971 "trsvcid": "$NVMF_PORT", 00:25:24.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:24.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:24.971 "hdgst": ${hdgst:-false}, 00:25:24.971 "ddgst": ${ddgst:-false} 00:25:24.971 }, 00:25:24.971 "method": "bdev_nvme_attach_controller" 00:25:24.971 } 00:25:24.971 EOF 00:25:24.971 )") 00:25:24.971 05:21:02 -- nvmf/common.sh@543 -- # cat 00:25:24.971 05:21:02 -- nvmf/common.sh@545 -- # jq . 
00:25:24.971 05:21:02 -- nvmf/common.sh@546 -- # IFS=, 00:25:24.971 05:21:02 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:25:24.971 "params": { 00:25:24.971 "name": "Nvme1", 00:25:24.971 "trtype": "tcp", 00:25:24.971 "traddr": "10.0.0.2", 00:25:24.971 "adrfam": "ipv4", 00:25:24.971 "trsvcid": "4420", 00:25:24.971 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:24.971 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:24.971 "hdgst": false, 00:25:24.971 "ddgst": false 00:25:24.971 }, 00:25:24.971 "method": "bdev_nvme_attach_controller" 00:25:24.971 },{ 00:25:24.971 "params": { 00:25:24.971 "name": "Nvme2", 00:25:24.971 "trtype": "tcp", 00:25:24.971 "traddr": "10.0.0.2", 00:25:24.971 "adrfam": "ipv4", 00:25:24.971 "trsvcid": "4420", 00:25:24.971 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:24.971 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:24.971 "hdgst": false, 00:25:24.971 "ddgst": false 00:25:24.971 }, 00:25:24.971 "method": "bdev_nvme_attach_controller" 00:25:24.971 },{ 00:25:24.971 "params": { 00:25:24.971 "name": "Nvme3", 00:25:24.971 "trtype": "tcp", 00:25:24.971 "traddr": "10.0.0.2", 00:25:24.971 "adrfam": "ipv4", 00:25:24.971 "trsvcid": "4420", 00:25:24.971 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:24.971 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:24.971 "hdgst": false, 00:25:24.971 "ddgst": false 00:25:24.971 }, 00:25:24.971 "method": "bdev_nvme_attach_controller" 00:25:24.971 },{ 00:25:24.971 "params": { 00:25:24.971 "name": "Nvme4", 00:25:24.971 "trtype": "tcp", 00:25:24.971 "traddr": "10.0.0.2", 00:25:24.971 "adrfam": "ipv4", 00:25:24.971 "trsvcid": "4420", 00:25:24.971 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:24.971 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:24.971 "hdgst": false, 00:25:24.971 "ddgst": false 00:25:24.971 }, 00:25:24.971 "method": "bdev_nvme_attach_controller" 00:25:24.971 },{ 00:25:24.971 "params": { 00:25:24.971 "name": "Nvme5", 00:25:24.971 "trtype": "tcp", 00:25:24.971 "traddr": "10.0.0.2", 00:25:24.971 "adrfam": "ipv4", 
00:25:24.971 "trsvcid": "4420", 00:25:24.972 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:24.972 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:24.972 "hdgst": false, 00:25:24.972 "ddgst": false 00:25:24.972 }, 00:25:24.972 "method": "bdev_nvme_attach_controller" 00:25:24.972 },{ 00:25:24.972 "params": { 00:25:24.972 "name": "Nvme6", 00:25:24.972 "trtype": "tcp", 00:25:24.972 "traddr": "10.0.0.2", 00:25:24.972 "adrfam": "ipv4", 00:25:24.972 "trsvcid": "4420", 00:25:24.972 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:24.972 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:24.972 "hdgst": false, 00:25:24.972 "ddgst": false 00:25:24.972 }, 00:25:24.972 "method": "bdev_nvme_attach_controller" 00:25:24.972 },{ 00:25:24.972 "params": { 00:25:24.972 "name": "Nvme7", 00:25:24.972 "trtype": "tcp", 00:25:24.972 "traddr": "10.0.0.2", 00:25:24.972 "adrfam": "ipv4", 00:25:24.972 "trsvcid": "4420", 00:25:24.972 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:24.972 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:24.972 "hdgst": false, 00:25:24.972 "ddgst": false 00:25:24.972 }, 00:25:24.972 "method": "bdev_nvme_attach_controller" 00:25:24.972 },{ 00:25:24.972 "params": { 00:25:24.972 "name": "Nvme8", 00:25:24.972 "trtype": "tcp", 00:25:24.972 "traddr": "10.0.0.2", 00:25:24.972 "adrfam": "ipv4", 00:25:24.972 "trsvcid": "4420", 00:25:24.972 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:24.972 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:24.972 "hdgst": false, 00:25:24.972 "ddgst": false 00:25:24.972 }, 00:25:24.972 "method": "bdev_nvme_attach_controller" 00:25:24.972 },{ 00:25:24.972 "params": { 00:25:24.972 "name": "Nvme9", 00:25:24.972 "trtype": "tcp", 00:25:24.972 "traddr": "10.0.0.2", 00:25:24.972 "adrfam": "ipv4", 00:25:24.972 "trsvcid": "4420", 00:25:24.972 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:24.972 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:24.972 "hdgst": false, 00:25:24.972 "ddgst": false 00:25:24.972 }, 00:25:24.972 "method": "bdev_nvme_attach_controller" 
00:25:24.972 },{ 00:25:24.972 "params": { 00:25:24.972 "name": "Nvme10", 00:25:24.972 "trtype": "tcp", 00:25:24.972 "traddr": "10.0.0.2", 00:25:24.972 "adrfam": "ipv4", 00:25:24.972 "trsvcid": "4420", 00:25:24.972 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:24.972 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:24.972 "hdgst": false, 00:25:24.972 "ddgst": false 00:25:24.972 }, 00:25:24.972 "method": "bdev_nvme_attach_controller" 00:25:24.972 }' 00:25:24.972 [2024-04-24 05:21:02.054742] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:25:24.972 [2024-04-24 05:21:02.054818] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1956576 ] 00:25:24.972 EAL: No free 2048 kB hugepages reported on node 1 00:25:24.972 [2024-04-24 05:21:02.089908] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:24.972 [2024-04-24 05:21:02.118985] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.972 [2024-04-24 05:21:02.205396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:26.877 Running I/O for 10 seconds... 
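The `gen_nvmf_target_json` expansion traced above (nvmf/common.sh@521-547) builds one `bdev_nvme_attach_controller` stanza per subsystem id and comma-joins them via `IFS=,` before `jq .` validates the result. A sketch of that generation step, with `$TEST_TRANSPORT`, `$NVMF_FIRST_TARGET_IP`, and `$NVMF_PORT` hardcoded to the values this run used; how the joined stanzas are wrapped into the final bdevperf `--json` document is not shown in this excerpt, so only the joining itself is reproduced:

```shell
# Sketch of gen_nvmf_target_json from nvmf/common.sh: one attach-controller
# stanza per subsystem id, comma-joined with IFS=, as in the printf output
# above. Values (tcp, 10.0.0.2, 4420) are the ones substituted in this run.
gen_target_json_sketch() {
  local config=() subsystem
  for subsystem in "${@:-1}"; do
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
  done
  local IFS=,
  printf '%s\n' "${config[*]}"
}
```

Calling `gen_target_json_sketch 1 2 3 4 5 6 7 8 9 10` reproduces the ten Nvme1..Nvme10 stanzas that bdevperf receives on `/dev/fd/63` in this test.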
00:25:27.159 05:21:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:27.159 05:21:04 -- common/autotest_common.sh@850 -- # return 0 00:25:27.159 05:21:04 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:27.159 05:21:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:27.159 05:21:04 -- common/autotest_common.sh@10 -- # set +x 00:25:27.159 05:21:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:27.159 05:21:04 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:27.159 05:21:04 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:27.159 05:21:04 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:27.159 05:21:04 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:25:27.159 05:21:04 -- target/shutdown.sh@57 -- # local ret=1 00:25:27.159 05:21:04 -- target/shutdown.sh@58 -- # local i 00:25:27.159 05:21:04 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:25:27.159 05:21:04 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:27.159 05:21:04 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:27.159 05:21:04 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:27.159 05:21:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:27.159 05:21:04 -- common/autotest_common.sh@10 -- # set +x 00:25:27.159 05:21:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:27.159 05:21:04 -- target/shutdown.sh@60 -- # read_io_count=3 00:25:27.159 05:21:04 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:25:27.159 05:21:04 -- target/shutdown.sh@67 -- # sleep 0.25 00:25:27.416 05:21:04 -- target/shutdown.sh@59 -- # (( i-- )) 00:25:27.416 05:21:04 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:27.416 05:21:04 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:27.416 
05:21:04 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:27.416 05:21:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:27.416 05:21:04 -- common/autotest_common.sh@10 -- # set +x 00:25:27.416 05:21:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:27.416 05:21:04 -- target/shutdown.sh@60 -- # read_io_count=67 00:25:27.416 05:21:04 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:25:27.416 05:21:04 -- target/shutdown.sh@67 -- # sleep 0.25 00:25:27.694 05:21:04 -- target/shutdown.sh@59 -- # (( i-- )) 00:25:27.694 05:21:04 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:27.694 05:21:04 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:27.694 05:21:04 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:27.694 05:21:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:27.694 05:21:04 -- common/autotest_common.sh@10 -- # set +x 00:25:27.694 05:21:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:27.694 05:21:04 -- target/shutdown.sh@60 -- # read_io_count=131 00:25:27.694 05:21:04 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:25:27.694 05:21:04 -- target/shutdown.sh@64 -- # ret=0 00:25:27.694 05:21:04 -- target/shutdown.sh@65 -- # break 00:25:27.694 05:21:04 -- target/shutdown.sh@69 -- # return 0 00:25:27.694 05:21:04 -- target/shutdown.sh@135 -- # killprocess 1956457 00:25:27.694 05:21:04 -- common/autotest_common.sh@936 -- # '[' -z 1956457 ']' 00:25:27.694 05:21:04 -- common/autotest_common.sh@940 -- # kill -0 1956457 00:25:27.694 05:21:04 -- common/autotest_common.sh@941 -- # uname 00:25:27.694 05:21:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:27.694 05:21:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1956457 00:25:27.694 05:21:04 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:27.694 05:21:04 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:27.694 
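The `waitforio` helper exercised above (target/shutdown.sh@57-69) polls `bdev_get_iostat` for `Nvme1n1` up to 10 times, 0.25 s apart, and succeeds once `num_read_ops` reaches 100 (the samples in this run went 3, 67, 131). A generic sketch of that retry loop; the counter command is caller-supplied here, whereas the real helper extracts the count with `rpc_cmd ... bdev_get_iostat` piped through `jq -r '.bdevs[0].num_read_ops'`:

```shell
# Sketch of the waitforio retry loop from target/shutdown.sh: sample a
# counter up to 10 times, 0.25 s apart; return 0 once it meets the
# threshold, 1 if it never does.
waitforio_sketch() {
  local threshold=${1:?threshold} count_cmd=${2:?command printing the count}
  local i ret=1 read_io_count
  for ((i = 10; i != 0; i--)); do
    read_io_count=$("$count_cmd")
    if [ "$read_io_count" -ge "$threshold" ]; then
      ret=0
      break
    fi
    sleep 0.25
  done
  return $ret
}
```

Bounding the wait at ten samples keeps a stalled bdevperf run from hanging the shutdown test indefinitely; the caller treats a nonzero return as a failed I/O warm-up.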
05:21:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1956457' 00:25:27.694 killing process with pid 1956457 00:25:27.694 05:21:04 -- common/autotest_common.sh@955 -- # kill 1956457 00:25:27.694 05:21:04 -- common/autotest_common.sh@960 -- # wait 1956457 00:25:27.694 [2024-04-24 05:21:04.812390] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ac90 is same with the state(5) to be set 00:25:27.694 [2024-04-24 05:21:04.814607] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d5c0 is same with the state(5) to be set 00:25:27.695 [2024-04-24 05:21:04.816743] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b120 is same with the state(5) to be set 00:25:27.696 [2024-04-24 05:21:04.819371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.697 [2024-04-24 05:21:04.819419] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:27.697 [2024-04-24 05:21:04.819446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:27.697 [2024-04-24 05:21:04.819470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:27.697 [2024-04-24 05:21:04.819495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:27.697 [2024-04-24 05:21:04.819519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:27.697 [2024-04-24 05:21:04.819542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:27.697 [2024-04-24 05:21:04.819565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:27.697 [2024-04-24 05:21:04.819586] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1677b50 is same with the state(5) to be set
00:25:27.697 [2024-04-24 05:21:04.819673 - 05:21:04.819829] (same ASYNC EVENT REQUEST (0c) / ABORTED - SQ DELETION (00/08) sequence for cid 0-3, ending with the same nvme_tcp.c *ERROR* for tqpair=0x11a3da0)
00:25:27.697 [2024-04-24 05:21:04.819899 - 05:21:04.820041] (same sequence, ending with the same nvme_tcp.c *ERROR* for tqpair=0x1816e00)
00:25:27.697 [2024-04-24 05:21:04.820101 - 05:21:04.820232] (same sequence, ending with the same nvme_tcp.c *ERROR* for tqpair=0x12393c0; these records were interleaved with the nvmf_tcp messages below)
00:25:27.697 [2024-04-24 05:21:04.820120] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b5b0 is same with the state(5) to be set
00:25:27.698 [2024-04-24 05:21:04.820979] (previous message repeated for tqpair=0x217b5b0 through this timestamp)
00:25:27.698 [2024-04-24 05:21:04.828034] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ba40 is same with the state(5) to be set
00:25:27.699 [2024-04-24 05:21:04.828879] (previous message repeated for tqpair=0x217ba40 through this timestamp)
00:25:27.699 [2024-04-24 05:21:04.829622] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217bed0 is same with the state(5) to be set
00:25:27.700 [2024-04-24 05:21:04.830435] (previous message repeated for tqpair=0x217bed0 through this timestamp)
00:25:27.700 [2024-04-24 05:21:04.831511] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217c380 is same with the state(5) to be set
00:25:27.700 [2024-04-24 05:21:04.831538] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217c380 is same with the state(5) to be set
00:25:27.700 [2024-04-24 05:21:04.833434] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217cca0 is same with the state(5) to be set
00:25:27.700 [2024-04-24 05:21:04.833782] (previous message repeated for tqpair=0x217cca0 through this timestamp)
00:25:27.700 [2024-04-24 05:21:04.833795]
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217cca0 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.833808] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217cca0 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.833820] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217cca0 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.833833] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217cca0 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.833845] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217cca0 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.833858] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217cca0 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.833871] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217cca0 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.833883] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217cca0 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.833896] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217cca0 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.833908] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217cca0 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.833932] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217cca0 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.833944] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217cca0 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.833957] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217cca0 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.833969] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217cca0 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.833985] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217cca0 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.833998] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217cca0 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.834011] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217cca0 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.834023] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217cca0 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.834035] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217cca0 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.834048] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217cca0 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.834060] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217cca0 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.834073] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217cca0 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.834085] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217cca0 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.834098] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217cca0 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.834110] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217cca0 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.834123] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217cca0 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.834136] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217cca0 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.834148] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217cca0 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.834161] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217cca0 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.834174] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217cca0 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.834187] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217cca0 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.834200] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217cca0 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.834213] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217cca0 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.834225] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217cca0 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.834238] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217cca0 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.834250] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217cca0 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.834262] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217cca0 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.834275] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217cca0 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.835014] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.835039] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.835053] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.835082] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.835095] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.835108] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.835120] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.835133] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.835146] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.835159] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.835171] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.835184] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.835197] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.835209] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.835222] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.700 [2024-04-24 05:21:04.835235] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.835247] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.835260] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.835273] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.835285] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.835306] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.835319] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.835331] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.835344] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.835357] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.835371] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.835384] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.835396] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.835409] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.835423] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.835439] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.835452] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.835465] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.835479] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.835492] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.835515] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.835528] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.835541] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.835553] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.835567] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.835579] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.835592] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.835605] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.835618] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.835639] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.835655] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.835668] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.835688] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.835701] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.835714] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.835726] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.835739] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.835752] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.835764] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.835777] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.835789] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.835802] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.835818] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.835831] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.835844] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.835856] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.835869] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.835881] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d130 is same with the state(5) to be set 00:25:27.701 [2024-04-24 05:21:04.837787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.701 [2024-04-24 05:21:04.837828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.701 [2024-04-24 05:21:04.837856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.701 [2024-04-24 05:21:04.837872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.701 [2024-04-24 05:21:04.837888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.701 [2024-04-24 05:21:04.837903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.701 [2024-04-24 05:21:04.837917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:27.701 [2024-04-24 05:21:04.837930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.701 [2024-04-24 05:21:04.837945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.701 [2024-04-24 05:21:04.837959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.701 [2024-04-24 05:21:04.837974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.701 [2024-04-24 05:21:04.837987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.701 [2024-04-24 05:21:04.838002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.701 [2024-04-24 05:21:04.838015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.701 [2024-04-24 05:21:04.838030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.701 [2024-04-24 05:21:04.838044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.701 [2024-04-24 05:21:04.838058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.701 [2024-04-24 05:21:04.838071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.701 [2024-04-24 05:21:04.838086] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.701 [2024-04-24 05:21:04.838104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.701 [2024-04-24 05:21:04.838121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.701 [2024-04-24 05:21:04.838134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.701 [2024-04-24 05:21:04.838158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.701 [2024-04-24 05:21:04.838171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.701 [2024-04-24 05:21:04.838186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.701 [2024-04-24 05:21:04.838199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.701 [2024-04-24 05:21:04.838214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.701 [2024-04-24 05:21:04.838227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.701 [2024-04-24 05:21:04.838242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.701 [2024-04-24 05:21:04.838255] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.701 [2024-04-24 05:21:04.838270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.701 [2024-04-24 05:21:04.838283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.701 [2024-04-24 05:21:04.838298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.701 [2024-04-24 05:21:04.838311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.701 [2024-04-24 05:21:04.838326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.702 [2024-04-24 05:21:04.838340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.702 [2024-04-24 05:21:04.838355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.702 [2024-04-24 05:21:04.838368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.702 [2024-04-24 05:21:04.838383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.702 [2024-04-24 05:21:04.838396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.702 [2024-04-24 05:21:04.838411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.702 [2024-04-24 05:21:04.838424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.702 [2024-04-24 05:21:04.838439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.702 [2024-04-24 05:21:04.838452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.702 [2024-04-24 05:21:04.838466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.702 [2024-04-24 05:21:04.838484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.702 [2024-04-24 05:21:04.838499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.702 [2024-04-24 05:21:04.838513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.702 [2024-04-24 05:21:04.838528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.702 [2024-04-24 05:21:04.838541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.702 [2024-04-24 05:21:04.838556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.702 [2024-04-24 05:21:04.838569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.702 
[2024-04-24 05:21:04.838584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.702 [2024-04-24 05:21:04.838597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.702 [2024-04-24 05:21:04.838612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.702 [2024-04-24 05:21:04.838625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.702 [2024-04-24 05:21:04.838649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.702 [2024-04-24 05:21:04.838663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.702 [2024-04-24 05:21:04.838678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.702 [2024-04-24 05:21:04.838692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.702 [2024-04-24 05:21:04.838707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.702 [2024-04-24 05:21:04.838721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.702 [2024-04-24 05:21:04.838736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.702 [2024-04-24 05:21:04.838750] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.702 [2024-04-24 05:21:04.838765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.702 [2024-04-24 05:21:04.838778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.702 [2024-04-24 05:21:04.838793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.702 [2024-04-24 05:21:04.838807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.702 [2024-04-24 05:21:04.838822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.702 [2024-04-24 05:21:04.838839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.702 [2024-04-24 05:21:04.838854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.702 [2024-04-24 05:21:04.838868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.702 [2024-04-24 05:21:04.838884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.702 [2024-04-24 05:21:04.838897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.702 [2024-04-24 05:21:04.838912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.702 [2024-04-24 05:21:04.838933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.702 [2024-04-24 05:21:04.838947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.702 [2024-04-24 05:21:04.838961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.702 [2024-04-24 05:21:04.838976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.702 [2024-04-24 05:21:04.838989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.702 [2024-04-24 05:21:04.839004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.702 [2024-04-24 05:21:04.839018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.702 [2024-04-24 05:21:04.839033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.702 [2024-04-24 05:21:04.839046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.702 [2024-04-24 05:21:04.839061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.702 [2024-04-24 05:21:04.839075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:27.702 [2024-04-24 05:21:04.839090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.702 [2024-04-24 05:21:04.839104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.702 [2024-04-24 05:21:04.839119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.702 [2024-04-24 05:21:04.839132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.702 [2024-04-24 05:21:04.839147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.702 [2024-04-24 05:21:04.839161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.702 [2024-04-24 05:21:04.839182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.702 [2024-04-24 05:21:04.839196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.702 [2024-04-24 05:21:04.839214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.702 [2024-04-24 05:21:04.839228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.702 [2024-04-24 05:21:04.839243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.702 [2024-04-24 
05:21:04.839257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.702 [2024-04-24 05:21:04.839272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.702 [2024-04-24 05:21:04.839285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.702 [2024-04-24 05:21:04.839300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.702 [2024-04-24 05:21:04.839314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.702 [2024-04-24 05:21:04.839329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.702 [2024-04-24 05:21:04.839342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.702 [2024-04-24 05:21:04.839358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.702 [2024-04-24 05:21:04.839371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.702 [2024-04-24 05:21:04.839386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.702 [2024-04-24 05:21:04.839399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.702 [2024-04-24 05:21:04.839415] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.702 [2024-04-24 05:21:04.839428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.702 [2024-04-24 05:21:04.839443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.702 [2024-04-24 05:21:04.839457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.702 [2024-04-24 05:21:04.839472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.702 [2024-04-24 05:21:04.839496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.703 [2024-04-24 05:21:04.839511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.703 [2024-04-24 05:21:04.839524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.703 [2024-04-24 05:21:04.839539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.703 [2024-04-24 05:21:04.839559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.703 [2024-04-24 05:21:04.839574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.703 [2024-04-24 05:21:04.839591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.703 [2024-04-24 05:21:04.839607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.703 [2024-04-24 05:21:04.839621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.703 [2024-04-24 05:21:04.839643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.703 [2024-04-24 05:21:04.839658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.703 [2024-04-24 05:21:04.839689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.703 [2024-04-24 05:21:04.839703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.703 [2024-04-24 05:21:04.839718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.703 [2024-04-24 05:21:04.839731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.703 [2024-04-24 05:21:04.839776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.703 [2024-04-24 05:21:04.839860] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18ab730 was disconnected and freed. reset controller. 
00:25:27.703 [2024-04-24 05:21:04.840195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.703 [2024-04-24 05:21:04.840226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.703 [2024-04-24 05:21:04.840253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.703 [2024-04-24 05:21:04.840278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.703 [2024-04-24 05:21:04.840312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.703 [2024-04-24 05:21:04.840336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.703 [2024-04-24 05:21:04.840354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.703 [2024-04-24 05:21:04.840376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.703 [2024-04-24 05:21:04.840389] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e450 is same with the state(5) to be set 00:25:27.703 [2024-04-24 05:21:04.840442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.703 [2024-04-24 05:21:04.840462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.703 [2024-04-24 05:21:04.840477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.703 [2024-04-24 05:21:04.840490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.703 [2024-04-24 05:21:04.840503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.703 [2024-04-24 05:21:04.840522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.703 [2024-04-24 05:21:04.840540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.703 [2024-04-24 05:21:04.840554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.703 [2024-04-24 05:21:04.840566] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178ba30 is same with the state(5) to be set 00:25:27.703 [2024-04-24 05:21:04.840606] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1677b50 (9): Bad file descriptor 00:25:27.703 [2024-04-24 05:21:04.840649] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a3da0 (9): Bad file descriptor 00:25:27.703 [2024-04-24 05:21:04.840709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.703 [2024-04-24 05:21:04.840730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.703 [2024-04-24 05:21:04.840744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.703 [2024-04-24 
05:21:04.840757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.703 [2024-04-24 05:21:04.840771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.703 [2024-04-24 05:21:04.840784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.703 [2024-04-24 05:21:04.840798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.703 [2024-04-24 05:21:04.840811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.703 [2024-04-24 05:21:04.840823] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1670800 is same with the state(5) to be set 00:25:27.703 [2024-04-24 05:21:04.840868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.703 [2024-04-24 05:21:04.840888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.703 [2024-04-24 05:21:04.840903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.703 [2024-04-24 05:21:04.840916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.703 [2024-04-24 05:21:04.840932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.703 [2024-04-24 05:21:04.840944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.703 [2024-04-24 05:21:04.840958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.703 [2024-04-24 05:21:04.840970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.703 [2024-04-24 05:21:04.840982] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bdb0 is same with the state(5) to be set 00:25:27.703 [2024-04-24 05:21:04.841009] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1816e00 (9): Bad file descriptor 00:25:27.703 [2024-04-24 05:21:04.841058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.703 [2024-04-24 05:21:04.841078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.703 [2024-04-24 05:21:04.841106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.703 [2024-04-24 05:21:04.841120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.703 [2024-04-24 05:21:04.841133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.703 [2024-04-24 05:21:04.841146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.703 [2024-04-24 05:21:04.841161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.703 [2024-04-24 05:21:04.841174] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.703 [2024-04-24 05:21:04.841186] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813230 is same with the state(5) to be set 00:25:27.703 [2024-04-24 05:21:04.841206] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12393c0 (9): Bad file descriptor 00:25:27.703 [2024-04-24 05:21:04.841255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.703 [2024-04-24 05:21:04.841284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.703 [2024-04-24 05:21:04.841312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.703 [2024-04-24 05:21:04.841326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.703 [2024-04-24 05:21:04.841339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.704 [2024-04-24 05:21:04.841351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.704 [2024-04-24 05:21:04.841365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.704 [2024-04-24 05:21:04.841377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.704 [2024-04-24 05:21:04.841390] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1803a70 is same with the 
state(5) to be set 00:25:27.704 [2024-04-24 05:21:04.841883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.704 [2024-04-24 05:21:04.841908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.704 [2024-04-24 05:21:04.841929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.704 [2024-04-24 05:21:04.841944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.704 [2024-04-24 05:21:04.841969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.704 [2024-04-24 05:21:04.841983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.704 [2024-04-24 05:21:04.841998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.704 [2024-04-24 05:21:04.842012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.704 [2024-04-24 05:21:04.842035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.704 [2024-04-24 05:21:04.842057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.704 [2024-04-24 05:21:04.842072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.704 [2024-04-24 
05:21:04.842086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.704 [2024-04-24 05:21:04.842111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.704 [2024-04-24 05:21:04.842124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.704 [2024-04-24 05:21:04.842139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.704 [2024-04-24 05:21:04.842153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.704 [2024-04-24 05:21:04.842168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.704 [2024-04-24 05:21:04.842182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.704 [2024-04-24 05:21:04.842197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.704 [2024-04-24 05:21:04.842210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.704 [2024-04-24 05:21:04.842225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.704 [2024-04-24 05:21:04.842238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.704 [2024-04-24 05:21:04.842263] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.704 [2024-04-24 05:21:04.842276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.704 [2024-04-24 05:21:04.842291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.704 [2024-04-24 05:21:04.842305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.704 [2024-04-24 05:21:04.842328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.704 [2024-04-24 05:21:04.842342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.704 [2024-04-24 05:21:04.842356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.704 [2024-04-24 05:21:04.842370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.704 [2024-04-24 05:21:04.842384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.704 [2024-04-24 05:21:04.842398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.704 [2024-04-24 05:21:04.842413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.704 [2024-04-24 05:21:04.842431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.704 [2024-04-24 05:21:04.842446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.704 [2024-04-24 05:21:04.842460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.704 [2024-04-24 05:21:04.842477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.704 [2024-04-24 05:21:04.842490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.704 [2024-04-24 05:21:04.842505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.704 [2024-04-24 05:21:04.842519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.704 [2024-04-24 05:21:04.842541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.704 [2024-04-24 05:21:04.842554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.704 [2024-04-24 05:21:04.842569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.704 [2024-04-24 05:21:04.842583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.704 [2024-04-24 05:21:04.842597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.704 [2024-04-24 05:21:04.842610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.704 [2024-04-24 05:21:04.842626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.704 [2024-04-24 05:21:04.842648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.704 [2024-04-24 05:21:04.842664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.704 [2024-04-24 05:21:04.842683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.704 [2024-04-24 05:21:04.842698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.704 [2024-04-24 05:21:04.842712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.704 [2024-04-24 05:21:04.842727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.704 [2024-04-24 05:21:04.842740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.704 [2024-04-24 05:21:04.842755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.704 [2024-04-24 05:21:04.842769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.704 
[2024-04-24 05:21:04.842783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.704 [2024-04-24 05:21:04.842797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.704 [2024-04-24 05:21:04.842815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.704 [2024-04-24 05:21:04.842829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.704 [2024-04-24 05:21:04.842844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.704 [2024-04-24 05:21:04.842858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.704 [2024-04-24 05:21:04.842873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.704 [2024-04-24 05:21:04.842887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.704 [2024-04-24 05:21:04.842902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.704 [2024-04-24 05:21:04.842916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.704 [2024-04-24 05:21:04.842930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.704 [2024-04-24 05:21:04.842952] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.704 [2024-04-24 05:21:04.842967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.704 [2024-04-24 05:21:04.842981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.704 [2024-04-24 05:21:04.842996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.704 [2024-04-24 05:21:04.843009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.704 [2024-04-24 05:21:04.843024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.704 [2024-04-24 05:21:04.843038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.704 [2024-04-24 05:21:04.843053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.705 [2024-04-24 05:21:04.843066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.705 [2024-04-24 05:21:04.843081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.705 [2024-04-24 05:21:04.843098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.705 [2024-04-24 05:21:04.843113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.705 [2024-04-24 05:21:04.843127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.705 [2024-04-24 05:21:04.843142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.705 [2024-04-24 05:21:04.843162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.705 [2024-04-24 05:21:04.843177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.705 [2024-04-24 05:21:04.843194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.705 [2024-04-24 05:21:04.843210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.705 [2024-04-24 05:21:04.843223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.705 [2024-04-24 05:21:04.843238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.705 [2024-04-24 05:21:04.843251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.705 [2024-04-24 05:21:04.843266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.705 [2024-04-24 05:21:04.843280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:27.705 [2024-04-24 05:21:04.843295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.705 [2024-04-24 05:21:04.843308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.705 [2024-04-24 05:21:04.843323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.705 [2024-04-24 05:21:04.843337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.705 [2024-04-24 05:21:04.843351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.705 [2024-04-24 05:21:04.843365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.705 [2024-04-24 05:21:04.843386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.705 [2024-04-24 05:21:04.843400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.705 [2024-04-24 05:21:04.843415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.705 [2024-04-24 05:21:04.843429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.705 [2024-04-24 05:21:04.843444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.705 [2024-04-24 
05:21:04.843457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.705 [2024-04-24 05:21:04.843472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.705 [2024-04-24 05:21:04.843486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.705 [2024-04-24 05:21:04.843501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.705 [2024-04-24 05:21:04.843522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.705 [2024-04-24 05:21:04.843536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.705 [2024-04-24 05:21:04.843549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.705 [2024-04-24 05:21:04.843568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.705 [2024-04-24 05:21:04.843586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.705 [2024-04-24 05:21:04.843601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.705 [2024-04-24 05:21:04.843614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.705 [2024-04-24 05:21:04.843638] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.705 [2024-04-24 05:21:04.843655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.705 [2024-04-24 05:21:04.843680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.705 [2024-04-24 05:21:04.843694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.705 [2024-04-24 05:21:04.843709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.705 [2024-04-24 05:21:04.843723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.705 [2024-04-24 05:21:04.843738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.705 [2024-04-24 05:21:04.843751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.705 [2024-04-24 05:21:04.843766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.705 [2024-04-24 05:21:04.843779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.705 [2024-04-24 05:21:04.843794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.705 [2024-04-24 05:21:04.843808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.705 [2024-04-24 05:21:04.843823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.705 [2024-04-24 05:21:04.843837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.705 [2024-04-24 05:21:04.843852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.705 [2024-04-24 05:21:04.843865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.705 [2024-04-24 05:21:04.843966] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18aa280 was disconnected and freed. reset controller. 00:25:27.705 [2024-04-24 05:21:04.845277] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:27.705 [2024-04-24 05:21:04.845335] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:25:27.705 [2024-04-24 05:21:04.845378] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1813230 (9): Bad file descriptor 00:25:27.705 [2024-04-24 05:21:04.845484] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:27.705 [2024-04-24 05:21:04.845557] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:27.705 [2024-04-24 05:21:04.847064] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:25:27.705 [2024-04-24 05:21:04.847116] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1803a70 (9): Bad file descriptor 00:25:27.705 [2024-04-24 05:21:04.848142] posix.c:1037:posix_sock_create: *ERROR*: connect() 
failed, errno = 111 00:25:27.705 [2024-04-24 05:21:04.848292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.705 [2024-04-24 05:21:04.848322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1813230 with addr=10.0.0.2, port=4420 00:25:27.705 [2024-04-24 05:21:04.848339] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813230 is same with the state(5) to be set 00:25:27.705 [2024-04-24 05:21:04.848426] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:27.705 [2024-04-24 05:21:04.848906] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:27.705 [2024-04-24 05:21:04.848990] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:27.705 [2024-04-24 05:21:04.849176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.705 [2024-04-24 05:21:04.849334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.705 [2024-04-24 05:21:04.849361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1803a70 with addr=10.0.0.2, port=4420 00:25:27.705 [2024-04-24 05:21:04.849382] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1803a70 is same with the state(5) to be set 00:25:27.705 [2024-04-24 05:21:04.849401] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1813230 (9): Bad file descriptor 00:25:27.705 [2024-04-24 05:21:04.849551] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:27.705 [2024-04-24 05:21:04.849655] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1803a70 (9): Bad file descriptor 00:25:27.705 [2024-04-24 05:21:04.849697] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:25:27.705 [2024-04-24 
05:21:04.849721] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:25:27.705 [2024-04-24 05:21:04.849747] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:25:27.705 [2024-04-24 05:21:04.849858] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:27.705 [2024-04-24 05:21:04.849882] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:25:27.705 [2024-04-24 05:21:04.849895] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:25:27.705 [2024-04-24 05:21:04.849908] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:25:27.705 [2024-04-24 05:21:04.849983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.705 [2024-04-24 05:21:04.850006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.705 [2024-04-24 05:21:04.850033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.706 [2024-04-24 05:21:04.850048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.706 [2024-04-24 05:21:04.850065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.706 [2024-04-24 05:21:04.850078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.706 [2024-04-24 05:21:04.850103] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.706 [2024-04-24 05:21:04.850117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.706 [2024-04-24 05:21:04.850138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.706 [2024-04-24 05:21:04.850152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.706 [2024-04-24 05:21:04.850168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.706 [2024-04-24 05:21:04.850181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.706 [2024-04-24 05:21:04.850197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.706 [2024-04-24 05:21:04.850210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.706 [2024-04-24 05:21:04.850226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.706 [2024-04-24 05:21:04.850240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.706 [2024-04-24 05:21:04.850255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.706 [2024-04-24 05:21:04.850269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.706 [2024-04-24 05:21:04.850284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.706 [2024-04-24 05:21:04.850297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.706 [2024-04-24 05:21:04.850317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.706 [2024-04-24 05:21:04.850330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.706 [2024-04-24 05:21:04.850345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.706 [2024-04-24 05:21:04.850359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.706 [2024-04-24 05:21:04.850376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.706 [2024-04-24 05:21:04.850389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.706 [2024-04-24 05:21:04.850405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.706 [2024-04-24 05:21:04.850418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.706 [2024-04-24 05:21:04.850434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:27.706 [2024-04-24 05:21:04.850447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.706 [2024-04-24 05:21:04.850463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.706 [2024-04-24 05:21:04.850476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.706 [2024-04-24 05:21:04.850491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.706 [2024-04-24 05:21:04.850509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.706 [2024-04-24 05:21:04.850525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.706 [2024-04-24 05:21:04.850538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.706 [2024-04-24 05:21:04.850553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.706 [2024-04-24 05:21:04.850567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.706 [2024-04-24 05:21:04.850588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.706 [2024-04-24 05:21:04.850602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.706 [2024-04-24 05:21:04.850617] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.706 [2024-04-24 05:21:04.850639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.706 [2024-04-24 05:21:04.850656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.706 [2024-04-24 05:21:04.850680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.706 [2024-04-24 05:21:04.850696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.706 [2024-04-24 05:21:04.850709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.706 [2024-04-24 05:21:04.850725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.706 [2024-04-24 05:21:04.850738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.706 [2024-04-24 05:21:04.850753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.706 [2024-04-24 05:21:04.850767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.706 [2024-04-24 05:21:04.850782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.706 [2024-04-24 05:21:04.850796] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.706 [2024-04-24 05:21:04.850811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.706 [2024-04-24 05:21:04.850824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.706 [2024-04-24 05:21:04.850840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.706 [2024-04-24 05:21:04.850853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.706 [2024-04-24 05:21:04.850869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.706 [2024-04-24 05:21:04.850882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.706 [2024-04-24 05:21:04.850901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.706 [2024-04-24 05:21:04.850915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.706 [2024-04-24 05:21:04.850930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.706 [2024-04-24 05:21:04.850944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.706 [2024-04-24 05:21:04.850960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.706 [2024-04-24 05:21:04.850974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.706 [2024-04-24 05:21:04.850992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.706 [2024-04-24 05:21:04.851006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.706 [2024-04-24 05:21:04.851021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.706 [2024-04-24 05:21:04.851035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.706 [2024-04-24 05:21:04.851050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.706 [2024-04-24 05:21:04.851064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.706 [2024-04-24 05:21:04.851079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.706 [2024-04-24 05:21:04.851093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.706 [2024-04-24 05:21:04.851108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.706 [2024-04-24 05:21:04.851122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.706 [2024-04-24 
05:21:04.851137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.706 [2024-04-24 05:21:04.851152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.706 [2024-04-24 05:21:04.851167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.706 [2024-04-24 05:21:04.851181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.706 [2024-04-24 05:21:04.851197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.706 [2024-04-24 05:21:04.851210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.706 [2024-04-24 05:21:04.851225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.706 [2024-04-24 05:21:04.851239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.706 [2024-04-24 05:21:04.851254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.707 [2024-04-24 05:21:04.851272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.707 [2024-04-24 05:21:04.851289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.707 [2024-04-24 05:21:04.851311] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.707 [2024-04-24 05:21:04.851326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.707 [2024-04-24 05:21:04.851340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.707 [2024-04-24 05:21:04.851355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.707 [2024-04-24 05:21:04.851376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.707 [2024-04-24 05:21:04.851392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.707 [2024-04-24 05:21:04.851406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.707 [2024-04-24 05:21:04.851421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.707 [2024-04-24 05:21:04.851435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.707 [2024-04-24 05:21:04.851451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.707 [2024-04-24 05:21:04.851464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.707 [2024-04-24 05:21:04.851479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 
nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.707 [2024-04-24 05:21:04.851493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.707 [2024-04-24 05:21:04.851519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.707 [2024-04-24 05:21:04.851532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.707 [2024-04-24 05:21:04.851547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.707 [2024-04-24 05:21:04.851561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.707 [2024-04-24 05:21:04.851587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.707 [2024-04-24 05:21:04.851601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.707 [2024-04-24 05:21:04.851616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.707 [2024-04-24 05:21:04.851637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.707 [2024-04-24 05:21:04.851654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.707 [2024-04-24 05:21:04.851668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:27.707 [2024-04-24 05:21:04.851692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.707 [2024-04-24 05:21:04.851706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.707 [2024-04-24 05:21:04.851721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.707 [2024-04-24 05:21:04.851735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.707 [2024-04-24 05:21:04.851750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.707 [2024-04-24 05:21:04.851764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.707 [2024-04-24 05:21:04.851779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.707 [2024-04-24 05:21:04.851793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.707 [2024-04-24 05:21:04.851808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.707 [2024-04-24 05:21:04.851822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.707 [2024-04-24 05:21:04.851837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.707 [2024-04-24 05:21:04.851851] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.707 [2024-04-24 05:21:04.851866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.707 [2024-04-24 05:21:04.851879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.707 [2024-04-24 05:21:04.851895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.707 [2024-04-24 05:21:04.851908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.707 [2024-04-24 05:21:04.851923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.707 [2024-04-24 05:21:04.851937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.707 [2024-04-24 05:21:04.851953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.707 [2024-04-24 05:21:04.851967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.707 [2024-04-24 05:21:04.851982] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a7920 is same with the state(5) to be set 00:25:27.707 [2024-04-24 05:21:04.852072] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18a7920 was disconnected and freed. reset controller. 
00:25:27.707 [2024-04-24 05:21:04.852123] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:27.707 [2024-04-24 05:21:04.852152] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x177e450 (9): Bad file descriptor 00:25:27.707 [2024-04-24 05:21:04.852192] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x178ba30 (9): Bad file descriptor 00:25:27.707 [2024-04-24 05:21:04.852241] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1670800 (9): Bad file descriptor 00:25:27.707 [2024-04-24 05:21:04.852269] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168bdb0 (9): Bad file descriptor 00:25:27.707 [2024-04-24 05:21:04.853588] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:25:27.707 [2024-04-24 05:21:04.853671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.707 [2024-04-24 05:21:04.853701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.707 [2024-04-24 05:21:04.853721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.707 [2024-04-24 05:21:04.853736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.707 [2024-04-24 05:21:04.853751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.707 [2024-04-24 05:21:04.853765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.707 [2024-04-24 
05:21:04.853780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.707 [2024-04-24 05:21:04.853793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.707 [2024-04-24 05:21:04.853809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.707 [2024-04-24 05:21:04.853822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.707 [2024-04-24 05:21:04.853838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.707 [2024-04-24 05:21:04.853851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.707 [2024-04-24 05:21:04.853867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.707 [2024-04-24 05:21:04.853881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.707 [2024-04-24 05:21:04.853896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.707 [2024-04-24 05:21:04.853909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.707 [2024-04-24 05:21:04.853925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.707 [2024-04-24 05:21:04.853943] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.707 [2024-04-24 05:21:04.853958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.707 [2024-04-24 05:21:04.853972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.707 [2024-04-24 05:21:04.853987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.707 [2024-04-24 05:21:04.854001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.707 [2024-04-24 05:21:04.854016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.707 [2024-04-24 05:21:04.854035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.707 [2024-04-24 05:21:04.854051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.707 [2024-04-24 05:21:04.854064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.707 [2024-04-24 05:21:04.854080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.707 [2024-04-24 05:21:04.854098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.708 [2024-04-24 05:21:04.854113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.708 [2024-04-24 05:21:04.854127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.708 [2024-04-24 05:21:04.854142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.708 [2024-04-24 05:21:04.854163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.708 [2024-04-24 05:21:04.854178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.708 [2024-04-24 05:21:04.854192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.708 [2024-04-24 05:21:04.854207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.708 [2024-04-24 05:21:04.854220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.708 [2024-04-24 05:21:04.854235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.708 [2024-04-24 05:21:04.854249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.708 [2024-04-24 05:21:04.854264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.708 [2024-04-24 05:21:04.854278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:27.708 [2024-04-24 05:21:04.854293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.708 [2024-04-24 05:21:04.854308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.708 [2024-04-24 05:21:04.854323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.708 [2024-04-24 05:21:04.854336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.708 [2024-04-24 05:21:04.854351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.708 [2024-04-24 05:21:04.854372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.708 [2024-04-24 05:21:04.854387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.708 [2024-04-24 05:21:04.854400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.708 [2024-04-24 05:21:04.854419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.708 [2024-04-24 05:21:04.854433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.708 [2024-04-24 05:21:04.854448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.708 [2024-04-24 05:21:04.854462] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.708 [2024-04-24 05:21:04.854477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.708 [2024-04-24 05:21:04.854492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.708 [2024-04-24 05:21:04.854507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.708 [2024-04-24 05:21:04.854521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.708 [2024-04-24 05:21:04.854536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.708 [2024-04-24 05:21:04.854550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.708 [2024-04-24 05:21:04.854565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.708 [2024-04-24 05:21:04.854578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.708 [2024-04-24 05:21:04.854594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.708 [2024-04-24 05:21:04.854607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.708 [2024-04-24 05:21:04.854622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.708 [2024-04-24 05:21:04.854644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.708 [2024-04-24 05:21:04.854660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.708 [2024-04-24 05:21:04.854676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.708 [2024-04-24 05:21:04.854691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.708 [2024-04-24 05:21:04.854705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.708 [2024-04-24 05:21:04.854720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.708 [2024-04-24 05:21:04.854733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.708 [2024-04-24 05:21:04.854748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.708 [2024-04-24 05:21:04.854762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.708 [2024-04-24 05:21:04.854778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.708 [2024-04-24 05:21:04.854795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:27.708 [2024-04-24 05:21:04.854811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.708 [2024-04-24 05:21:04.854824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.708 [2024-04-24 05:21:04.854840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.708 [2024-04-24 05:21:04.854853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.708 [2024-04-24 05:21:04.854868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.708 [2024-04-24 05:21:04.854882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.708 [2024-04-24 05:21:04.854897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.708 [2024-04-24 05:21:04.854911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.708 [2024-04-24 05:21:04.854937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.708 [2024-04-24 05:21:04.854951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.708 [2024-04-24 05:21:04.854966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.708 [2024-04-24 
05:21:04.854981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.708 [2024-04-24 05:21:04.854996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.708 [2024-04-24 05:21:04.855010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.708 [2024-04-24 05:21:04.855025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.708 [2024-04-24 05:21:04.855038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.708 [2024-04-24 05:21:04.855054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.708 [2024-04-24 05:21:04.855067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.708 [2024-04-24 05:21:04.855082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.708 [2024-04-24 05:21:04.855095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.708 [2024-04-24 05:21:04.855111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.709 [2024-04-24 05:21:04.855124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.709 [2024-04-24 05:21:04.855139] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.709 [2024-04-24 05:21:04.855153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.709 [2024-04-24 05:21:04.855174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.709 [2024-04-24 05:21:04.855189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.709 [2024-04-24 05:21:04.855204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.709 [2024-04-24 05:21:04.855217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.709 [2024-04-24 05:21:04.855232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.709 [2024-04-24 05:21:04.855246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.709 [2024-04-24 05:21:04.855261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.709 [2024-04-24 05:21:04.855274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.709 [2024-04-24 05:21:04.855289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.709 [2024-04-24 05:21:04.855303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.709 [2024-04-24 05:21:04.855317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.709 [2024-04-24 05:21:04.855331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.709 [2024-04-24 05:21:04.855346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.709 [2024-04-24 05:21:04.855359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.709 [2024-04-24 05:21:04.855375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.709 [2024-04-24 05:21:04.855389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.709 [2024-04-24 05:21:04.855404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.709 [2024-04-24 05:21:04.855417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.709 [2024-04-24 05:21:04.855432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.709 [2024-04-24 05:21:04.855446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.709 [2024-04-24 05:21:04.855462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.709 
[2024-04-24 05:21:04.855475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.709 [2024-04-24 05:21:04.855501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.709 [2024-04-24 05:21:04.855514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.709 [2024-04-24 05:21:04.855529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.709 [2024-04-24 05:21:04.855547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.709 [2024-04-24 05:21:04.855569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.709 [2024-04-24 05:21:04.855583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.709 [2024-04-24 05:21:04.855598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.709 [2024-04-24 05:21:04.855611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.709 [2024-04-24 05:21:04.855626] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878e40 is same with the state(5) to be set 00:25:27.709 [2024-04-24 05:21:04.856901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.709 [2024-04-24 05:21:04.856923] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.709 [2024-04-24 05:21:04.856948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.709 [2024-04-24 05:21:04.856963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.709 [2024-04-24 05:21:04.856979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.709 [2024-04-24 05:21:04.856993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.709 [2024-04-24 05:21:04.857008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.709 [2024-04-24 05:21:04.857022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.709 [2024-04-24 05:21:04.857037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.709 [2024-04-24 05:21:04.857051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.709 [2024-04-24 05:21:04.857066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.709 [2024-04-24 05:21:04.857080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.709 [2024-04-24 05:21:04.857102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.709 [2024-04-24 05:21:04.857116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.709 [2024-04-24 05:21:04.857132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.709 [2024-04-24 05:21:04.857145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.709 [2024-04-24 05:21:04.857166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.709 [2024-04-24 05:21:04.857180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.709 [2024-04-24 05:21:04.857196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.709 [2024-04-24 05:21:04.857214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.709 [2024-04-24 05:21:04.857230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.709 [2024-04-24 05:21:04.857244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.709 [2024-04-24 05:21:04.857259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.709 [2024-04-24 05:21:04.857272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:27.709 [2024-04-24 05:21:04.857287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.709 [2024-04-24 05:21:04.857301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.709 [2024-04-24 05:21:04.857320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.709 [2024-04-24 05:21:04.857333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.709 [2024-04-24 05:21:04.857349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.709 [2024-04-24 05:21:04.857362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.709 [2024-04-24 05:21:04.857385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.709 [2024-04-24 05:21:04.857399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.709 [2024-04-24 05:21:04.857414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.709 [2024-04-24 05:21:04.857427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.709 [2024-04-24 05:21:04.857442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.709 [2024-04-24 05:21:04.857456] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.709 [2024-04-24 05:21:04.857471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.709 [2024-04-24 05:21:04.857485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.709 [2024-04-24 05:21:04.857500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.709 [2024-04-24 05:21:04.857514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.709 [2024-04-24 05:21:04.857536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.709 [2024-04-24 05:21:04.857549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.709 [2024-04-24 05:21:04.857564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.709 [2024-04-24 05:21:04.857578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.709 [2024-04-24 05:21:04.857607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.709 [2024-04-24 05:21:04.857621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.710 [2024-04-24 05:21:04.857645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.710 [2024-04-24 05:21:04.857664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.710 [2024-04-24 05:21:04.857680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.710 [2024-04-24 05:21:04.857694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.710 [2024-04-24 05:21:04.857709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.710 [2024-04-24 05:21:04.857729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.710 [2024-04-24 05:21:04.857745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.710 [2024-04-24 05:21:04.857759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.710 [2024-04-24 05:21:04.857773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.710 [2024-04-24 05:21:04.857787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.710 [2024-04-24 05:21:04.857802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.710 [2024-04-24 05:21:04.857816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:27.710 [2024-04-24 05:21:04.857831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.710 [2024-04-24 05:21:04.857844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.710 [2024-04-24 05:21:04.857860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.710 [2024-04-24 05:21:04.857873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.710 [2024-04-24 05:21:04.857888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.710 [2024-04-24 05:21:04.857902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.710 [2024-04-24 05:21:04.857917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.710 [2024-04-24 05:21:04.857930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.710 [2024-04-24 05:21:04.857945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.710 [2024-04-24 05:21:04.857959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.710 [2024-04-24 05:21:04.857980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.710 [2024-04-24 
05:21:04.857998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.710 [2024-04-24 05:21:04.858013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.710 [2024-04-24 05:21:04.858027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.710 [2024-04-24 05:21:04.858042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.710 [2024-04-24 05:21:04.858060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.710 [2024-04-24 05:21:04.858075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.710 [2024-04-24 05:21:04.858088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.710 [2024-04-24 05:21:04.858103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.710 [2024-04-24 05:21:04.858125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.710 [2024-04-24 05:21:04.858140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.710 [2024-04-24 05:21:04.858153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.710 [2024-04-24 05:21:04.858168] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.710 [2024-04-24 05:21:04.858182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.710 [2024-04-24 05:21:04.858197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.710 [2024-04-24 05:21:04.858211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.710 [2024-04-24 05:21:04.858226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.710 [2024-04-24 05:21:04.858240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.710 [2024-04-24 05:21:04.858255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.710 [2024-04-24 05:21:04.858272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.710 [2024-04-24 05:21:04.858287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.710 [2024-04-24 05:21:04.858300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.710 [2024-04-24 05:21:04.858315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.710 [2024-04-24 05:21:04.858337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.710 [2024-04-24 05:21:04.858352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.710 [2024-04-24 05:21:04.858365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.710 [2024-04-24 05:21:04.858383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.710 [2024-04-24 05:21:04.858398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.710 [2024-04-24 05:21:04.858412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.710 [2024-04-24 05:21:04.858427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.710 [2024-04-24 05:21:04.858441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.710 [2024-04-24 05:21:04.858455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.710 [2024-04-24 05:21:04.858470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.710 [2024-04-24 05:21:04.858484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.710 [2024-04-24 05:21:04.858499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.710 
[2024-04-24 05:21:04.858513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.710 [2024-04-24 05:21:04.858528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.710 [2024-04-24 05:21:04.858541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.710 [2024-04-24 05:21:04.858556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.710 [2024-04-24 05:21:04.858570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.710 [2024-04-24 05:21:04.858585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.710 [2024-04-24 05:21:04.858599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.710 [2024-04-24 05:21:04.858613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.710 [2024-04-24 05:21:04.858632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.710 [2024-04-24 05:21:04.858650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.710 [2024-04-24 05:21:04.858664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.710 [2024-04-24 05:21:04.858680] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.710 [2024-04-24 05:21:04.858693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.710 [2024-04-24 05:21:04.858709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.710 [2024-04-24 05:21:04.858722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.710 [2024-04-24 05:21:04.858738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.710 [2024-04-24 05:21:04.858755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.710 [2024-04-24 05:21:04.858770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.710 [2024-04-24 05:21:04.858784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.710 [2024-04-24 05:21:04.858799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.710 [2024-04-24 05:21:04.858813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.710 [2024-04-24 05:21:04.858828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.710 [2024-04-24 05:21:04.858841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-04-24 05:21:04.858856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-04-24 05:21:04.858870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-04-24 05:21:04.858884] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a4f70 is same with the state(5) to be set 00:25:27.711 [2024-04-24 05:21:04.860147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-04-24 05:21:04.860169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-04-24 05:21:04.860189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-04-24 05:21:04.860204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-04-24 05:21:04.860220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-04-24 05:21:04.860233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-04-24 05:21:04.860248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-04-24 05:21:04.860262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:27.711 [2024-04-24 05:21:04.860278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-04-24 05:21:04.860291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-04-24 05:21:04.860306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-04-24 05:21:04.860320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-04-24 05:21:04.860335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-04-24 05:21:04.860348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-04-24 05:21:04.860363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-04-24 05:21:04.860376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-04-24 05:21:04.860396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-04-24 05:21:04.860410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-04-24 05:21:04.860426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-04-24 05:21:04.860440] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-04-24 05:21:04.860455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-04-24 05:21:04.860468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-04-24 05:21:04.860483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-04-24 05:21:04.860505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-04-24 05:21:04.860520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-04-24 05:21:04.860534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-04-24 05:21:04.860549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-04-24 05:21:04.860570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-04-24 05:21:04.860585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-04-24 05:21:04.860599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-04-24 05:21:04.860614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-04-24 05:21:04.860640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-04-24 05:21:04.860670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-04-24 05:21:04.860684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-04-24 05:21:04.860699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-04-24 05:21:04.860712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-04-24 05:21:04.860735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-04-24 05:21:04.860748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-04-24 05:21:04.860764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-04-24 05:21:04.860777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-04-24 05:21:04.860793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-04-24 05:21:04.860810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:27.711 [2024-04-24 05:21:04.860826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-04-24 05:21:04.860839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-04-24 05:21:04.860855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-04-24 05:21:04.860868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-04-24 05:21:04.860891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-04-24 05:21:04.860904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-04-24 05:21:04.860919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-04-24 05:21:04.860933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-04-24 05:21:04.860956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-04-24 05:21:04.860969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-04-24 05:21:04.860985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-04-24 05:21:04.860998] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-04-24 05:21:04.861013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-04-24 05:21:04.861027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-04-24 05:21:04.861041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-04-24 05:21:04.861055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-04-24 05:21:04.861070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-04-24 05:21:04.861083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-04-24 05:21:04.861102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-04-24 05:21:04.861115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-04-24 05:21:04.861130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-04-24 05:21:04.861143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-04-24 05:21:04.861167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-04-24 05:21:04.861180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-04-24 05:21:04.861198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-04-24 05:21:04.861212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-04-24 05:21:04.861227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-04-24 05:21:04.861241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-04-24 05:21:04.861256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-04-24 05:21:04.861270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-04-24 05:21:04.861285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-04-24 05:21:04.861298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-04-24 05:21:04.861320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-04-24 05:21:04.861334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:27.711 [2024-04-24 05:21:04.861349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.712 [2024-04-24 05:21:04.861363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.712 [2024-04-24 05:21:04.861384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.712 [2024-04-24 05:21:04.861398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.712 [2024-04-24 05:21:04.861412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.712 [2024-04-24 05:21:04.861426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.712 [2024-04-24 05:21:04.861441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.712 [2024-04-24 05:21:04.861454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.712 [2024-04-24 05:21:04.861470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.712 [2024-04-24 05:21:04.861483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.712 [2024-04-24 05:21:04.861498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.712 [2024-04-24 
[2024-04-24 05:21:04.861511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-04-24 05:21:04.861529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... identical READ / ABORTED - SQ DELETION (00/08) command/completion pairs repeated for cid:45..63, lba:22144..24448, through 2024-04-24 05:21:04.862143 ...]
[2024-04-24 05:21:04.862164] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a63e0 is same with the state(5) to be set
[2024-04-24 05:21:04.863516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-04-24 05:21:04.863540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) command/completion pairs repeated for cid:1..63, lba:16512..24448, through 2024-04-24 05:21:04.865509 ...]
[2024-04-24 05:21:04.865533] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647ea0 is same with the state(5) to be set
[2024-04-24 05:21:04.867779] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[2024-04-24 05:21:04.867812] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
[2024-04-24 05:21:04.867840] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
[2024-04-24 05:21:04.868252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-04-24 05:21:04.868399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-04-24 05:21:04.868435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1670800 with addr=10.0.0.2, port=4420
[2024-04-24 05:21:04.868451] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1670800 is same with the state(5) to be set
[2024-04-24 05:21:04.868534] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
[2024-04-24 05:21:04.868580] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1670800 (9): Bad file descriptor
[2024-04-24 05:21:04.868959] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
[2024-04-24 05:21:04.869137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-04-24 05:21:04.869268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-04-24 05:21:04.869292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12393c0 with addr=10.0.0.2, port=4420
[2024-04-24 05:21:04.869308] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12393c0 is same with the state(5) to be set
[2024-04-24 05:21:04.869531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-04-24 05:21:04.869777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-04-24 05:21:04.869803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a3da0 with addr=10.0.0.2, port=4420
[2024-04-24 05:21:04.869819] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a3da0 is same with the state(5) to be set
[2024-04-24 05:21:04.869933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-04-24 05:21:04.870069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-04-24 05:21:04.870092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1816e00 with addr=10.0.0.2, port=4420
[2024-04-24 05:21:04.870107] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1816e00 is same with the state(5) to be set
[2024-04-24 05:21:04.870954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-04-24 05:21:04.870979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) command/completion pairs repeated for cid:1..16, lba:16512..18432, through 2024-04-24 05:21:04.871454 ...]
00:25:27.715 [2024-04-24 05:21:04.871470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.715 [2024-04-24 05:21:04.871483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 [2024-04-24 05:21:04.871499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.715 [2024-04-24 05:21:04.871513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 [2024-04-24 05:21:04.871528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.715 [2024-04-24 05:21:04.871542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 [2024-04-24 05:21:04.871557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.715 [2024-04-24 05:21:04.871574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 [2024-04-24 05:21:04.871589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.715 [2024-04-24 05:21:04.871603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 [2024-04-24 05:21:04.871619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.715 [2024-04-24 05:21:04.871645] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 [2024-04-24 05:21:04.871663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.715 [2024-04-24 05:21:04.871677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 [2024-04-24 05:21:04.871692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.715 [2024-04-24 05:21:04.871705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 [2024-04-24 05:21:04.871720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.715 [2024-04-24 05:21:04.871734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 [2024-04-24 05:21:04.871749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.715 [2024-04-24 05:21:04.871763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 [2024-04-24 05:21:04.871778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.715 [2024-04-24 05:21:04.871792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 [2024-04-24 05:21:04.871806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.715 [2024-04-24 05:21:04.871820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 [2024-04-24 05:21:04.871835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.715 [2024-04-24 05:21:04.871848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 [2024-04-24 05:21:04.871863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.715 [2024-04-24 05:21:04.871877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 [2024-04-24 05:21:04.871892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.715 [2024-04-24 05:21:04.871905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 [2024-04-24 05:21:04.871921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.715 [2024-04-24 05:21:04.871934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 [2024-04-24 05:21:04.871953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.715 [2024-04-24 05:21:04.871967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:27.715 [2024-04-24 05:21:04.871983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.715 [2024-04-24 05:21:04.871997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 [2024-04-24 05:21:04.872012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.715 [2024-04-24 05:21:04.872026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 [2024-04-24 05:21:04.872041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.715 [2024-04-24 05:21:04.872055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 [2024-04-24 05:21:04.872070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.715 [2024-04-24 05:21:04.872084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 [2024-04-24 05:21:04.872099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.715 [2024-04-24 05:21:04.872112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 [2024-04-24 05:21:04.872128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.715 [2024-04-24 
05:21:04.872142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 [2024-04-24 05:21:04.872158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.715 [2024-04-24 05:21:04.872171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 [2024-04-24 05:21:04.872186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.715 [2024-04-24 05:21:04.872200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 [2024-04-24 05:21:04.872215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.715 [2024-04-24 05:21:04.872228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 [2024-04-24 05:21:04.872243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.715 [2024-04-24 05:21:04.872257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 [2024-04-24 05:21:04.872272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.715 [2024-04-24 05:21:04.872286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 [2024-04-24 05:21:04.872301] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.715 [2024-04-24 05:21:04.872318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 [2024-04-24 05:21:04.872334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.715 [2024-04-24 05:21:04.872348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 [2024-04-24 05:21:04.872363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.715 [2024-04-24 05:21:04.872376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 [2024-04-24 05:21:04.872392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.715 [2024-04-24 05:21:04.872406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 [2024-04-24 05:21:04.872421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.715 [2024-04-24 05:21:04.872434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 [2024-04-24 05:21:04.872450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.715 [2024-04-24 05:21:04.872464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 [2024-04-24 05:21:04.872479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.715 [2024-04-24 05:21:04.872493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 [2024-04-24 05:21:04.872508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.715 [2024-04-24 05:21:04.872521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 [2024-04-24 05:21:04.872536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.715 [2024-04-24 05:21:04.872549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 [2024-04-24 05:21:04.872565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.716 [2024-04-24 05:21:04.872578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.716 [2024-04-24 05:21:04.872594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.716 [2024-04-24 05:21:04.872607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.716 [2024-04-24 05:21:04.872622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.716 
[2024-04-24 05:21:04.872644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.716 [2024-04-24 05:21:04.872660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.716 [2024-04-24 05:21:04.872674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.716 [2024-04-24 05:21:04.872693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.716 [2024-04-24 05:21:04.872707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.716 [2024-04-24 05:21:04.872722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.716 [2024-04-24 05:21:04.872736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.716 [2024-04-24 05:21:04.872751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.716 [2024-04-24 05:21:04.872765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.716 [2024-04-24 05:21:04.872780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.716 [2024-04-24 05:21:04.872794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.716 [2024-04-24 05:21:04.872809] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.716 [2024-04-24 05:21:04.872823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.716 [2024-04-24 05:21:04.872839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.716 [2024-04-24 05:21:04.872852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.716 [2024-04-24 05:21:04.872866] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a8dd0 is same with the state(5) to be set 00:25:27.716 [2024-04-24 05:21:04.874128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.716 [2024-04-24 05:21:04.874150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.716 [2024-04-24 05:21:04.874170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.716 [2024-04-24 05:21:04.874185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.716 [2024-04-24 05:21:04.874201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.716 [2024-04-24 05:21:04.874216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.716 [2024-04-24 05:21:04.874231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.716 [2024-04-24 05:21:04.874244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.716 [2024-04-24 05:21:04.874259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.716 [2024-04-24 05:21:04.874273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.716 [2024-04-24 05:21:04.874289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.716 [2024-04-24 05:21:04.874302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.716 [2024-04-24 05:21:04.874322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.716 [2024-04-24 05:21:04.874337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.716 [2024-04-24 05:21:04.874352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.716 [2024-04-24 05:21:04.874366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.716 [2024-04-24 05:21:04.874381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.716 [2024-04-24 05:21:04.874394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:27.716 [2024-04-24 05:21:04.874410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.716 [2024-04-24 05:21:04.874423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.716 [2024-04-24 05:21:04.874438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.716 [2024-04-24 05:21:04.874452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.716 [2024-04-24 05:21:04.874467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.716 [2024-04-24 05:21:04.874481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.716 [2024-04-24 05:21:04.874496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.716 [2024-04-24 05:21:04.874509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.716 [2024-04-24 05:21:04.874525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.716 [2024-04-24 05:21:04.874539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.716 [2024-04-24 05:21:04.874554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.716 [2024-04-24 05:21:04.874567] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.716 [2024-04-24 05:21:04.874583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.716 [2024-04-24 05:21:04.874596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.716 [2024-04-24 05:21:04.874611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.716 [2024-04-24 05:21:04.874625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.716 [2024-04-24 05:21:04.874651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.716 [2024-04-24 05:21:04.874665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.716 [2024-04-24 05:21:04.874681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.716 [2024-04-24 05:21:04.874698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.716 [2024-04-24 05:21:04.874714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.716 [2024-04-24 05:21:04.874728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.716 [2024-04-24 05:21:04.874743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.716 [2024-04-24 05:21:04.874757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.716 [2024-04-24 05:21:04.874772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.716 [2024-04-24 05:21:04.874786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.716 [2024-04-24 05:21:04.874801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.716 [2024-04-24 05:21:04.874814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.716 [2024-04-24 05:21:04.874829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.716 [2024-04-24 05:21:04.874842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.716 [2024-04-24 05:21:04.874858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.716 [2024-04-24 05:21:04.874871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.716 [2024-04-24 05:21:04.874886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.716 [2024-04-24 05:21:04.874899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:27.716 [2024-04-24 05:21:04.874914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.716 [2024-04-24 05:21:04.874927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.716 [2024-04-24 05:21:04.874943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.716 [2024-04-24 05:21:04.874956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.716 [2024-04-24 05:21:04.874971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.716 [2024-04-24 05:21:04.874984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.716 [2024-04-24 05:21:04.874999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.716 [2024-04-24 05:21:04.875013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.717 [2024-04-24 05:21:04.875028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.717 [2024-04-24 05:21:04.875041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.717 [2024-04-24 05:21:04.875056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.717 [2024-04-24 
05:21:04.875073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:27.717 [2024-04-24 05:21:04.875089-876012] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:32-63 nsid:1 lba:20480-24448 (stride 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 (32 command/completion pairs, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0)
00:25:27.717 [2024-04-24 05:21:04.876027] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18acbe0 is same with the state(5) to be set
00:25:27.718 [2024-04-24 05:21:04.877257-879126] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 (stride 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 (64 command/completion pairs, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0)
00:25:27.719 [2024-04-24 05:21:04.879140] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1646a30 is same with the state(5) to be set
00:25:27.719 [2024-04-24 05:21:04.881049] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:25:27.719 [2024-04-24 05:21:04.881083] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:25:27.719 [2024-04-24 05:21:04.881102] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:25:27.719 [2024-04-24 05:21:04.881120] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
task offset: 24704 on job bdev=Nvme7n1 fails

Latency(us)
All jobs: Core Mask 0x1, workload: verify, depth: 64, IO size: 65536; Verification LBA range: start 0x0, length 0x400; all ended with error.

Device    runtime(s)  IOPS     MiB/s   Fail/s  TO/s  Average    min       max
Nvme1n1   0.91        141.01   8.81    70.51   0.00  299152.62  26991.12  262532.36
Nvme2n1   0.91        140.51   8.78    70.25   0.00  294159.80  21359.88  265639.25
Nvme3n1   0.91        140.01   8.75    70.00   0.00  289036.01  17961.72  259425.47
Nvme4n1   0.90        212.28   13.27   70.76   0.00  209765.17  10922.67  256318.58
Nvme5n1   0.92        138.39   8.65    69.20   0.00  280687.00  17864.63  287387.50
Nvme6n1   0.90        213.84   13.37   71.28   0.00  199162.22  8495.41   243891.01
Nvme7n1   0.90        214.28   13.39   71.43   0.00  194293.10  6505.05   248551.35
Nvme8n1   0.93        137.92   8.62    68.96   0.00  263867.10  16019.91  245444.46
Nvme9n1   0.93        137.46   8.59    68.73   0.00  259192.86  21748.24  276513.37
Nvme10n1  0.92        139.49   8.72    69.75   0.00  248914.43  19612.25  288940.94
===================================================================================
Total                 1615.21  100.95  700.87  0.00  249027.62  6505.05   288940.94

00:25:27.719 [2024-04-24 05:21:04.907565] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:25:27.719 [2024-04-24 05:21:04.907699] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:25:27.720 [2024-04-24 05:21:04.908050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:27.720 [2024-04-24 05:21:04.908211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:27.720 [2024-04-24 05:21:04.908238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1677b50 with addr=10.0.0.2, port=4420
00:25:27.720 [2024-04-24 05:21:04.908258] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1677b50 is same with the state(5) to be set
00:25:27.720 [2024-04-24 05:21:04.908286] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12393c0 (9): Bad file descriptor
00:25:27.720 [2024-04-24 05:21:04.908310] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a3da0 (9): Bad file descriptor
00:25:27.720 [2024-04-24 05:21:04.908339] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1816e00 (9): Bad file descriptor
00:25:27.720 [2024-04-24 05:21:04.908357] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:25:27.720 [2024-04-24 05:21:04.908370] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:25:27.720 [2024-04-24 05:21:04.908387] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:25:27.720 [2024-04-24 05:21:04.908457] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:25:27.720 [2024-04-24 05:21:04.908481] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:25:27.720 [2024-04-24 05:21:04.908500] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:25:27.720 [2024-04-24 05:21:04.908518] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:25:27.720 [2024-04-24 05:21:04.908539] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1677b50 (9): Bad file descriptor
00:25:27.720 [2024-04-24 05:21:04.908713] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:27.720 [2024-04-24 05:21:04.908878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.720 [2024-04-24 05:21:04.909011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.720 [2024-04-24 05:21:04.909036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1813230 with addr=10.0.0.2, port=4420 00:25:27.720 [2024-04-24 05:21:04.909052] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813230 is same with the state(5) to be set 00:25:27.720 [2024-04-24 05:21:04.909197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.720 [2024-04-24 05:21:04.909325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.720 [2024-04-24 05:21:04.909349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1803a70 with addr=10.0.0.2, port=4420 00:25:27.720 [2024-04-24 05:21:04.909365] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1803a70 is same with the state(5) to be set 00:25:27.720 [2024-04-24 05:21:04.909477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.720 [2024-04-24 05:21:04.909599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.720 [2024-04-24 05:21:04.909623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168bdb0 with addr=10.0.0.2, port=4420 00:25:27.720 [2024-04-24 05:21:04.909648] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bdb0 is same with the state(5) to be set 00:25:27.720 [2024-04-24 05:21:04.909765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.720 [2024-04-24 05:21:04.909896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.720 [2024-04-24 05:21:04.909921] 
nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x177e450 with addr=10.0.0.2, port=4420 00:25:27.720 [2024-04-24 05:21:04.909936] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e450 is same with the state(5) to be set 00:25:27.720 [2024-04-24 05:21:04.910059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.720 [2024-04-24 05:21:04.910173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.720 [2024-04-24 05:21:04.910197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178ba30 with addr=10.0.0.2, port=4420 00:25:27.720 [2024-04-24 05:21:04.910213] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178ba30 is same with the state(5) to be set 00:25:27.720 [2024-04-24 05:21:04.910229] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:27.720 [2024-04-24 05:21:04.910247] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:27.720 [2024-04-24 05:21:04.910260] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:27.720 [2024-04-24 05:21:04.910279] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:25:27.720 [2024-04-24 05:21:04.910293] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:25:27.720 [2024-04-24 05:21:04.910305] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:25:27.720 [2024-04-24 05:21:04.910321] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:25:27.720 [2024-04-24 05:21:04.910334] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:25:27.720 [2024-04-24 05:21:04.910347] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:25:27.720 [2024-04-24 05:21:04.910386] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:27.720 [2024-04-24 05:21:04.910409] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:27.720 [2024-04-24 05:21:04.910427] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:27.720 [2024-04-24 05:21:04.910445] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:27.720 [2024-04-24 05:21:04.911315] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:27.720 [2024-04-24 05:21:04.911338] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:27.720 [2024-04-24 05:21:04.911351] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:27.720 [2024-04-24 05:21:04.911372] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1813230 (9): Bad file descriptor 00:25:27.720 [2024-04-24 05:21:04.911392] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1803a70 (9): Bad file descriptor 00:25:27.720 [2024-04-24 05:21:04.911409] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168bdb0 (9): Bad file descriptor 00:25:27.720 [2024-04-24 05:21:04.911425] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x177e450 (9): Bad file descriptor 00:25:27.720 [2024-04-24 05:21:04.911442] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x178ba30 (9): Bad file descriptor 00:25:27.720 [2024-04-24 05:21:04.911456] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:25:27.720 [2024-04-24 05:21:04.911468] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:25:27.720 [2024-04-24 05:21:04.911481] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:25:27.720 [2024-04-24 05:21:04.911840] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:25:27.720 [2024-04-24 05:21:04.911869] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:27.720 [2024-04-24 05:21:04.911894] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:25:27.720 [2024-04-24 05:21:04.911910] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:25:27.720 [2024-04-24 05:21:04.911923] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
00:25:27.720 [2024-04-24 05:21:04.911939] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:25:27.720 [2024-04-24 05:21:04.911953] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:25:27.720 [2024-04-24 05:21:04.911970] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:25:27.720 [2024-04-24 05:21:04.911986] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:25:27.720 [2024-04-24 05:21:04.911999] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:25:27.720 [2024-04-24 05:21:04.912011] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:25:27.720 [2024-04-24 05:21:04.912027] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:25:27.720 [2024-04-24 05:21:04.912040] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:25:27.720 [2024-04-24 05:21:04.912052] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:25:27.720 [2024-04-24 05:21:04.912067] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:25:27.720 [2024-04-24 05:21:04.912079] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:25:27.720 [2024-04-24 05:21:04.912091] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:25:27.720 [2024-04-24 05:21:04.912462] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:27.720 [2024-04-24 05:21:04.912485] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:27.720 [2024-04-24 05:21:04.912497] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:27.720 [2024-04-24 05:21:04.912508] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:27.720 [2024-04-24 05:21:04.912519] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:27.720 [2024-04-24 05:21:04.912648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.720 [2024-04-24 05:21:04.912773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.720 [2024-04-24 05:21:04.912799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1670800 with addr=10.0.0.2, port=4420 00:25:27.720 [2024-04-24 05:21:04.912815] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1670800 is same with the state(5) to be set 00:25:27.720 [2024-04-24 05:21:04.912857] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1670800 (9): Bad file descriptor 00:25:27.720 [2024-04-24 05:21:04.912915] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:25:27.720 [2024-04-24 05:21:04.912936] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:25:27.720 [2024-04-24 05:21:04.912950] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:25:27.720 [2024-04-24 05:21:04.912987] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:28.289 05:21:05 -- target/shutdown.sh@136 -- # nvmfpid= 00:25:28.289 05:21:05 -- target/shutdown.sh@139 -- # sleep 1 00:25:29.226 05:21:06 -- target/shutdown.sh@142 -- # kill -9 1956576 00:25:29.226 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1956576) - No such process 00:25:29.226 05:21:06 -- target/shutdown.sh@142 -- # true 00:25:29.226 05:21:06 -- target/shutdown.sh@144 -- # stoptarget 00:25:29.226 05:21:06 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:25:29.226 05:21:06 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:29.226 05:21:06 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:29.226 05:21:06 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:29.226 05:21:06 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:29.226 05:21:06 -- nvmf/common.sh@117 -- # sync 00:25:29.226 05:21:06 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:29.226 05:21:06 -- nvmf/common.sh@120 -- # set +e 00:25:29.226 05:21:06 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:29.226 05:21:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:29.226 rmmod nvme_tcp 00:25:29.226 rmmod nvme_fabrics 00:25:29.226 rmmod nvme_keyring 00:25:29.226 05:21:06 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:29.226 05:21:06 -- nvmf/common.sh@124 -- # set -e 00:25:29.226 05:21:06 -- nvmf/common.sh@125 -- # return 0 00:25:29.226 05:21:06 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:25:29.226 05:21:06 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:29.226 05:21:06 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:29.226 05:21:06 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:29.226 05:21:06 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:29.226 05:21:06 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:29.226 05:21:06 -- 
nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:29.226 05:21:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:29.226 05:21:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.761 05:21:08 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:31.761 00:25:31.761 real 0m7.574s 00:25:31.761 user 0m18.861s 00:25:31.761 sys 0m1.402s 00:25:31.761 05:21:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:31.761 05:21:08 -- common/autotest_common.sh@10 -- # set +x 00:25:31.761 ************************************ 00:25:31.761 END TEST nvmf_shutdown_tc3 00:25:31.761 ************************************ 00:25:31.761 05:21:08 -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:25:31.761 00:25:31.761 real 0m27.365s 00:25:31.761 user 1m15.969s 00:25:31.761 sys 0m6.221s 00:25:31.761 05:21:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:31.761 05:21:08 -- common/autotest_common.sh@10 -- # set +x 00:25:31.761 ************************************ 00:25:31.761 END TEST nvmf_shutdown 00:25:31.761 ************************************ 00:25:31.761 05:21:08 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:25:31.761 05:21:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:31.761 05:21:08 -- common/autotest_common.sh@10 -- # set +x 00:25:31.761 05:21:08 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:25:31.761 05:21:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:31.761 05:21:08 -- common/autotest_common.sh@10 -- # set +x 00:25:31.761 05:21:08 -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:25:31.761 05:21:08 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:31.761 05:21:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:31.761 05:21:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:31.761 05:21:08 -- common/autotest_common.sh@10 -- # set 
+x 00:25:31.761 ************************************ 00:25:31.761 START TEST nvmf_multicontroller 00:25:31.761 ************************************ 00:25:31.761 05:21:08 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:31.761 * Looking for test storage... 00:25:31.761 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:31.761 05:21:08 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:31.761 05:21:08 -- nvmf/common.sh@7 -- # uname -s 00:25:31.761 05:21:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:31.761 05:21:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:31.761 05:21:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:31.761 05:21:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:31.761 05:21:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:31.761 05:21:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:31.761 05:21:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:31.761 05:21:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:31.761 05:21:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:31.761 05:21:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:31.761 05:21:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:31.761 05:21:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:31.761 05:21:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:31.761 05:21:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:31.761 05:21:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:31.761 05:21:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:31.761 05:21:08 -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:31.761 05:21:08 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:31.761 05:21:08 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:31.761 05:21:08 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:31.761 05:21:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.761 05:21:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.761 05:21:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.761 05:21:08 -- paths/export.sh@5 -- # export PATH 00:25:31.761 05:21:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.761 05:21:08 -- nvmf/common.sh@47 -- # : 0 00:25:31.761 05:21:08 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:31.761 05:21:08 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:31.761 05:21:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:31.762 05:21:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:31.762 05:21:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:31.762 05:21:08 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:31.762 05:21:08 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:31.762 05:21:08 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:31.762 05:21:08 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:31.762 05:21:08 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:31.762 05:21:08 -- 
host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:25:31.762 05:21:08 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:25:31.762 05:21:08 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:31.762 05:21:08 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:25:31.762 05:21:08 -- host/multicontroller.sh@23 -- # nvmftestinit 00:25:31.762 05:21:08 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:31.762 05:21:08 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:31.762 05:21:08 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:31.762 05:21:08 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:31.762 05:21:08 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:31.762 05:21:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:31.762 05:21:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:31.762 05:21:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.762 05:21:08 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:31.762 05:21:08 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:31.762 05:21:08 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:31.762 05:21:08 -- common/autotest_common.sh@10 -- # set +x 00:25:33.665 05:21:10 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:33.665 05:21:10 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:33.665 05:21:10 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:33.665 05:21:10 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:33.665 05:21:10 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:33.665 05:21:10 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:33.665 05:21:10 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:33.665 05:21:10 -- nvmf/common.sh@295 -- # net_devs=() 00:25:33.665 05:21:10 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:33.665 05:21:10 -- nvmf/common.sh@296 -- # e810=() 00:25:33.665 05:21:10 -- nvmf/common.sh@296 -- # local 
-ga e810 00:25:33.665 05:21:10 -- nvmf/common.sh@297 -- # x722=() 00:25:33.665 05:21:10 -- nvmf/common.sh@297 -- # local -ga x722 00:25:33.665 05:21:10 -- nvmf/common.sh@298 -- # mlx=() 00:25:33.665 05:21:10 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:33.665 05:21:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:33.665 05:21:10 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:33.665 05:21:10 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:33.665 05:21:10 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:33.665 05:21:10 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:33.665 05:21:10 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:33.665 05:21:10 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:33.665 05:21:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:33.665 05:21:10 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:33.665 05:21:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:33.665 05:21:10 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:33.665 05:21:10 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:33.665 05:21:10 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:33.665 05:21:10 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:33.665 05:21:10 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:33.665 05:21:10 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:33.665 05:21:10 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:33.665 05:21:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:33.666 05:21:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:33.666 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:33.666 05:21:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:33.666 05:21:10 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:33.666 05:21:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:33.666 05:21:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:33.666 05:21:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:33.666 05:21:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:33.666 05:21:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:33.666 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:33.666 05:21:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:33.666 05:21:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:33.666 05:21:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:33.666 05:21:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:33.666 05:21:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:33.666 05:21:10 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:33.666 05:21:10 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:33.666 05:21:10 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:33.666 05:21:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:33.666 05:21:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:33.666 05:21:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:33.666 05:21:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:33.666 05:21:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:33.666 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:33.666 05:21:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:33.666 05:21:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:33.666 05:21:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:33.666 05:21:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:33.666 05:21:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:33.666 05:21:10 -- nvmf/common.sh@389 -- # echo 
'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:33.666 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:33.666 05:21:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:33.666 05:21:10 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:33.666 05:21:10 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:33.666 05:21:10 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:33.666 05:21:10 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:33.666 05:21:10 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:33.666 05:21:10 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:33.666 05:21:10 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:33.666 05:21:10 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:33.666 05:21:10 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:33.666 05:21:10 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:33.666 05:21:10 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:33.666 05:21:10 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:33.666 05:21:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:33.666 05:21:10 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:33.666 05:21:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:33.666 05:21:10 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:33.666 05:21:10 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:33.666 05:21:10 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:33.666 05:21:10 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:33.666 05:21:10 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:33.666 05:21:10 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:33.666 05:21:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:33.666 05:21:10 -- nvmf/common.sh@261 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:25:33.666 05:21:10 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:33.666 05:21:10 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:33.666 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:33.666 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:25:33.666 00:25:33.666 --- 10.0.0.2 ping statistics --- 00:25:33.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.666 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:25:33.666 05:21:10 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:33.666 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:33.666 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:25:33.666 00:25:33.666 --- 10.0.0.1 ping statistics --- 00:25:33.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.666 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:25:33.666 05:21:10 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:33.666 05:21:10 -- nvmf/common.sh@411 -- # return 0 00:25:33.666 05:21:10 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:33.666 05:21:10 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:33.666 05:21:10 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:33.666 05:21:10 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:33.666 05:21:10 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:33.666 05:21:10 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:33.666 05:21:10 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:33.666 05:21:10 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:25:33.666 05:21:10 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:33.666 05:21:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:33.666 05:21:10 -- common/autotest_common.sh@10 -- # set +x 00:25:33.666 05:21:10 -- nvmf/common.sh@470 -- # nvmfpid=1959615 00:25:33.666 
05:21:10 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:33.666 05:21:10 -- nvmf/common.sh@471 -- # waitforlisten 1959615 00:25:33.666 05:21:10 -- common/autotest_common.sh@817 -- # '[' -z 1959615 ']' 00:25:33.666 05:21:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:33.666 05:21:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:33.666 05:21:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:33.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:33.666 05:21:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:33.666 05:21:10 -- common/autotest_common.sh@10 -- # set +x 00:25:33.666 [2024-04-24 05:21:10.846706] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:25:33.666 [2024-04-24 05:21:10.846799] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:33.666 EAL: No free 2048 kB hugepages reported on node 1 00:25:33.666 [2024-04-24 05:21:10.891576] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:33.666 [2024-04-24 05:21:10.918817] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:33.925 [2024-04-24 05:21:11.003752] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:33.925 [2024-04-24 05:21:11.003814] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
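The `waitforlisten 1959615` call above blocks until the target is up on `/var/tmp/spdk.sock`. A minimal sketch of that polling pattern, under the assumption of a simple path-existence check with a retry cap (the real helper additionally confirms the process is alive and answers an RPC):

```shell
# Minimal sketch of the waitforlisten pattern: poll until a UNIX-domain
# socket path appears, bounded by a retry count. The real SPDK helper
# also checks the pid and issues an RPC; this only watches the path.
wait_for_socket() {
    local path=$1 max_retries=${2:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        [ -e "$path" ] && return 0
        sleep 0.1
    done
    return 1
}
```

Usage mirrors the log: `wait_for_socket /var/tmp/spdk.sock` before sending any `rpc_cmd`.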
00:25:33.925 [2024-04-24 05:21:11.003844] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:33.925 [2024-04-24 05:21:11.003856] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:33.925 [2024-04-24 05:21:11.003867] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:33.925 [2024-04-24 05:21:11.003960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:33.925 [2024-04-24 05:21:11.004010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:33.925 [2024-04-24 05:21:11.004012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:33.925 05:21:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:33.925 05:21:11 -- common/autotest_common.sh@850 -- # return 0 00:25:33.925 05:21:11 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:33.925 05:21:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:33.925 05:21:11 -- common/autotest_common.sh@10 -- # set +x 00:25:33.925 05:21:11 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:33.925 05:21:11 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:33.925 05:21:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:33.925 05:21:11 -- common/autotest_common.sh@10 -- # set +x 00:25:33.925 [2024-04-24 05:21:11.153240] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:33.925 05:21:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:33.925 05:21:11 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:33.925 05:21:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:33.925 05:21:11 -- common/autotest_common.sh@10 -- # set +x 00:25:34.184 Malloc0 00:25:34.184 05:21:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.184 05:21:11 -- 
host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:34.184 05:21:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.184 05:21:11 -- common/autotest_common.sh@10 -- # set +x 00:25:34.184 05:21:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.184 05:21:11 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:34.184 05:21:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.184 05:21:11 -- common/autotest_common.sh@10 -- # set +x 00:25:34.184 05:21:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.184 05:21:11 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:34.184 05:21:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.184 05:21:11 -- common/autotest_common.sh@10 -- # set +x 00:25:34.184 [2024-04-24 05:21:11.220579] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:34.184 05:21:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.184 05:21:11 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:34.184 05:21:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.184 05:21:11 -- common/autotest_common.sh@10 -- # set +x 00:25:34.184 [2024-04-24 05:21:11.228481] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:34.184 05:21:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.184 05:21:11 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:34.184 05:21:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.184 05:21:11 -- common/autotest_common.sh@10 -- # set +x 00:25:34.184 Malloc1 00:25:34.184 05:21:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:25:34.184 05:21:11 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:25:34.184 05:21:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.184 05:21:11 -- common/autotest_common.sh@10 -- # set +x 00:25:34.184 05:21:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.184 05:21:11 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:25:34.184 05:21:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.184 05:21:11 -- common/autotest_common.sh@10 -- # set +x 00:25:34.184 05:21:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.184 05:21:11 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:34.184 05:21:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.184 05:21:11 -- common/autotest_common.sh@10 -- # set +x 00:25:34.184 05:21:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.184 05:21:11 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:25:34.184 05:21:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.184 05:21:11 -- common/autotest_common.sh@10 -- # set +x 00:25:34.184 05:21:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.184 05:21:11 -- host/multicontroller.sh@44 -- # bdevperf_pid=1959743 00:25:34.184 05:21:11 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:25:34.184 05:21:11 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:34.184 05:21:11 -- host/multicontroller.sh@47 -- # waitforlisten 1959743 /var/tmp/bdevperf.sock 00:25:34.184 05:21:11 -- 
common/autotest_common.sh@817 -- # '[' -z 1959743 ']' 00:25:34.184 05:21:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:34.184 05:21:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:34.184 05:21:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:34.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:34.184 05:21:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:34.184 05:21:11 -- common/autotest_common.sh@10 -- # set +x 00:25:34.443 05:21:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:34.443 05:21:11 -- common/autotest_common.sh@850 -- # return 0 00:25:34.443 05:21:11 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:25:34.443 05:21:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.443 05:21:11 -- common/autotest_common.sh@10 -- # set +x 00:25:34.443 NVMe0n1 00:25:34.443 05:21:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.443 05:21:11 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:34.443 05:21:11 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:25:34.443 05:21:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.443 05:21:11 -- common/autotest_common.sh@10 -- # set +x 00:25:34.443 05:21:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.443 1 00:25:34.443 05:21:11 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:34.443 05:21:11 -- common/autotest_common.sh@638 -- # local es=0 00:25:34.443 05:21:11 -- 
common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:34.443 05:21:11 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:25:34.443 05:21:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:34.443 05:21:11 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:25:34.443 05:21:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:34.443 05:21:11 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:34.443 05:21:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.443 05:21:11 -- common/autotest_common.sh@10 -- # set +x 00:25:34.443 request: 00:25:34.443 { 00:25:34.443 "name": "NVMe0", 00:25:34.443 "trtype": "tcp", 00:25:34.443 "traddr": "10.0.0.2", 00:25:34.443 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:25:34.443 "hostaddr": "10.0.0.2", 00:25:34.443 "hostsvcid": "60000", 00:25:34.443 "adrfam": "ipv4", 00:25:34.443 "trsvcid": "4420", 00:25:34.443 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:34.443 "method": "bdev_nvme_attach_controller", 00:25:34.443 "req_id": 1 00:25:34.443 } 00:25:34.443 Got JSON-RPC error response 00:25:34.443 response: 00:25:34.443 { 00:25:34.443 "code": -114, 00:25:34.443 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:25:34.443 } 00:25:34.443 05:21:11 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:25:34.443 05:21:11 -- common/autotest_common.sh@641 -- # es=1 00:25:34.443 05:21:11 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:34.443 05:21:11 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:34.443 05:21:11 -- common/autotest_common.sh@665 -- # (( !es 
== 0 )) 00:25:34.443 05:21:11 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:34.443 05:21:11 -- common/autotest_common.sh@638 -- # local es=0 00:25:34.443 05:21:11 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:34.443 05:21:11 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:25:34.443 05:21:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:34.443 05:21:11 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:25:34.443 05:21:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:34.443 05:21:11 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:34.443 05:21:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.443 05:21:11 -- common/autotest_common.sh@10 -- # set +x 00:25:34.443 request: 00:25:34.443 { 00:25:34.443 "name": "NVMe0", 00:25:34.443 "trtype": "tcp", 00:25:34.443 "traddr": "10.0.0.2", 00:25:34.443 "hostaddr": "10.0.0.2", 00:25:34.443 "hostsvcid": "60000", 00:25:34.443 "adrfam": "ipv4", 00:25:34.443 "trsvcid": "4420", 00:25:34.443 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:34.443 "method": "bdev_nvme_attach_controller", 00:25:34.443 "req_id": 1 00:25:34.443 } 00:25:34.443 Got JSON-RPC error response 00:25:34.443 response: 00:25:34.443 { 00:25:34.443 "code": -114, 00:25:34.443 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:25:34.444 } 00:25:34.444 05:21:11 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:25:34.444 05:21:11 -- common/autotest_common.sh@641 -- # es=1 
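The `NOT rpc_cmd ... bdev_nvme_attach_controller ...` invocations above assert that attaching a duplicate controller name fails with the expected JSON-RPC error. A hedged sketch of that inversion helper: the real `NOT` in autotest_common.sh also special-cases exit codes above 128 (signals), which is omitted here.

```shell
# Sketch of the NOT wrapper used in the log: succeed only when the
# wrapped command fails. The real autotest helper also distinguishes
# signal exits (es > 128); this minimal version just inverts the status.
NOT() {
    local es=0
    "$@" || es=$?
    # Exit status 0 becomes failure; any nonzero status becomes success.
    (( es != 0 ))
}
```

So `NOT false` exits 0 and `NOT true` exits 1, which is exactly the shape of the `es=1` / `(( !es == 0 ))` bookkeeping traced above.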
00:25:34.444 05:21:11 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:34.444 05:21:11 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:34.444 05:21:11 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:34.444 05:21:11 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:34.444 05:21:11 -- common/autotest_common.sh@638 -- # local es=0 00:25:34.444 05:21:11 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:34.444 05:21:11 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:25:34.444 05:21:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:34.444 05:21:11 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:25:34.444 05:21:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:34.444 05:21:11 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:34.444 05:21:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.444 05:21:11 -- common/autotest_common.sh@10 -- # set +x 00:25:34.444 request: 00:25:34.444 { 00:25:34.444 "name": "NVMe0", 00:25:34.444 "trtype": "tcp", 00:25:34.444 "traddr": "10.0.0.2", 00:25:34.444 "hostaddr": "10.0.0.2", 00:25:34.444 "hostsvcid": "60000", 00:25:34.444 "adrfam": "ipv4", 00:25:34.444 "trsvcid": "4420", 00:25:34.444 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:34.444 "multipath": "disable", 00:25:34.702 "method": "bdev_nvme_attach_controller", 00:25:34.702 "req_id": 1 00:25:34.702 } 00:25:34.702 Got JSON-RPC error response 00:25:34.702 response: 00:25:34.702 { 
00:25:34.702 "code": -114, 00:25:34.702 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:25:34.702 } 00:25:34.702 05:21:11 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:25:34.702 05:21:11 -- common/autotest_common.sh@641 -- # es=1 00:25:34.702 05:21:11 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:34.702 05:21:11 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:34.702 05:21:11 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:34.702 05:21:11 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:34.702 05:21:11 -- common/autotest_common.sh@638 -- # local es=0 00:25:34.702 05:21:11 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:34.702 05:21:11 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:25:34.702 05:21:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:34.702 05:21:11 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:25:34.702 05:21:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:34.702 05:21:11 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:34.702 05:21:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.702 05:21:11 -- common/autotest_common.sh@10 -- # set +x 00:25:34.702 request: 00:25:34.702 { 00:25:34.702 "name": "NVMe0", 00:25:34.702 "trtype": "tcp", 00:25:34.702 "traddr": "10.0.0.2", 00:25:34.702 "hostaddr": "10.0.0.2", 00:25:34.702 "hostsvcid": "60000", 00:25:34.702 "adrfam": "ipv4", 00:25:34.702 "trsvcid": 
"4420", 00:25:34.702 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:34.702 "multipath": "failover", 00:25:34.702 "method": "bdev_nvme_attach_controller", 00:25:34.702 "req_id": 1 00:25:34.702 } 00:25:34.702 Got JSON-RPC error response 00:25:34.702 response: 00:25:34.702 { 00:25:34.702 "code": -114, 00:25:34.702 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:25:34.702 } 00:25:34.702 05:21:11 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:25:34.702 05:21:11 -- common/autotest_common.sh@641 -- # es=1 00:25:34.702 05:21:11 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:34.703 05:21:11 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:34.703 05:21:11 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:34.703 05:21:11 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:34.703 05:21:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.703 05:21:11 -- common/autotest_common.sh@10 -- # set +x 00:25:34.703 00:25:34.703 05:21:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.703 05:21:11 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:34.703 05:21:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.703 05:21:11 -- common/autotest_common.sh@10 -- # set +x 00:25:34.703 05:21:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.703 05:21:11 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:25:34.703 05:21:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.703 05:21:11 -- common/autotest_common.sh@10 -- # set +x 00:25:34.703 00:25:34.703 05:21:11 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.703 05:21:11 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:34.703 05:21:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.703 05:21:11 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:25:34.703 05:21:11 -- common/autotest_common.sh@10 -- # set +x 00:25:34.703 05:21:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.703 05:21:11 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:25:34.703 05:21:11 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:36.095 0 00:25:36.095 05:21:13 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:25:36.095 05:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.095 05:21:13 -- common/autotest_common.sh@10 -- # set +x 00:25:36.095 05:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.095 05:21:13 -- host/multicontroller.sh@100 -- # killprocess 1959743 00:25:36.095 05:21:13 -- common/autotest_common.sh@936 -- # '[' -z 1959743 ']' 00:25:36.095 05:21:13 -- common/autotest_common.sh@940 -- # kill -0 1959743 00:25:36.095 05:21:13 -- common/autotest_common.sh@941 -- # uname 00:25:36.095 05:21:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:36.095 05:21:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1959743 00:25:36.095 05:21:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:36.095 05:21:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:36.095 05:21:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1959743' 00:25:36.095 killing process with pid 1959743 00:25:36.095 05:21:13 -- common/autotest_common.sh@955 -- # kill 1959743 00:25:36.095 05:21:13 -- common/autotest_common.sh@960 -- # wait 1959743 
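The `killprocess 1959743` sequence just traced (uname check, `ps --no-headers`, `kill`, `wait`) follows a check-then-kill-then-reap pattern. A minimal sketch under stated assumptions: the real helper also inspects the process name and special-cases processes started via sudo, both omitted here.

```shell
# Sketch of the killprocess pattern from the log: verify the pid is
# alive, send SIGTERM, then reap it with wait. The real helper also
# checks the process name via ps and handles sudo; omitted here.
killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1   # nothing to kill
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null
    return 0
}
```

Reaping with `wait` matters in these tests: it guarantees the target has released its hugepages and sockets before the next stage (`nvmf_delete_subsystem`, module unload) runs.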
00:25:36.095 05:21:13 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:36.095 05:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.095 05:21:13 -- common/autotest_common.sh@10 -- # set +x 00:25:36.095 05:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.095 05:21:13 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:36.095 05:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.095 05:21:13 -- common/autotest_common.sh@10 -- # set +x 00:25:36.095 05:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.095 05:21:13 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:25:36.095 05:21:13 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:36.095 05:21:13 -- common/autotest_common.sh@1598 -- # read -r file 00:25:36.095 05:21:13 -- common/autotest_common.sh@1597 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:25:36.095 05:21:13 -- common/autotest_common.sh@1597 -- # sort -u 00:25:36.095 05:21:13 -- common/autotest_common.sh@1599 -- # cat 00:25:36.095 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:36.095 [2024-04-24 05:21:11.336845] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:25:36.095 [2024-04-24 05:21:11.336956] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1959743 ] 00:25:36.095 EAL: No free 2048 kB hugepages reported on node 1 00:25:36.095 [2024-04-24 05:21:11.370281] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:25:36.095 [2024-04-24 05:21:11.401606] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.095 [2024-04-24 05:21:11.486701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:36.095 [2024-04-24 05:21:11.907468] bdev.c:4548:bdev_name_add: *ERROR*: Bdev name 1a7cdb8a-7bc6-4012-b966-fe7d12fc497a already exists 00:25:36.095 [2024-04-24 05:21:11.907507] bdev.c:7651:bdev_register: *ERROR*: Unable to add uuid:1a7cdb8a-7bc6-4012-b966-fe7d12fc497a alias for bdev NVMe1n1 00:25:36.095 [2024-04-24 05:21:11.907540] bdev_nvme.c:4272:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:25:36.095 Running I/O for 1 seconds... 00:25:36.095 00:25:36.095 Latency(us) 00:25:36.095 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:36.095 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:25:36.095 NVMe0n1 : 1.01 19200.52 75.00 0.00 0.00 6656.49 2063.17 11942.12 00:25:36.095 =================================================================================================================== 00:25:36.095 Total : 19200.52 75.00 0.00 0.00 6656.49 2063.17 11942.12 00:25:36.095 Received shutdown signal, test time was about 1.000000 seconds 00:25:36.095 00:25:36.096 Latency(us) 00:25:36.096 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:36.096 =================================================================================================================== 00:25:36.096 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:36.096 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:36.096 05:21:13 -- common/autotest_common.sh@1604 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:36.096 05:21:13 -- common/autotest_common.sh@1598 -- # read -r file 00:25:36.096 05:21:13 -- host/multicontroller.sh@108 -- # nvmftestfini 00:25:36.096 05:21:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:36.096 05:21:13 -- 
nvmf/common.sh@117 -- # sync 00:25:36.096 05:21:13 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:36.096 05:21:13 -- nvmf/common.sh@120 -- # set +e 00:25:36.096 05:21:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:36.096 05:21:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:36.096 rmmod nvme_tcp 00:25:36.096 rmmod nvme_fabrics 00:25:36.353 rmmod nvme_keyring 00:25:36.354 05:21:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:36.354 05:21:13 -- nvmf/common.sh@124 -- # set -e 00:25:36.354 05:21:13 -- nvmf/common.sh@125 -- # return 0 00:25:36.354 05:21:13 -- nvmf/common.sh@478 -- # '[' -n 1959615 ']' 00:25:36.354 05:21:13 -- nvmf/common.sh@479 -- # killprocess 1959615 00:25:36.354 05:21:13 -- common/autotest_common.sh@936 -- # '[' -z 1959615 ']' 00:25:36.354 05:21:13 -- common/autotest_common.sh@940 -- # kill -0 1959615 00:25:36.354 05:21:13 -- common/autotest_common.sh@941 -- # uname 00:25:36.354 05:21:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:36.354 05:21:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1959615 00:25:36.354 05:21:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:36.354 05:21:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:36.354 05:21:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1959615' 00:25:36.354 killing process with pid 1959615 00:25:36.354 05:21:13 -- common/autotest_common.sh@955 -- # kill 1959615 00:25:36.354 05:21:13 -- common/autotest_common.sh@960 -- # wait 1959615 00:25:36.611 05:21:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:36.611 05:21:13 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:36.611 05:21:13 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:36.611 05:21:13 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:36.611 05:21:13 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:36.611 05:21:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:25:36.611 05:21:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:36.611 05:21:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:38.516 05:21:15 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:38.516 00:25:38.516 real 0m7.051s 00:25:38.516 user 0m10.665s 00:25:38.516 sys 0m2.190s 00:25:38.516 05:21:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:38.516 05:21:15 -- common/autotest_common.sh@10 -- # set +x 00:25:38.516 ************************************ 00:25:38.516 END TEST nvmf_multicontroller 00:25:38.516 ************************************ 00:25:38.516 05:21:15 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:38.516 05:21:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:38.516 05:21:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:38.516 05:21:15 -- common/autotest_common.sh@10 -- # set +x 00:25:38.775 ************************************ 00:25:38.775 START TEST nvmf_aer 00:25:38.775 ************************************ 00:25:38.775 05:21:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:38.775 * Looking for test storage... 
00:25:38.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:38.775 05:21:15 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:38.775 05:21:15 -- nvmf/common.sh@7 -- # uname -s 00:25:38.775 05:21:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:38.775 05:21:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:38.775 05:21:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:38.775 05:21:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:38.775 05:21:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:38.775 05:21:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:38.775 05:21:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:38.775 05:21:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:38.775 05:21:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:38.775 05:21:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:38.775 05:21:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:38.775 05:21:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:38.775 05:21:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:38.775 05:21:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:38.775 05:21:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:38.775 05:21:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:38.775 05:21:15 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:38.775 05:21:15 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:38.775 05:21:15 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:38.775 05:21:15 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:38.775 05:21:15 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.775 05:21:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.775 05:21:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.775 05:21:15 -- paths/export.sh@5 -- # export PATH 00:25:38.775 05:21:15 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.775 05:21:15 -- nvmf/common.sh@47 -- # : 0 00:25:38.775 05:21:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:38.775 05:21:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:38.775 05:21:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:38.775 05:21:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:38.775 05:21:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:38.775 05:21:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:38.775 05:21:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:38.775 05:21:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:38.775 05:21:15 -- host/aer.sh@11 -- # nvmftestinit 00:25:38.775 05:21:15 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:38.775 05:21:15 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:38.775 05:21:15 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:38.775 05:21:15 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:38.775 05:21:15 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:38.775 05:21:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:38.775 05:21:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:38.775 05:21:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:38.775 05:21:15 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:38.775 05:21:15 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:38.775 05:21:15 -- 
nvmf/common.sh@285 -- # xtrace_disable 00:25:38.775 05:21:15 -- common/autotest_common.sh@10 -- # set +x 00:25:40.691 05:21:17 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:40.691 05:21:17 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:40.692 05:21:17 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:40.692 05:21:17 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:40.692 05:21:17 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:40.692 05:21:17 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:40.692 05:21:17 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:40.692 05:21:17 -- nvmf/common.sh@295 -- # net_devs=() 00:25:40.692 05:21:17 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:40.692 05:21:17 -- nvmf/common.sh@296 -- # e810=() 00:25:40.692 05:21:17 -- nvmf/common.sh@296 -- # local -ga e810 00:25:40.692 05:21:17 -- nvmf/common.sh@297 -- # x722=() 00:25:40.692 05:21:17 -- nvmf/common.sh@297 -- # local -ga x722 00:25:40.692 05:21:17 -- nvmf/common.sh@298 -- # mlx=() 00:25:40.692 05:21:17 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:40.692 05:21:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:40.692 05:21:17 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:40.692 05:21:17 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:40.692 05:21:17 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:40.692 05:21:17 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:40.692 05:21:17 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:40.692 05:21:17 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:40.692 05:21:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:40.692 05:21:17 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:40.692 05:21:17 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:40.692 05:21:17 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:40.692 05:21:17 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:40.692 05:21:17 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:40.692 05:21:17 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:40.692 05:21:17 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:40.692 05:21:17 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:40.692 05:21:17 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:40.692 05:21:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:40.692 05:21:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:40.692 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:40.692 05:21:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:40.692 05:21:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:40.692 05:21:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:40.692 05:21:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:40.692 05:21:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:40.692 05:21:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:40.692 05:21:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:40.692 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:40.692 05:21:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:40.692 05:21:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:40.692 05:21:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:40.692 05:21:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:40.692 05:21:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:40.692 05:21:17 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:40.692 05:21:17 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:40.692 05:21:17 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:40.692 05:21:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:25:40.692 05:21:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:40.692 05:21:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:40.692 05:21:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:40.692 05:21:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:40.692 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:40.692 05:21:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:40.692 05:21:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:40.692 05:21:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:40.692 05:21:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:40.692 05:21:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:40.692 05:21:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:40.692 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:40.692 05:21:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:40.692 05:21:17 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:40.692 05:21:17 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:40.692 05:21:17 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:40.692 05:21:17 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:40.692 05:21:17 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:40.692 05:21:17 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:40.692 05:21:17 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:40.692 05:21:17 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:40.692 05:21:17 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:40.692 05:21:17 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:40.692 05:21:17 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:40.692 05:21:17 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:40.692 05:21:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:25:40.692 05:21:17 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:40.692 05:21:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:40.692 05:21:17 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:40.692 05:21:17 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:40.692 05:21:17 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:40.952 05:21:18 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:40.952 05:21:18 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:40.952 05:21:18 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:40.952 05:21:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:40.952 05:21:18 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:40.952 05:21:18 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:40.952 05:21:18 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:40.952 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:40.952 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:25:40.952 00:25:40.952 --- 10.0.0.2 ping statistics --- 00:25:40.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.952 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:25:40.952 05:21:18 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:40.952 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:40.952 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:25:40.952 00:25:40.952 --- 10.0.0.1 ping statistics --- 00:25:40.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.952 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:25:40.952 05:21:18 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:40.952 05:21:18 -- nvmf/common.sh@411 -- # return 0 00:25:40.952 05:21:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:40.952 05:21:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:40.952 05:21:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:40.952 05:21:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:40.952 05:21:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:40.952 05:21:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:40.952 05:21:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:40.952 05:21:18 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:25:40.952 05:21:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:40.952 05:21:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:40.952 05:21:18 -- common/autotest_common.sh@10 -- # set +x 00:25:40.952 05:21:18 -- nvmf/common.sh@470 -- # nvmfpid=1961971 00:25:40.952 05:21:18 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:40.952 05:21:18 -- nvmf/common.sh@471 -- # waitforlisten 1961971 00:25:40.952 05:21:18 -- common/autotest_common.sh@817 -- # '[' -z 1961971 ']' 00:25:40.952 05:21:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:40.952 05:21:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:40.952 05:21:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:40.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:40.952 05:21:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:40.952 05:21:18 -- common/autotest_common.sh@10 -- # set +x 00:25:40.952 [2024-04-24 05:21:18.162045] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:25:40.952 [2024-04-24 05:21:18.162142] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:40.952 EAL: No free 2048 kB hugepages reported on node 1 00:25:40.952 [2024-04-24 05:21:18.207606] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:41.212 [2024-04-24 05:21:18.236051] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:41.212 [2024-04-24 05:21:18.323431] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:41.212 [2024-04-24 05:21:18.323500] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:41.212 [2024-04-24 05:21:18.323529] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:41.212 [2024-04-24 05:21:18.323540] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:41.212 [2024-04-24 05:21:18.323550] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:41.212 [2024-04-24 05:21:18.323610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:41.212 [2024-04-24 05:21:18.323729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:41.212 [2024-04-24 05:21:18.323756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:41.212 [2024-04-24 05:21:18.323759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:41.212 05:21:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:41.212 05:21:18 -- common/autotest_common.sh@850 -- # return 0 00:25:41.212 05:21:18 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:41.212 05:21:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:41.212 05:21:18 -- common/autotest_common.sh@10 -- # set +x 00:25:41.212 05:21:18 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:41.212 05:21:18 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:41.212 05:21:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.212 05:21:18 -- common/autotest_common.sh@10 -- # set +x 00:25:41.212 [2024-04-24 05:21:18.476470] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:41.470 05:21:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.470 05:21:18 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:25:41.470 05:21:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.470 05:21:18 -- common/autotest_common.sh@10 -- # set +x 00:25:41.470 Malloc0 00:25:41.470 05:21:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.470 05:21:18 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:25:41.471 05:21:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.471 05:21:18 -- common/autotest_common.sh@10 -- # set +x 00:25:41.471 05:21:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 
]] 00:25:41.471 05:21:18 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:41.471 05:21:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.471 05:21:18 -- common/autotest_common.sh@10 -- # set +x 00:25:41.471 05:21:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.471 05:21:18 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:41.471 05:21:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.471 05:21:18 -- common/autotest_common.sh@10 -- # set +x 00:25:41.471 [2024-04-24 05:21:18.530057] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:41.471 05:21:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.471 05:21:18 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:25:41.471 05:21:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.471 05:21:18 -- common/autotest_common.sh@10 -- # set +x 00:25:41.471 [2024-04-24 05:21:18.537772] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:41.471 [ 00:25:41.471 { 00:25:41.471 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:41.471 "subtype": "Discovery", 00:25:41.471 "listen_addresses": [], 00:25:41.471 "allow_any_host": true, 00:25:41.471 "hosts": [] 00:25:41.471 }, 00:25:41.471 { 00:25:41.471 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:41.471 "subtype": "NVMe", 00:25:41.471 "listen_addresses": [ 00:25:41.471 { 00:25:41.471 "transport": "TCP", 00:25:41.471 "trtype": "TCP", 00:25:41.471 "adrfam": "IPv4", 00:25:41.471 "traddr": "10.0.0.2", 00:25:41.471 "trsvcid": "4420" 00:25:41.471 } 00:25:41.471 ], 00:25:41.471 "allow_any_host": true, 00:25:41.471 "hosts": [], 00:25:41.471 "serial_number": "SPDK00000000000001", 00:25:41.471 "model_number": "SPDK bdev Controller", 
00:25:41.471 "max_namespaces": 2, 00:25:41.471 "min_cntlid": 1, 00:25:41.471 "max_cntlid": 65519, 00:25:41.471 "namespaces": [ 00:25:41.471 { 00:25:41.471 "nsid": 1, 00:25:41.471 "bdev_name": "Malloc0", 00:25:41.471 "name": "Malloc0", 00:25:41.471 "nguid": "F563DCA1EAF5409FAB7DDBC1CDBE3375", 00:25:41.471 "uuid": "f563dca1-eaf5-409f-ab7d-dbc1cdbe3375" 00:25:41.471 } 00:25:41.471 ] 00:25:41.471 } 00:25:41.471 ] 00:25:41.471 05:21:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.471 05:21:18 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:25:41.471 05:21:18 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:25:41.471 05:21:18 -- host/aer.sh@33 -- # aerpid=1962004 00:25:41.471 05:21:18 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:25:41.471 05:21:18 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:25:41.471 05:21:18 -- common/autotest_common.sh@1251 -- # local i=0 00:25:41.471 05:21:18 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:41.471 05:21:18 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:25:41.471 05:21:18 -- common/autotest_common.sh@1254 -- # i=1 00:25:41.471 05:21:18 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:25:41.471 EAL: No free 2048 kB hugepages reported on node 1 00:25:41.471 05:21:18 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:41.471 05:21:18 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:25:41.471 05:21:18 -- common/autotest_common.sh@1254 -- # i=2 00:25:41.471 05:21:18 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:25:41.729 05:21:18 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:41.729 05:21:18 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:25:41.729 05:21:18 -- common/autotest_common.sh@1262 -- # return 0 00:25:41.729 05:21:18 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:25:41.729 05:21:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.729 05:21:18 -- common/autotest_common.sh@10 -- # set +x 00:25:41.729 Malloc1 00:25:41.729 05:21:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.729 05:21:18 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:25:41.729 05:21:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.729 05:21:18 -- common/autotest_common.sh@10 -- # set +x 00:25:41.729 05:21:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.729 05:21:18 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:25:41.729 05:21:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.729 05:21:18 -- common/autotest_common.sh@10 -- # set +x 00:25:41.729 Asynchronous Event Request test 00:25:41.729 Attaching to 10.0.0.2 00:25:41.729 Attached to 10.0.0.2 00:25:41.729 Registering asynchronous event callbacks... 00:25:41.729 Starting namespace attribute notice tests for all controllers... 00:25:41.729 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:25:41.729 aer_cb - Changed Namespace 00:25:41.729 Cleaning up... 
00:25:41.729 [ 00:25:41.729 { 00:25:41.729 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:41.729 "subtype": "Discovery", 00:25:41.729 "listen_addresses": [], 00:25:41.729 "allow_any_host": true, 00:25:41.729 "hosts": [] 00:25:41.729 }, 00:25:41.729 { 00:25:41.729 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:41.729 "subtype": "NVMe", 00:25:41.729 "listen_addresses": [ 00:25:41.729 { 00:25:41.729 "transport": "TCP", 00:25:41.729 "trtype": "TCP", 00:25:41.729 "adrfam": "IPv4", 00:25:41.729 "traddr": "10.0.0.2", 00:25:41.729 "trsvcid": "4420" 00:25:41.729 } 00:25:41.729 ], 00:25:41.729 "allow_any_host": true, 00:25:41.729 "hosts": [], 00:25:41.729 "serial_number": "SPDK00000000000001", 00:25:41.729 "model_number": "SPDK bdev Controller", 00:25:41.729 "max_namespaces": 2, 00:25:41.729 "min_cntlid": 1, 00:25:41.729 "max_cntlid": 65519, 00:25:41.729 "namespaces": [ 00:25:41.729 { 00:25:41.729 "nsid": 1, 00:25:41.729 "bdev_name": "Malloc0", 00:25:41.729 "name": "Malloc0", 00:25:41.729 "nguid": "F563DCA1EAF5409FAB7DDBC1CDBE3375", 00:25:41.729 "uuid": "f563dca1-eaf5-409f-ab7d-dbc1cdbe3375" 00:25:41.729 }, 00:25:41.729 { 00:25:41.729 "nsid": 2, 00:25:41.729 "bdev_name": "Malloc1", 00:25:41.729 "name": "Malloc1", 00:25:41.729 "nguid": "FF2C9686A6DF4ED9885DA50269766318", 00:25:41.729 "uuid": "ff2c9686-a6df-4ed9-885d-a50269766318" 00:25:41.729 } 00:25:41.729 ] 00:25:41.729 } 00:25:41.729 ] 00:25:41.729 05:21:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.729 05:21:18 -- host/aer.sh@43 -- # wait 1962004 00:25:41.729 05:21:18 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:25:41.729 05:21:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.729 05:21:18 -- common/autotest_common.sh@10 -- # set +x 00:25:41.729 05:21:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.729 05:21:18 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:25:41.729 05:21:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.729 
05:21:18 -- common/autotest_common.sh@10 -- # set +x 00:25:41.729 05:21:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.729 05:21:18 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:41.729 05:21:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.729 05:21:18 -- common/autotest_common.sh@10 -- # set +x 00:25:41.729 05:21:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.729 05:21:18 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:25:41.729 05:21:18 -- host/aer.sh@51 -- # nvmftestfini 00:25:41.729 05:21:18 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:41.729 05:21:18 -- nvmf/common.sh@117 -- # sync 00:25:41.729 05:21:18 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:41.729 05:21:18 -- nvmf/common.sh@120 -- # set +e 00:25:41.729 05:21:18 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:41.729 05:21:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:41.729 rmmod nvme_tcp 00:25:41.729 rmmod nvme_fabrics 00:25:41.729 rmmod nvme_keyring 00:25:41.729 05:21:18 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:41.729 05:21:18 -- nvmf/common.sh@124 -- # set -e 00:25:41.729 05:21:18 -- nvmf/common.sh@125 -- # return 0 00:25:41.729 05:21:18 -- nvmf/common.sh@478 -- # '[' -n 1961971 ']' 00:25:41.729 05:21:18 -- nvmf/common.sh@479 -- # killprocess 1961971 00:25:41.729 05:21:18 -- common/autotest_common.sh@936 -- # '[' -z 1961971 ']' 00:25:41.729 05:21:18 -- common/autotest_common.sh@940 -- # kill -0 1961971 00:25:41.729 05:21:18 -- common/autotest_common.sh@941 -- # uname 00:25:41.729 05:21:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:41.729 05:21:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1961971 00:25:41.729 05:21:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:41.729 05:21:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:41.729 05:21:18 -- common/autotest_common.sh@954 -- # echo 
'killing process with pid 1961971' 00:25:41.729 killing process with pid 1961971 00:25:41.729 05:21:18 -- common/autotest_common.sh@955 -- # kill 1961971 00:25:41.729 [2024-04-24 05:21:18.974527] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:41.729 05:21:18 -- common/autotest_common.sh@960 -- # wait 1961971 00:25:41.988 05:21:19 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:41.988 05:21:19 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:41.988 05:21:19 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:41.988 05:21:19 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:41.988 05:21:19 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:41.988 05:21:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:41.988 05:21:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:41.988 05:21:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.529 05:21:21 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:44.529 00:25:44.529 real 0m5.377s 00:25:44.529 user 0m4.106s 00:25:44.529 sys 0m1.914s 00:25:44.529 05:21:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:44.529 05:21:21 -- common/autotest_common.sh@10 -- # set +x 00:25:44.529 ************************************ 00:25:44.529 END TEST nvmf_aer 00:25:44.529 ************************************ 00:25:44.529 05:21:21 -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:44.529 05:21:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:44.530 05:21:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:44.530 05:21:21 -- common/autotest_common.sh@10 -- # set +x 00:25:44.530 ************************************ 00:25:44.530 START TEST nvmf_async_init 00:25:44.530 
************************************ 00:25:44.530 05:21:21 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:44.530 * Looking for test storage... 00:25:44.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:44.530 05:21:21 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:44.530 05:21:21 -- nvmf/common.sh@7 -- # uname -s 00:25:44.530 05:21:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:44.530 05:21:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:44.530 05:21:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:44.530 05:21:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:44.530 05:21:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:44.530 05:21:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:44.530 05:21:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:44.530 05:21:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:44.530 05:21:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:44.530 05:21:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:44.530 05:21:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:44.530 05:21:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:44.530 05:21:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:44.530 05:21:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:44.530 05:21:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:44.530 05:21:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:44.530 05:21:21 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:44.530 05:21:21 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 
00:25:44.530 05:21:21 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:44.530 05:21:21 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:44.530 05:21:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.530 05:21:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.530 05:21:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.530 05:21:21 -- paths/export.sh@5 -- # export PATH 00:25:44.530 05:21:21 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.530 05:21:21 -- nvmf/common.sh@47 -- # : 0 00:25:44.530 05:21:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:44.530 05:21:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:44.530 05:21:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:44.530 05:21:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:44.530 05:21:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:44.530 05:21:21 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:44.530 05:21:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:44.530 05:21:21 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:44.530 05:21:21 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:25:44.530 05:21:21 -- host/async_init.sh@14 -- # null_block_size=512 00:25:44.530 05:21:21 -- host/async_init.sh@15 -- # null_bdev=null0 00:25:44.530 05:21:21 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:25:44.530 05:21:21 -- host/async_init.sh@20 -- # uuidgen 00:25:44.530 05:21:21 -- host/async_init.sh@20 -- # tr -d - 00:25:44.530 05:21:21 -- host/async_init.sh@20 -- # nguid=77e11950e9d243a7b4ca47469bbd3d18 00:25:44.530 05:21:21 -- host/async_init.sh@22 -- # nvmftestinit 00:25:44.530 05:21:21 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:44.530 05:21:21 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:44.530 05:21:21 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:44.530 05:21:21 -- nvmf/common.sh@399 -- # 
local -g is_hw=no 00:25:44.530 05:21:21 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:44.530 05:21:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.530 05:21:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:44.530 05:21:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.530 05:21:21 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:44.530 05:21:21 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:44.530 05:21:21 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:44.530 05:21:21 -- common/autotest_common.sh@10 -- # set +x 00:25:46.434 05:21:23 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:46.435 05:21:23 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:46.435 05:21:23 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:46.435 05:21:23 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:46.435 05:21:23 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:46.435 05:21:23 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:46.435 05:21:23 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:46.435 05:21:23 -- nvmf/common.sh@295 -- # net_devs=() 00:25:46.435 05:21:23 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:46.435 05:21:23 -- nvmf/common.sh@296 -- # e810=() 00:25:46.435 05:21:23 -- nvmf/common.sh@296 -- # local -ga e810 00:25:46.435 05:21:23 -- nvmf/common.sh@297 -- # x722=() 00:25:46.435 05:21:23 -- nvmf/common.sh@297 -- # local -ga x722 00:25:46.435 05:21:23 -- nvmf/common.sh@298 -- # mlx=() 00:25:46.435 05:21:23 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:46.435 05:21:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:46.435 05:21:23 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:46.435 05:21:23 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:46.435 05:21:23 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:46.435 05:21:23 -- 
nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:46.435 05:21:23 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:46.435 05:21:23 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:46.435 05:21:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:46.435 05:21:23 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:46.435 05:21:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:46.435 05:21:23 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:46.435 05:21:23 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:46.435 05:21:23 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:46.435 05:21:23 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:46.435 05:21:23 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:46.435 05:21:23 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:46.435 05:21:23 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:46.435 05:21:23 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:46.435 05:21:23 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:46.435 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:46.435 05:21:23 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:46.435 05:21:23 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:46.435 05:21:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:46.435 05:21:23 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:46.435 05:21:23 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:46.435 05:21:23 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:46.435 05:21:23 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:46.435 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:46.435 05:21:23 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:46.435 05:21:23 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:46.435 
05:21:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:46.435 05:21:23 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:46.435 05:21:23 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:46.435 05:21:23 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:46.435 05:21:23 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:46.435 05:21:23 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:46.435 05:21:23 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:46.435 05:21:23 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:46.435 05:21:23 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:46.435 05:21:23 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:46.435 05:21:23 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:46.435 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:46.435 05:21:23 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:46.435 05:21:23 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:46.435 05:21:23 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:46.435 05:21:23 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:46.435 05:21:23 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:46.435 05:21:23 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:46.435 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:46.435 05:21:23 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:46.435 05:21:23 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:46.435 05:21:23 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:46.435 05:21:23 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:46.435 05:21:23 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:46.435 05:21:23 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:46.435 05:21:23 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:46.435 05:21:23 -- nvmf/common.sh@230 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:46.435 05:21:23 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:46.435 05:21:23 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:46.435 05:21:23 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:46.435 05:21:23 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:46.435 05:21:23 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:46.435 05:21:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:46.435 05:21:23 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:46.435 05:21:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:46.435 05:21:23 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:46.435 05:21:23 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:46.435 05:21:23 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:46.435 05:21:23 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:46.435 05:21:23 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:46.435 05:21:23 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:46.435 05:21:23 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:46.435 05:21:23 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:46.435 05:21:23 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:46.435 05:21:23 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:46.435 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:46.435 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:25:46.435 00:25:46.435 --- 10.0.0.2 ping statistics --- 00:25:46.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:46.435 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:25:46.435 05:21:23 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:46.435 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:46.435 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:25:46.435 00:25:46.435 --- 10.0.0.1 ping statistics --- 00:25:46.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:46.435 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:25:46.435 05:21:23 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:46.435 05:21:23 -- nvmf/common.sh@411 -- # return 0 00:25:46.435 05:21:23 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:46.435 05:21:23 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:46.435 05:21:23 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:46.435 05:21:23 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:46.435 05:21:23 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:46.435 05:21:23 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:46.435 05:21:23 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:46.435 05:21:23 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:25:46.435 05:21:23 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:46.435 05:21:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:46.435 05:21:23 -- common/autotest_common.sh@10 -- # set +x 00:25:46.435 05:21:23 -- nvmf/common.sh@470 -- # nvmfpid=1963946 00:25:46.435 05:21:23 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:46.435 05:21:23 -- nvmf/common.sh@471 -- # waitforlisten 1963946 00:25:46.435 05:21:23 -- common/autotest_common.sh@817 
-- # '[' -z 1963946 ']' 00:25:46.435 05:21:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:46.435 05:21:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:46.435 05:21:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:46.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:46.435 05:21:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:46.435 05:21:23 -- common/autotest_common.sh@10 -- # set +x 00:25:46.435 [2024-04-24 05:21:23.600558] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:25:46.435 [2024-04-24 05:21:23.600654] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:46.435 EAL: No free 2048 kB hugepages reported on node 1 00:25:46.435 [2024-04-24 05:21:23.638329] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:46.435 [2024-04-24 05:21:23.665374] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.695 [2024-04-24 05:21:23.753019] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:46.696 [2024-04-24 05:21:23.753076] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:46.696 [2024-04-24 05:21:23.753091] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:46.696 [2024-04-24 05:21:23.753103] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:25:46.696 [2024-04-24 05:21:23.753113] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:46.696 [2024-04-24 05:21:23.753142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:46.696 05:21:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:46.696 05:21:23 -- common/autotest_common.sh@850 -- # return 0 00:25:46.696 05:21:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:46.696 05:21:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:46.696 05:21:23 -- common/autotest_common.sh@10 -- # set +x 00:25:46.696 05:21:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:46.696 05:21:23 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:46.696 05:21:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.696 05:21:23 -- common/autotest_common.sh@10 -- # set +x 00:25:46.696 [2024-04-24 05:21:23.895752] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:46.696 05:21:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:46.696 05:21:23 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:25:46.696 05:21:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.696 05:21:23 -- common/autotest_common.sh@10 -- # set +x 00:25:46.696 null0 00:25:46.696 05:21:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:46.696 05:21:23 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:25:46.696 05:21:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.696 05:21:23 -- common/autotest_common.sh@10 -- # set +x 00:25:46.696 05:21:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:46.696 05:21:23 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:25:46.696 05:21:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.696 05:21:23 -- 
common/autotest_common.sh@10 -- # set +x 00:25:46.696 05:21:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:46.696 05:21:23 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 77e11950e9d243a7b4ca47469bbd3d18 00:25:46.696 05:21:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.696 05:21:23 -- common/autotest_common.sh@10 -- # set +x 00:25:46.696 05:21:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:46.696 05:21:23 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:46.696 05:21:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.696 05:21:23 -- common/autotest_common.sh@10 -- # set +x 00:25:46.696 [2024-04-24 05:21:23.936010] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:46.696 05:21:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:46.696 05:21:23 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:25:46.696 05:21:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.696 05:21:23 -- common/autotest_common.sh@10 -- # set +x 00:25:46.956 nvme0n1 00:25:46.956 05:21:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:46.956 05:21:24 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:46.956 05:21:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.956 05:21:24 -- common/autotest_common.sh@10 -- # set +x 00:25:46.956 [ 00:25:46.956 { 00:25:46.956 "name": "nvme0n1", 00:25:46.956 "aliases": [ 00:25:46.956 "77e11950-e9d2-43a7-b4ca-47469bbd3d18" 00:25:46.956 ], 00:25:46.956 "product_name": "NVMe disk", 00:25:46.956 "block_size": 512, 00:25:46.956 "num_blocks": 2097152, 00:25:46.956 "uuid": "77e11950-e9d2-43a7-b4ca-47469bbd3d18", 00:25:46.956 "assigned_rate_limits": { 00:25:46.956 "rw_ios_per_sec": 0, 
00:25:46.956 "rw_mbytes_per_sec": 0, 00:25:46.956 "r_mbytes_per_sec": 0, 00:25:46.956 "w_mbytes_per_sec": 0 00:25:46.956 }, 00:25:46.956 "claimed": false, 00:25:46.956 "zoned": false, 00:25:46.956 "supported_io_types": { 00:25:46.956 "read": true, 00:25:46.956 "write": true, 00:25:46.956 "unmap": false, 00:25:46.956 "write_zeroes": true, 00:25:46.956 "flush": true, 00:25:46.956 "reset": true, 00:25:46.956 "compare": true, 00:25:46.956 "compare_and_write": true, 00:25:46.956 "abort": true, 00:25:46.956 "nvme_admin": true, 00:25:46.956 "nvme_io": true 00:25:46.956 }, 00:25:46.956 "memory_domains": [ 00:25:46.956 { 00:25:46.956 "dma_device_id": "system", 00:25:46.956 "dma_device_type": 1 00:25:46.956 } 00:25:46.956 ], 00:25:46.956 "driver_specific": { 00:25:46.956 "nvme": [ 00:25:46.956 { 00:25:46.956 "trid": { 00:25:46.956 "trtype": "TCP", 00:25:46.956 "adrfam": "IPv4", 00:25:46.956 "traddr": "10.0.0.2", 00:25:46.956 "trsvcid": "4420", 00:25:46.956 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:46.956 }, 00:25:46.956 "ctrlr_data": { 00:25:46.956 "cntlid": 1, 00:25:46.956 "vendor_id": "0x8086", 00:25:46.956 "model_number": "SPDK bdev Controller", 00:25:46.956 "serial_number": "00000000000000000000", 00:25:46.956 "firmware_revision": "24.05", 00:25:46.956 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:46.956 "oacs": { 00:25:46.956 "security": 0, 00:25:46.956 "format": 0, 00:25:46.956 "firmware": 0, 00:25:46.956 "ns_manage": 0 00:25:46.956 }, 00:25:46.956 "multi_ctrlr": true, 00:25:46.956 "ana_reporting": false 00:25:46.956 }, 00:25:46.956 "vs": { 00:25:46.956 "nvme_version": "1.3" 00:25:46.956 }, 00:25:46.956 "ns_data": { 00:25:46.956 "id": 1, 00:25:46.956 "can_share": true 00:25:46.956 } 00:25:46.956 } 00:25:46.956 ], 00:25:46.956 "mp_policy": "active_passive" 00:25:46.956 } 00:25:46.956 } 00:25:46.956 ] 00:25:46.956 05:21:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:46.956 05:21:24 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 
00:25:46.956 05:21:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.956 05:21:24 -- common/autotest_common.sh@10 -- # set +x 00:25:46.956 [2024-04-24 05:21:24.184664] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:46.956 [2024-04-24 05:21:24.184777] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac2830 (9): Bad file descriptor 00:25:47.216 [2024-04-24 05:21:24.316770] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:47.216 05:21:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.216 05:21:24 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:47.216 05:21:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.216 05:21:24 -- common/autotest_common.sh@10 -- # set +x 00:25:47.216 [ 00:25:47.216 { 00:25:47.216 "name": "nvme0n1", 00:25:47.216 "aliases": [ 00:25:47.216 "77e11950-e9d2-43a7-b4ca-47469bbd3d18" 00:25:47.216 ], 00:25:47.216 "product_name": "NVMe disk", 00:25:47.216 "block_size": 512, 00:25:47.216 "num_blocks": 2097152, 00:25:47.216 "uuid": "77e11950-e9d2-43a7-b4ca-47469bbd3d18", 00:25:47.216 "assigned_rate_limits": { 00:25:47.216 "rw_ios_per_sec": 0, 00:25:47.216 "rw_mbytes_per_sec": 0, 00:25:47.216 "r_mbytes_per_sec": 0, 00:25:47.216 "w_mbytes_per_sec": 0 00:25:47.216 }, 00:25:47.216 "claimed": false, 00:25:47.216 "zoned": false, 00:25:47.216 "supported_io_types": { 00:25:47.216 "read": true, 00:25:47.216 "write": true, 00:25:47.216 "unmap": false, 00:25:47.216 "write_zeroes": true, 00:25:47.216 "flush": true, 00:25:47.216 "reset": true, 00:25:47.216 "compare": true, 00:25:47.216 "compare_and_write": true, 00:25:47.216 "abort": true, 00:25:47.216 "nvme_admin": true, 00:25:47.216 "nvme_io": true 00:25:47.216 }, 00:25:47.216 "memory_domains": [ 00:25:47.216 { 00:25:47.216 "dma_device_id": "system", 00:25:47.216 "dma_device_type": 1 00:25:47.216 } 
00:25:47.216 ], 00:25:47.216 "driver_specific": { 00:25:47.216 "nvme": [ 00:25:47.216 { 00:25:47.216 "trid": { 00:25:47.216 "trtype": "TCP", 00:25:47.216 "adrfam": "IPv4", 00:25:47.216 "traddr": "10.0.0.2", 00:25:47.216 "trsvcid": "4420", 00:25:47.216 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:47.216 }, 00:25:47.216 "ctrlr_data": { 00:25:47.216 "cntlid": 2, 00:25:47.216 "vendor_id": "0x8086", 00:25:47.216 "model_number": "SPDK bdev Controller", 00:25:47.216 "serial_number": "00000000000000000000", 00:25:47.216 "firmware_revision": "24.05", 00:25:47.216 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:47.216 "oacs": { 00:25:47.216 "security": 0, 00:25:47.216 "format": 0, 00:25:47.216 "firmware": 0, 00:25:47.216 "ns_manage": 0 00:25:47.216 }, 00:25:47.216 "multi_ctrlr": true, 00:25:47.216 "ana_reporting": false 00:25:47.216 }, 00:25:47.216 "vs": { 00:25:47.216 "nvme_version": "1.3" 00:25:47.216 }, 00:25:47.216 "ns_data": { 00:25:47.216 "id": 1, 00:25:47.216 "can_share": true 00:25:47.216 } 00:25:47.216 } 00:25:47.216 ], 00:25:47.216 "mp_policy": "active_passive" 00:25:47.216 } 00:25:47.216 } 00:25:47.216 ] 00:25:47.216 05:21:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.216 05:21:24 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.216 05:21:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.216 05:21:24 -- common/autotest_common.sh@10 -- # set +x 00:25:47.216 05:21:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.216 05:21:24 -- host/async_init.sh@53 -- # mktemp 00:25:47.216 05:21:24 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.snUxfXINQU 00:25:47.216 05:21:24 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:47.216 05:21:24 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.snUxfXINQU 00:25:47.216 05:21:24 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:25:47.216 05:21:24 
-- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.216 05:21:24 -- common/autotest_common.sh@10 -- # set +x 00:25:47.216 05:21:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.216 05:21:24 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:25:47.216 05:21:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.216 05:21:24 -- common/autotest_common.sh@10 -- # set +x 00:25:47.216 [2024-04-24 05:21:24.365240] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:47.216 [2024-04-24 05:21:24.365406] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:47.216 05:21:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.216 05:21:24 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.snUxfXINQU 00:25:47.216 05:21:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.216 05:21:24 -- common/autotest_common.sh@10 -- # set +x 00:25:47.216 [2024-04-24 05:21:24.373267] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:47.216 05:21:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.216 05:21:24 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.snUxfXINQU 00:25:47.216 05:21:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.216 05:21:24 -- common/autotest_common.sh@10 -- # set +x 00:25:47.216 [2024-04-24 05:21:24.381275] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:47.216 [2024-04-24 05:21:24.381347] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated 
feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:47.216 nvme0n1 00:25:47.216 05:21:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.216 05:21:24 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:47.216 05:21:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.216 05:21:24 -- common/autotest_common.sh@10 -- # set +x 00:25:47.216 [ 00:25:47.216 { 00:25:47.216 "name": "nvme0n1", 00:25:47.216 "aliases": [ 00:25:47.216 "77e11950-e9d2-43a7-b4ca-47469bbd3d18" 00:25:47.216 ], 00:25:47.216 "product_name": "NVMe disk", 00:25:47.216 "block_size": 512, 00:25:47.216 "num_blocks": 2097152, 00:25:47.216 "uuid": "77e11950-e9d2-43a7-b4ca-47469bbd3d18", 00:25:47.216 "assigned_rate_limits": { 00:25:47.216 "rw_ios_per_sec": 0, 00:25:47.216 "rw_mbytes_per_sec": 0, 00:25:47.216 "r_mbytes_per_sec": 0, 00:25:47.216 "w_mbytes_per_sec": 0 00:25:47.216 }, 00:25:47.216 "claimed": false, 00:25:47.216 "zoned": false, 00:25:47.216 "supported_io_types": { 00:25:47.216 "read": true, 00:25:47.216 "write": true, 00:25:47.216 "unmap": false, 00:25:47.216 "write_zeroes": true, 00:25:47.216 "flush": true, 00:25:47.216 "reset": true, 00:25:47.216 "compare": true, 00:25:47.216 "compare_and_write": true, 00:25:47.216 "abort": true, 00:25:47.216 "nvme_admin": true, 00:25:47.216 "nvme_io": true 00:25:47.216 }, 00:25:47.216 "memory_domains": [ 00:25:47.216 { 00:25:47.216 "dma_device_id": "system", 00:25:47.216 "dma_device_type": 1 00:25:47.216 } 00:25:47.216 ], 00:25:47.216 "driver_specific": { 00:25:47.216 "nvme": [ 00:25:47.216 { 00:25:47.216 "trid": { 00:25:47.216 "trtype": "TCP", 00:25:47.216 "adrfam": "IPv4", 00:25:47.216 "traddr": "10.0.0.2", 00:25:47.216 "trsvcid": "4421", 00:25:47.216 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:47.216 }, 00:25:47.216 "ctrlr_data": { 00:25:47.216 "cntlid": 3, 00:25:47.216 "vendor_id": "0x8086", 00:25:47.216 "model_number": "SPDK bdev Controller", 00:25:47.216 "serial_number": "00000000000000000000", 
00:25:47.216 "firmware_revision": "24.05", 00:25:47.216 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:47.216 "oacs": { 00:25:47.216 "security": 0, 00:25:47.216 "format": 0, 00:25:47.216 "firmware": 0, 00:25:47.216 "ns_manage": 0 00:25:47.216 }, 00:25:47.216 "multi_ctrlr": true, 00:25:47.216 "ana_reporting": false 00:25:47.216 }, 00:25:47.216 "vs": { 00:25:47.216 "nvme_version": "1.3" 00:25:47.216 }, 00:25:47.216 "ns_data": { 00:25:47.216 "id": 1, 00:25:47.216 "can_share": true 00:25:47.216 } 00:25:47.216 } 00:25:47.216 ], 00:25:47.216 "mp_policy": "active_passive" 00:25:47.216 } 00:25:47.216 } 00:25:47.216 ] 00:25:47.216 05:21:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.216 05:21:24 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.217 05:21:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.217 05:21:24 -- common/autotest_common.sh@10 -- # set +x 00:25:47.217 05:21:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.217 05:21:24 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.snUxfXINQU 00:25:47.217 05:21:24 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:25:47.217 05:21:24 -- host/async_init.sh@78 -- # nvmftestfini 00:25:47.217 05:21:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:47.217 05:21:24 -- nvmf/common.sh@117 -- # sync 00:25:47.475 05:21:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:47.475 05:21:24 -- nvmf/common.sh@120 -- # set +e 00:25:47.475 05:21:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:47.475 05:21:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:47.475 rmmod nvme_tcp 00:25:47.475 rmmod nvme_fabrics 00:25:47.475 rmmod nvme_keyring 00:25:47.475 05:21:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:47.475 05:21:24 -- nvmf/common.sh@124 -- # set -e 00:25:47.475 05:21:24 -- nvmf/common.sh@125 -- # return 0 00:25:47.475 05:21:24 -- nvmf/common.sh@478 -- # '[' -n 1963946 ']' 00:25:47.475 05:21:24 -- nvmf/common.sh@479 -- # 
killprocess 1963946 00:25:47.475 05:21:24 -- common/autotest_common.sh@936 -- # '[' -z 1963946 ']' 00:25:47.475 05:21:24 -- common/autotest_common.sh@940 -- # kill -0 1963946 00:25:47.475 05:21:24 -- common/autotest_common.sh@941 -- # uname 00:25:47.475 05:21:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:47.475 05:21:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1963946 00:25:47.475 05:21:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:47.475 05:21:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:47.475 05:21:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1963946' 00:25:47.475 killing process with pid 1963946 00:25:47.475 05:21:24 -- common/autotest_common.sh@955 -- # kill 1963946 00:25:47.475 [2024-04-24 05:21:24.566834] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:47.475 [2024-04-24 05:21:24.566868] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:47.475 05:21:24 -- common/autotest_common.sh@960 -- # wait 1963946 00:25:47.735 05:21:24 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:47.735 05:21:24 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:47.735 05:21:24 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:47.735 05:21:24 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:47.735 05:21:24 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:47.735 05:21:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:47.735 05:21:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:47.735 05:21:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:49.641 05:21:26 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:49.641 00:25:49.641 real 0m5.458s 00:25:49.641 user 0m2.046s 
00:25:49.641 sys 0m1.796s 00:25:49.641 05:21:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:49.641 05:21:26 -- common/autotest_common.sh@10 -- # set +x 00:25:49.641 ************************************ 00:25:49.641 END TEST nvmf_async_init 00:25:49.641 ************************************ 00:25:49.641 05:21:26 -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:49.641 05:21:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:49.641 05:21:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:49.641 05:21:26 -- common/autotest_common.sh@10 -- # set +x 00:25:49.900 ************************************ 00:25:49.900 START TEST dma 00:25:49.900 ************************************ 00:25:49.900 05:21:26 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:49.900 * Looking for test storage... 00:25:49.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:49.900 05:21:26 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:49.900 05:21:26 -- nvmf/common.sh@7 -- # uname -s 00:25:49.900 05:21:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:49.900 05:21:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:49.900 05:21:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:49.900 05:21:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:49.900 05:21:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:49.900 05:21:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:49.900 05:21:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:49.900 05:21:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:49.900 05:21:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:49.900 05:21:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:49.900 
05:21:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:49.900 05:21:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:49.900 05:21:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:49.900 05:21:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:49.900 05:21:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:49.900 05:21:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:49.900 05:21:26 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:49.900 05:21:26 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:49.900 05:21:26 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:49.900 05:21:26 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:49.900 05:21:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.900 05:21:26 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.900 05:21:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.900 05:21:26 -- paths/export.sh@5 -- # export PATH 00:25:49.900 05:21:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.900 05:21:26 -- nvmf/common.sh@47 -- # : 0 00:25:49.900 05:21:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:49.900 05:21:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:49.900 05:21:26 -- nvmf/common.sh@25 -- # 
'[' 0 -eq 1 ']' 00:25:49.900 05:21:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:49.900 05:21:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:49.900 05:21:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:49.900 05:21:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:49.900 05:21:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:49.900 05:21:26 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:25:49.900 05:21:26 -- host/dma.sh@13 -- # exit 0 00:25:49.900 00:25:49.900 real 0m0.069s 00:25:49.900 user 0m0.030s 00:25:49.900 sys 0m0.044s 00:25:49.900 05:21:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:49.900 05:21:26 -- common/autotest_common.sh@10 -- # set +x 00:25:49.900 ************************************ 00:25:49.900 END TEST dma 00:25:49.900 ************************************ 00:25:49.900 05:21:27 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:49.900 05:21:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:49.900 05:21:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:49.900 05:21:27 -- common/autotest_common.sh@10 -- # set +x 00:25:49.900 ************************************ 00:25:49.900 START TEST nvmf_identify 00:25:49.900 ************************************ 00:25:49.900 05:21:27 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:49.900 * Looking for test storage... 
00:25:49.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:49.900 05:21:27 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:49.900 05:21:27 -- nvmf/common.sh@7 -- # uname -s 00:25:49.900 05:21:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:49.900 05:21:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:49.900 05:21:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:49.900 05:21:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:49.900 05:21:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:49.900 05:21:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:49.900 05:21:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:49.900 05:21:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:49.900 05:21:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:49.900 05:21:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:49.900 05:21:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:49.900 05:21:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:49.900 05:21:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:49.900 05:21:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:49.900 05:21:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:49.900 05:21:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:49.900 05:21:27 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:49.900 05:21:27 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:49.900 05:21:27 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:49.900 05:21:27 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:49.900 05:21:27 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.900 05:21:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.900 05:21:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.900 05:21:27 -- paths/export.sh@5 -- # export PATH 00:25:49.900 05:21:27 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.900 05:21:27 -- nvmf/common.sh@47 -- # : 0 00:25:49.900 05:21:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:49.900 05:21:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:49.900 05:21:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:49.901 05:21:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:49.901 05:21:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:49.901 05:21:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:49.901 05:21:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:49.901 05:21:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:49.901 05:21:27 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:49.901 05:21:27 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:49.901 05:21:27 -- host/identify.sh@14 -- # nvmftestinit 00:25:49.901 05:21:27 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:49.901 05:21:27 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:49.901 05:21:27 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:49.901 05:21:27 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:49.901 05:21:27 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:49.901 05:21:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:49.901 05:21:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:49.901 05:21:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:49.901 05:21:27 -- 
nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:49.901 05:21:27 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:49.901 05:21:27 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:49.901 05:21:27 -- common/autotest_common.sh@10 -- # set +x 00:25:52.434 05:21:29 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:52.434 05:21:29 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:52.434 05:21:29 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:52.434 05:21:29 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:52.434 05:21:29 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:52.434 05:21:29 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:52.434 05:21:29 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:52.434 05:21:29 -- nvmf/common.sh@295 -- # net_devs=() 00:25:52.434 05:21:29 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:52.434 05:21:29 -- nvmf/common.sh@296 -- # e810=() 00:25:52.434 05:21:29 -- nvmf/common.sh@296 -- # local -ga e810 00:25:52.434 05:21:29 -- nvmf/common.sh@297 -- # x722=() 00:25:52.434 05:21:29 -- nvmf/common.sh@297 -- # local -ga x722 00:25:52.434 05:21:29 -- nvmf/common.sh@298 -- # mlx=() 00:25:52.434 05:21:29 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:52.434 05:21:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:52.434 05:21:29 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:52.434 05:21:29 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:52.434 05:21:29 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:52.434 05:21:29 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:52.434 05:21:29 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:52.434 05:21:29 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:52.434 05:21:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:52.434 05:21:29 -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:52.434 05:21:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:52.434 05:21:29 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:52.434 05:21:29 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:52.434 05:21:29 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:52.434 05:21:29 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:52.434 05:21:29 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:52.434 05:21:29 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:52.434 05:21:29 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:52.434 05:21:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:52.434 05:21:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:52.434 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:52.434 05:21:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:52.434 05:21:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:52.434 05:21:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:52.434 05:21:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:52.434 05:21:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:52.434 05:21:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:52.434 05:21:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:52.434 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:52.434 05:21:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:52.434 05:21:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:52.434 05:21:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:52.434 05:21:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:52.434 05:21:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:52.434 05:21:29 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:52.434 05:21:29 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:52.434 05:21:29 -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:52.434 05:21:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:52.434 05:21:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:52.434 05:21:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:52.434 05:21:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:52.434 05:21:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:52.434 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:52.434 05:21:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:52.434 05:21:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:52.434 05:21:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:52.434 05:21:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:52.434 05:21:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:52.434 05:21:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:52.434 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:52.434 05:21:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:52.434 05:21:29 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:52.434 05:21:29 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:52.434 05:21:29 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:52.434 05:21:29 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:52.434 05:21:29 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:52.434 05:21:29 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:52.434 05:21:29 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:52.434 05:21:29 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:52.434 05:21:29 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:52.434 05:21:29 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:52.434 05:21:29 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:52.434 05:21:29 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:52.434 05:21:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:52.434 05:21:29 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:52.434 05:21:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:52.434 05:21:29 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:52.434 05:21:29 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:52.434 05:21:29 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:52.434 05:21:29 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:52.434 05:21:29 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:52.434 05:21:29 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:52.434 05:21:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:52.434 05:21:29 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:52.434 05:21:29 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:52.434 05:21:29 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:52.434 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:52.434 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:25:52.434 00:25:52.434 --- 10.0.0.2 ping statistics --- 00:25:52.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:52.434 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:25:52.435 05:21:29 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:52.435 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:52.435 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:25:52.435 00:25:52.435 --- 10.0.0.1 ping statistics --- 00:25:52.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:52.435 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:25:52.435 05:21:29 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:52.435 05:21:29 -- nvmf/common.sh@411 -- # return 0 00:25:52.435 05:21:29 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:52.435 05:21:29 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:52.435 05:21:29 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:52.435 05:21:29 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:52.435 05:21:29 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:52.435 05:21:29 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:52.435 05:21:29 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:52.435 05:21:29 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:52.435 05:21:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:52.435 05:21:29 -- common/autotest_common.sh@10 -- # set +x 00:25:52.435 05:21:29 -- host/identify.sh@19 -- # nvmfpid=1966151 00:25:52.435 05:21:29 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:52.435 05:21:29 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:52.435 05:21:29 -- host/identify.sh@23 -- # waitforlisten 1966151 00:25:52.435 05:21:29 -- common/autotest_common.sh@817 -- # '[' -z 1966151 ']' 00:25:52.435 05:21:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:52.435 05:21:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:52.435 05:21:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:52.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:52.435 05:21:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:52.435 05:21:29 -- common/autotest_common.sh@10 -- # set +x 00:25:52.435 [2024-04-24 05:21:29.450382] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:25:52.435 [2024-04-24 05:21:29.450464] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:52.435 EAL: No free 2048 kB hugepages reported on node 1 00:25:52.435 [2024-04-24 05:21:29.488201] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:52.435 [2024-04-24 05:21:29.518830] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:52.435 [2024-04-24 05:21:29.611195] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:52.435 [2024-04-24 05:21:29.611247] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:52.435 [2024-04-24 05:21:29.611276] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:52.435 [2024-04-24 05:21:29.611288] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:52.435 [2024-04-24 05:21:29.611298] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:52.435 [2024-04-24 05:21:29.611359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:52.435 [2024-04-24 05:21:29.611387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:52.435 [2024-04-24 05:21:29.611499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:52.435 [2024-04-24 05:21:29.611502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:52.696 05:21:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:52.696 05:21:29 -- common/autotest_common.sh@850 -- # return 0 00:25:52.696 05:21:29 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:52.696 05:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:52.696 05:21:29 -- common/autotest_common.sh@10 -- # set +x 00:25:52.696 [2024-04-24 05:21:29.747405] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:52.696 05:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:52.696 05:21:29 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:52.696 05:21:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:52.696 05:21:29 -- common/autotest_common.sh@10 -- # set +x 00:25:52.696 05:21:29 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:52.696 05:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:52.696 05:21:29 -- common/autotest_common.sh@10 -- # set +x 00:25:52.696 Malloc0 00:25:52.696 05:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:52.696 05:21:29 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:52.696 05:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:52.696 05:21:29 -- common/autotest_common.sh@10 -- # set +x 00:25:52.696 05:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:52.696 05:21:29 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 
--nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:52.696 05:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:52.696 05:21:29 -- common/autotest_common.sh@10 -- # set +x 00:25:52.696 05:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:52.696 05:21:29 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:52.696 05:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:52.696 05:21:29 -- common/autotest_common.sh@10 -- # set +x 00:25:52.696 [2024-04-24 05:21:29.822562] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:52.696 05:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:52.696 05:21:29 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:52.696 05:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:52.696 05:21:29 -- common/autotest_common.sh@10 -- # set +x 00:25:52.696 05:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:52.696 05:21:29 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:25:52.696 05:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:52.696 05:21:29 -- common/autotest_common.sh@10 -- # set +x 00:25:52.696 [2024-04-24 05:21:29.838357] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:52.696 [ 00:25:52.696 { 00:25:52.696 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:52.696 "subtype": "Discovery", 00:25:52.696 "listen_addresses": [ 00:25:52.696 { 00:25:52.696 "transport": "TCP", 00:25:52.696 "trtype": "TCP", 00:25:52.696 "adrfam": "IPv4", 00:25:52.696 "traddr": "10.0.0.2", 00:25:52.696 "trsvcid": "4420" 00:25:52.696 } 00:25:52.696 ], 00:25:52.696 "allow_any_host": true, 00:25:52.696 "hosts": [] 00:25:52.696 }, 00:25:52.696 
{ 00:25:52.696 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:52.696 "subtype": "NVMe", 00:25:52.696 "listen_addresses": [ 00:25:52.696 { 00:25:52.696 "transport": "TCP", 00:25:52.696 "trtype": "TCP", 00:25:52.696 "adrfam": "IPv4", 00:25:52.696 "traddr": "10.0.0.2", 00:25:52.696 "trsvcid": "4420" 00:25:52.696 } 00:25:52.696 ], 00:25:52.696 "allow_any_host": true, 00:25:52.696 "hosts": [], 00:25:52.696 "serial_number": "SPDK00000000000001", 00:25:52.696 "model_number": "SPDK bdev Controller", 00:25:52.696 "max_namespaces": 32, 00:25:52.696 "min_cntlid": 1, 00:25:52.696 "max_cntlid": 65519, 00:25:52.696 "namespaces": [ 00:25:52.696 { 00:25:52.696 "nsid": 1, 00:25:52.696 "bdev_name": "Malloc0", 00:25:52.696 "name": "Malloc0", 00:25:52.696 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:25:52.696 "eui64": "ABCDEF0123456789", 00:25:52.696 "uuid": "154cec22-80fb-48a7-8259-12e9309fbe56" 00:25:52.696 } 00:25:52.696 ] 00:25:52.696 } 00:25:52.696 ] 00:25:52.696 05:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:52.696 05:21:29 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:52.696 [2024-04-24 05:21:29.859942] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:25:52.696 [2024-04-24 05:21:29.859980] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1966229 ] 00:25:52.696 EAL: No free 2048 kB hugepages reported on node 1 00:25:52.696 [2024-04-24 05:21:29.876287] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:25:52.696 [2024-04-24 05:21:29.893834] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:25:52.696 [2024-04-24 05:21:29.893888] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:52.696 [2024-04-24 05:21:29.893898] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:52.696 [2024-04-24 05:21:29.893915] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:52.696 [2024-04-24 05:21:29.893928] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:52.696 [2024-04-24 05:21:29.894217] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:25:52.696 [2024-04-24 05:21:29.894273] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x23b1130 0 00:25:52.696 [2024-04-24 05:21:29.908645] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:52.696 [2024-04-24 05:21:29.908669] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:52.696 [2024-04-24 05:21:29.908678] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:52.696 [2024-04-24 05:21:29.908684] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:52.696 [2024-04-24 05:21:29.908739] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.696 [2024-04-24 05:21:29.908752] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.696 [2024-04-24 05:21:29.908760] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23b1130) 00:25:52.696 [2024-04-24 05:21:29.908779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:52.696 [2024-04-24 05:21:29.908807] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2419860, cid 0, qid 0 00:25:52.696 [2024-04-24 05:21:29.916659] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.696 [2024-04-24 05:21:29.916678] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.696 [2024-04-24 05:21:29.916686] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.696 [2024-04-24 05:21:29.916694] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2419860) on tqpair=0x23b1130 00:25:52.696 [2024-04-24 05:21:29.916713] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:52.696 [2024-04-24 05:21:29.916727] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:25:52.696 [2024-04-24 05:21:29.916736] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:25:52.696 [2024-04-24 05:21:29.916762] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.696 [2024-04-24 05:21:29.916771] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.696 [2024-04-24 05:21:29.916777] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23b1130) 00:25:52.696 [2024-04-24 05:21:29.916789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.696 [2024-04-24 05:21:29.916812] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2419860, cid 0, qid 0 00:25:52.696 [2024-04-24 05:21:29.917003] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.696 [2024-04-24 05:21:29.917019] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.696 [2024-04-24 05:21:29.917026] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:25:52.696 [2024-04-24 05:21:29.917033] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2419860) on tqpair=0x23b1130 00:25:52.696 [2024-04-24 05:21:29.917044] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:25:52.696 [2024-04-24 05:21:29.917058] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:25:52.696 [2024-04-24 05:21:29.917075] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.696 [2024-04-24 05:21:29.917083] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.696 [2024-04-24 05:21:29.917089] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23b1130) 00:25:52.696 [2024-04-24 05:21:29.917100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.696 [2024-04-24 05:21:29.917138] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2419860, cid 0, qid 0 00:25:52.696 [2024-04-24 05:21:29.917275] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.696 [2024-04-24 05:21:29.917293] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.696 [2024-04-24 05:21:29.917300] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.696 [2024-04-24 05:21:29.917307] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2419860) on tqpair=0x23b1130 00:25:52.696 [2024-04-24 05:21:29.917318] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:25:52.696 [2024-04-24 05:21:29.917333] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:25:52.696 [2024-04-24 
05:21:29.917348] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.696 [2024-04-24 05:21:29.917356] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.696 [2024-04-24 05:21:29.917362] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23b1130) 00:25:52.696 [2024-04-24 05:21:29.917373] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.697 [2024-04-24 05:21:29.917394] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2419860, cid 0, qid 0 00:25:52.697 [2024-04-24 05:21:29.917512] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.697 [2024-04-24 05:21:29.917529] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.697 [2024-04-24 05:21:29.917537] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.697 [2024-04-24 05:21:29.917544] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2419860) on tqpair=0x23b1130 00:25:52.697 [2024-04-24 05:21:29.917555] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:52.697 [2024-04-24 05:21:29.917573] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.697 [2024-04-24 05:21:29.917584] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.697 [2024-04-24 05:21:29.917591] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23b1130) 00:25:52.697 [2024-04-24 05:21:29.917606] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.697 [2024-04-24 05:21:29.917636] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2419860, cid 0, qid 0 00:25:52.697 [2024-04-24 05:21:29.917766] 
nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.697 [2024-04-24 05:21:29.917782] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.697 [2024-04-24 05:21:29.917789] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.697 [2024-04-24 05:21:29.917796] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2419860) on tqpair=0x23b1130 00:25:52.697 [2024-04-24 05:21:29.917806] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:25:52.697 [2024-04-24 05:21:29.917815] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:25:52.697 [2024-04-24 05:21:29.917829] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:52.697 [2024-04-24 05:21:29.917943] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:25:52.697 [2024-04-24 05:21:29.917966] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:52.697 [2024-04-24 05:21:29.917981] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.697 [2024-04-24 05:21:29.917988] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.697 [2024-04-24 05:21:29.917994] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23b1130) 00:25:52.697 [2024-04-24 05:21:29.918004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.697 [2024-04-24 05:21:29.918041] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2419860, 
cid 0, qid 0 00:25:52.697 [2024-04-24 05:21:29.918230] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.697 [2024-04-24 05:21:29.918247] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.697 [2024-04-24 05:21:29.918255] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.697 [2024-04-24 05:21:29.918261] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2419860) on tqpair=0x23b1130 00:25:52.697 [2024-04-24 05:21:29.918272] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:52.697 [2024-04-24 05:21:29.918290] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.697 [2024-04-24 05:21:29.918301] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.697 [2024-04-24 05:21:29.918308] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23b1130) 00:25:52.697 [2024-04-24 05:21:29.918319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.697 [2024-04-24 05:21:29.918340] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2419860, cid 0, qid 0 00:25:52.697 [2024-04-24 05:21:29.918456] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.697 [2024-04-24 05:21:29.918473] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.697 [2024-04-24 05:21:29.918481] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.697 [2024-04-24 05:21:29.918488] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2419860) on tqpair=0x23b1130 00:25:52.697 [2024-04-24 05:21:29.918497] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 
00:25:52.697 [2024-04-24 05:21:29.918506] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:25:52.697 [2024-04-24 05:21:29.918525] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:25:52.697 [2024-04-24 05:21:29.918542] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:25:52.697 [2024-04-24 05:21:29.918560] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.697 [2024-04-24 05:21:29.918568] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23b1130) 00:25:52.697 [2024-04-24 05:21:29.918579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.697 [2024-04-24 05:21:29.918601] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2419860, cid 0, qid 0 00:25:52.697 [2024-04-24 05:21:29.918770] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:52.697 [2024-04-24 05:21:29.918789] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:52.697 [2024-04-24 05:21:29.918802] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:52.697 [2024-04-24 05:21:29.918813] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23b1130): datao=0, datal=4096, cccid=0 00:25:52.697 [2024-04-24 05:21:29.918825] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2419860) on tqpair(0x23b1130): expected_datao=0, payload_size=4096 00:25:52.697 [2024-04-24 05:21:29.918836] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.697 [2024-04-24 05:21:29.918862] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: 
enter 00:25:52.697 [2024-04-24 05:21:29.918874] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:52.697 [2024-04-24 05:21:29.959807] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.697 [2024-04-24 05:21:29.959828] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.697 [2024-04-24 05:21:29.959836] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.697 [2024-04-24 05:21:29.959843] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2419860) on tqpair=0x23b1130 00:25:52.697 [2024-04-24 05:21:29.959857] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:25:52.697 [2024-04-24 05:21:29.959866] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:25:52.697 [2024-04-24 05:21:29.959874] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:25:52.697 [2024-04-24 05:21:29.959882] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:25:52.697 [2024-04-24 05:21:29.959890] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:25:52.697 [2024-04-24 05:21:29.959898] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:25:52.697 [2024-04-24 05:21:29.959915] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:25:52.697 [2024-04-24 05:21:29.959931] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.697 [2024-04-24 05:21:29.959939] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.697 [2024-04-24 05:21:29.959945] 
nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23b1130) 00:25:52.697 [2024-04-24 05:21:29.959957] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:52.697 [2024-04-24 05:21:29.959981] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2419860, cid 0, qid 0 00:25:52.697 [2024-04-24 05:21:29.960107] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.697 [2024-04-24 05:21:29.960125] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.697 [2024-04-24 05:21:29.960137] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.697 [2024-04-24 05:21:29.960145] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2419860) on tqpair=0x23b1130 00:25:52.697 [2024-04-24 05:21:29.960159] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.697 [2024-04-24 05:21:29.960166] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.697 [2024-04-24 05:21:29.960173] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23b1130) 00:25:52.697 [2024-04-24 05:21:29.960183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:52.697 [2024-04-24 05:21:29.960193] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.697 [2024-04-24 05:21:29.960200] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.697 [2024-04-24 05:21:29.960206] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x23b1130) 00:25:52.697 [2024-04-24 05:21:29.960215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:52.697 [2024-04-24 05:21:29.960224] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.697 [2024-04-24 05:21:29.960231] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.697 [2024-04-24 05:21:29.960237] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x23b1130) 00:25:52.697 [2024-04-24 05:21:29.960246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:52.697 [2024-04-24 05:21:29.960271] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.697 [2024-04-24 05:21:29.960278] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.697 [2024-04-24 05:21:29.960284] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23b1130) 00:25:52.697 [2024-04-24 05:21:29.960293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:52.697 [2024-04-24 05:21:29.960301] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:25:52.697 [2024-04-24 05:21:29.960336] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:52.697 [2024-04-24 05:21:29.960350] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.697 [2024-04-24 05:21:29.960357] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23b1130) 00:25:52.697 [2024-04-24 05:21:29.960367] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.697 [2024-04-24 05:21:29.960389] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2419860, cid 0, qid 0 00:25:52.698 [2024-04-24 05:21:29.960414] 
nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24199c0, cid 1, qid 0 00:25:52.698 [2024-04-24 05:21:29.960422] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2419b20, cid 2, qid 0 00:25:52.698 [2024-04-24 05:21:29.960430] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2419c80, cid 3, qid 0 00:25:52.698 [2024-04-24 05:21:29.960437] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2419de0, cid 4, qid 0 00:25:52.698 [2024-04-24 05:21:29.964641] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.698 [2024-04-24 05:21:29.964659] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.698 [2024-04-24 05:21:29.964666] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.698 [2024-04-24 05:21:29.964673] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2419de0) on tqpair=0x23b1130 00:25:52.698 [2024-04-24 05:21:29.964684] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:25:52.698 [2024-04-24 05:21:29.964697] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:25:52.698 [2024-04-24 05:21:29.964717] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.698 [2024-04-24 05:21:29.964728] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23b1130) 00:25:52.698 [2024-04-24 05:21:29.964739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.698 [2024-04-24 05:21:29.964762] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2419de0, cid 4, qid 0 00:25:52.698 [2024-04-24 05:21:29.964959] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 
00:25:52.698 [2024-04-24 05:21:29.964976] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:52.698 [2024-04-24 05:21:29.964983] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:52.698 [2024-04-24 05:21:29.964992] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23b1130): datao=0, datal=4096, cccid=4 00:25:52.698 [2024-04-24 05:21:29.965004] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2419de0) on tqpair(0x23b1130): expected_datao=0, payload_size=4096 00:25:52.698 [2024-04-24 05:21:29.965015] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.698 [2024-04-24 05:21:29.965031] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:52.698 [2024-04-24 05:21:29.965043] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:52.698 [2024-04-24 05:21:29.965061] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.698 [2024-04-24 05:21:29.965074] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.698 [2024-04-24 05:21:29.965081] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.698 [2024-04-24 05:21:29.965087] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2419de0) on tqpair=0x23b1130 00:25:52.698 [2024-04-24 05:21:29.965109] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:25:52.698 [2024-04-24 05:21:29.965142] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.698 [2024-04-24 05:21:29.965151] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23b1130) 00:25:52.698 [2024-04-24 05:21:29.965162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.698 [2024-04-24 05:21:29.965183] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.698 [2024-04-24 05:21:29.965190] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.698 [2024-04-24 05:21:29.965197] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x23b1130) 00:25:52.961 [2024-04-24 05:21:29.965207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:52.961 [2024-04-24 05:21:29.965251] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2419de0, cid 4, qid 0 00:25:52.961 [2024-04-24 05:21:29.965263] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2419f40, cid 5, qid 0 00:25:52.961 [2024-04-24 05:21:29.965511] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:52.961 [2024-04-24 05:21:29.965528] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:52.961 [2024-04-24 05:21:29.965535] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:52.961 [2024-04-24 05:21:29.965556] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23b1130): datao=0, datal=1024, cccid=4 00:25:52.961 [2024-04-24 05:21:29.965564] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2419de0) on tqpair(0x23b1130): expected_datao=0, payload_size=1024 00:25:52.961 [2024-04-24 05:21:29.965572] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.961 [2024-04-24 05:21:29.965581] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:52.961 [2024-04-24 05:21:29.965589] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:52.961 [2024-04-24 05:21:29.965597] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.961 [2024-04-24 05:21:29.965610] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.961 [2024-04-24 05:21:29.965618] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.961 [2024-04-24 05:21:29.965624] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2419f40) on tqpair=0x23b1130 00:25:52.961 [2024-04-24 05:21:30.005807] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.961 [2024-04-24 05:21:30.005827] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.961 [2024-04-24 05:21:30.005836] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.961 [2024-04-24 05:21:30.005843] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2419de0) on tqpair=0x23b1130 00:25:52.961 [2024-04-24 05:21:30.005872] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.961 [2024-04-24 05:21:30.005882] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23b1130) 00:25:52.961 [2024-04-24 05:21:30.005894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.961 [2024-04-24 05:21:30.005940] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2419de0, cid 4, qid 0 00:25:52.961 [2024-04-24 05:21:30.006103] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:52.961 [2024-04-24 05:21:30.006120] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:52.961 [2024-04-24 05:21:30.006128] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:52.961 [2024-04-24 05:21:30.006136] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23b1130): datao=0, datal=3072, cccid=4 00:25:52.961 [2024-04-24 05:21:30.006149] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2419de0) on tqpair(0x23b1130): expected_datao=0, payload_size=3072 00:25:52.961 [2024-04-24 05:21:30.006160] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.961 [2024-04-24 05:21:30.006176] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:52.961 [2024-04-24 05:21:30.006188] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:52.961 [2024-04-24 05:21:30.006206] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.961 [2024-04-24 05:21:30.006219] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.961 [2024-04-24 05:21:30.006226] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.961 [2024-04-24 05:21:30.006233] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2419de0) on tqpair=0x23b1130 00:25:52.961 [2024-04-24 05:21:30.006249] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.961 [2024-04-24 05:21:30.006258] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23b1130) 00:25:52.961 [2024-04-24 05:21:30.006269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.961 [2024-04-24 05:21:30.006300] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2419de0, cid 4, qid 0 00:25:52.961 [2024-04-24 05:21:30.006451] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:52.961 [2024-04-24 05:21:30.006467] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:52.961 [2024-04-24 05:21:30.006474] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:52.961 [2024-04-24 05:21:30.006481] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23b1130): datao=0, datal=8, cccid=4 00:25:52.961 [2024-04-24 05:21:30.006489] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2419de0) on tqpair(0x23b1130): expected_datao=0, payload_size=8 00:25:52.961 [2024-04-24 05:21:30.006497] 
nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.961 [2024-04-24 05:21:30.006506] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:52.961 [2024-04-24 05:21:30.006514] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:52.961 [2024-04-24 05:21:30.046821] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.961 [2024-04-24 05:21:30.046862] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.961 [2024-04-24 05:21:30.046871] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.961 [2024-04-24 05:21:30.046879] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2419de0) on tqpair=0x23b1130 00:25:52.961 ===================================================== 00:25:52.961 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:52.961 ===================================================== 00:25:52.961 Controller Capabilities/Features 00:25:52.961 ================================ 00:25:52.961 Vendor ID: 0000 00:25:52.961 Subsystem Vendor ID: 0000 00:25:52.961 Serial Number: .................... 00:25:52.961 Model Number: ........................................ 
00:25:52.961 Firmware Version: 24.05 00:25:52.961 Recommended Arb Burst: 0 00:25:52.961 IEEE OUI Identifier: 00 00 00 00:25:52.961 Multi-path I/O 00:25:52.961 May have multiple subsystem ports: No 00:25:52.961 May have multiple controllers: No 00:25:52.961 Associated with SR-IOV VF: No 00:25:52.961 Max Data Transfer Size: 131072 00:25:52.961 Max Number of Namespaces: 0 00:25:52.961 Max Number of I/O Queues: 1024 00:25:52.961 NVMe Specification Version (VS): 1.3 00:25:52.961 NVMe Specification Version (Identify): 1.3 00:25:52.961 Maximum Queue Entries: 128 00:25:52.961 Contiguous Queues Required: Yes 00:25:52.961 Arbitration Mechanisms Supported 00:25:52.961 Weighted Round Robin: Not Supported 00:25:52.961 Vendor Specific: Not Supported 00:25:52.961 Reset Timeout: 15000 ms 00:25:52.961 Doorbell Stride: 4 bytes 00:25:52.961 NVM Subsystem Reset: Not Supported 00:25:52.961 Command Sets Supported 00:25:52.961 NVM Command Set: Supported 00:25:52.961 Boot Partition: Not Supported 00:25:52.961 Memory Page Size Minimum: 4096 bytes 00:25:52.961 Memory Page Size Maximum: 4096 bytes 00:25:52.961 Persistent Memory Region: Not Supported 00:25:52.961 Optional Asynchronous Events Supported 00:25:52.961 Namespace Attribute Notices: Not Supported 00:25:52.961 Firmware Activation Notices: Not Supported 00:25:52.961 ANA Change Notices: Not Supported 00:25:52.961 PLE Aggregate Log Change Notices: Not Supported 00:25:52.961 LBA Status Info Alert Notices: Not Supported 00:25:52.961 EGE Aggregate Log Change Notices: Not Supported 00:25:52.961 Normal NVM Subsystem Shutdown event: Not Supported 00:25:52.961 Zone Descriptor Change Notices: Not Supported 00:25:52.961 Discovery Log Change Notices: Supported 00:25:52.961 Controller Attributes 00:25:52.961 128-bit Host Identifier: Not Supported 00:25:52.961 Non-Operational Permissive Mode: Not Supported 00:25:52.961 NVM Sets: Not Supported 00:25:52.961 Read Recovery Levels: Not Supported 00:25:52.961 Endurance Groups: Not Supported 00:25:52.961 
Predictable Latency Mode: Not Supported 00:25:52.961 Traffic Based Keep ALive: Not Supported 00:25:52.961 Namespace Granularity: Not Supported 00:25:52.961 SQ Associations: Not Supported 00:25:52.962 UUID List: Not Supported 00:25:52.962 Multi-Domain Subsystem: Not Supported 00:25:52.962 Fixed Capacity Management: Not Supported 00:25:52.962 Variable Capacity Management: Not Supported 00:25:52.962 Delete Endurance Group: Not Supported 00:25:52.962 Delete NVM Set: Not Supported 00:25:52.962 Extended LBA Formats Supported: Not Supported 00:25:52.962 Flexible Data Placement Supported: Not Supported 00:25:52.962 00:25:52.962 Controller Memory Buffer Support 00:25:52.962 ================================ 00:25:52.962 Supported: No 00:25:52.962 00:25:52.962 Persistent Memory Region Support 00:25:52.962 ================================ 00:25:52.962 Supported: No 00:25:52.962 00:25:52.962 Admin Command Set Attributes 00:25:52.962 ============================ 00:25:52.962 Security Send/Receive: Not Supported 00:25:52.962 Format NVM: Not Supported 00:25:52.962 Firmware Activate/Download: Not Supported 00:25:52.962 Namespace Management: Not Supported 00:25:52.962 Device Self-Test: Not Supported 00:25:52.962 Directives: Not Supported 00:25:52.962 NVMe-MI: Not Supported 00:25:52.962 Virtualization Management: Not Supported 00:25:52.962 Doorbell Buffer Config: Not Supported 00:25:52.962 Get LBA Status Capability: Not Supported 00:25:52.962 Command & Feature Lockdown Capability: Not Supported 00:25:52.962 Abort Command Limit: 1 00:25:52.962 Async Event Request Limit: 4 00:25:52.962 Number of Firmware Slots: N/A 00:25:52.962 Firmware Slot 1 Read-Only: N/A 00:25:52.962 Firmware Activation Without Reset: N/A 00:25:52.962 Multiple Update Detection Support: N/A 00:25:52.962 Firmware Update Granularity: No Information Provided 00:25:52.962 Per-Namespace SMART Log: No 00:25:52.962 Asymmetric Namespace Access Log Page: Not Supported 00:25:52.962 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:25:52.962 Command Effects Log Page: Not Supported 00:25:52.962 Get Log Page Extended Data: Supported 00:25:52.962 Telemetry Log Pages: Not Supported 00:25:52.962 Persistent Event Log Pages: Not Supported 00:25:52.962 Supported Log Pages Log Page: May Support 00:25:52.962 Commands Supported & Effects Log Page: Not Supported 00:25:52.962 Feature Identifiers & Effects Log Page:May Support 00:25:52.962 NVMe-MI Commands & Effects Log Page: May Support 00:25:52.962 Data Area 4 for Telemetry Log: Not Supported 00:25:52.962 Error Log Page Entries Supported: 128 00:25:52.962 Keep Alive: Not Supported 00:25:52.962 00:25:52.962 NVM Command Set Attributes 00:25:52.962 ========================== 00:25:52.962 Submission Queue Entry Size 00:25:52.962 Max: 1 00:25:52.962 Min: 1 00:25:52.962 Completion Queue Entry Size 00:25:52.962 Max: 1 00:25:52.962 Min: 1 00:25:52.962 Number of Namespaces: 0 00:25:52.962 Compare Command: Not Supported 00:25:52.962 Write Uncorrectable Command: Not Supported 00:25:52.962 Dataset Management Command: Not Supported 00:25:52.962 Write Zeroes Command: Not Supported 00:25:52.962 Set Features Save Field: Not Supported 00:25:52.962 Reservations: Not Supported 00:25:52.962 Timestamp: Not Supported 00:25:52.962 Copy: Not Supported 00:25:52.962 Volatile Write Cache: Not Present 00:25:52.962 Atomic Write Unit (Normal): 1 00:25:52.962 Atomic Write Unit (PFail): 1 00:25:52.962 Atomic Compare & Write Unit: 1 00:25:52.962 Fused Compare & Write: Supported 00:25:52.962 Scatter-Gather List 00:25:52.962 SGL Command Set: Supported 00:25:52.962 SGL Keyed: Supported 00:25:52.962 SGL Bit Bucket Descriptor: Not Supported 00:25:52.962 SGL Metadata Pointer: Not Supported 00:25:52.962 Oversized SGL: Not Supported 00:25:52.962 SGL Metadata Address: Not Supported 00:25:52.962 SGL Offset: Supported 00:25:52.962 Transport SGL Data Block: Not Supported 00:25:52.962 Replay Protected Memory Block: Not Supported 00:25:52.962 00:25:52.962 
Firmware Slot Information 00:25:52.962 ========================= 00:25:52.962 Active slot: 0 00:25:52.962 00:25:52.962 00:25:52.962 Error Log 00:25:52.962 ========= 00:25:52.962 00:25:52.962 Active Namespaces 00:25:52.962 ================= 00:25:52.962 Discovery Log Page 00:25:52.962 ================== 00:25:52.962 Generation Counter: 2 00:25:52.962 Number of Records: 2 00:25:52.962 Record Format: 0 00:25:52.962 00:25:52.962 Discovery Log Entry 0 00:25:52.962 ---------------------- 00:25:52.962 Transport Type: 3 (TCP) 00:25:52.962 Address Family: 1 (IPv4) 00:25:52.962 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:52.962 Entry Flags: 00:25:52.962 Duplicate Returned Information: 1 00:25:52.962 Explicit Persistent Connection Support for Discovery: 1 00:25:52.962 Transport Requirements: 00:25:52.962 Secure Channel: Not Required 00:25:52.962 Port ID: 0 (0x0000) 00:25:52.962 Controller ID: 65535 (0xffff) 00:25:52.962 Admin Max SQ Size: 128 00:25:52.962 Transport Service Identifier: 4420 00:25:52.962 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:52.962 Transport Address: 10.0.0.2 00:25:52.962 Discovery Log Entry 1 00:25:52.962 ---------------------- 00:25:52.962 Transport Type: 3 (TCP) 00:25:52.962 Address Family: 1 (IPv4) 00:25:52.962 Subsystem Type: 2 (NVM Subsystem) 00:25:52.962 Entry Flags: 00:25:52.962 Duplicate Returned Information: 0 00:25:52.962 Explicit Persistent Connection Support for Discovery: 0 00:25:52.962 Transport Requirements: 00:25:52.962 Secure Channel: Not Required 00:25:52.962 Port ID: 0 (0x0000) 00:25:52.962 Controller ID: 65535 (0xffff) 00:25:52.962 Admin Max SQ Size: 128 00:25:52.962 Transport Service Identifier: 4420 00:25:52.962 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:25:52.962 Transport Address: 10.0.0.2 [2024-04-24 05:21:30.047003] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:25:52.962 [2024-04-24 05:21:30.047034] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.962 [2024-04-24 05:21:30.047049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.962 [2024-04-24 05:21:30.047059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.962 [2024-04-24 05:21:30.047069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.962 [2024-04-24 05:21:30.047087] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.962 [2024-04-24 05:21:30.047111] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.962 [2024-04-24 05:21:30.047118] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23b1130) 00:25:52.962 [2024-04-24 05:21:30.047133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.962 [2024-04-24 05:21:30.047158] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2419c80, cid 3, qid 0 00:25:52.962 [2024-04-24 05:21:30.047322] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.962 [2024-04-24 05:21:30.047339] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.962 [2024-04-24 05:21:30.047346] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.962 [2024-04-24 05:21:30.047353] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2419c80) on tqpair=0x23b1130 00:25:52.962 [2024-04-24 05:21:30.047369] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.962 [2024-04-24 05:21:30.047376] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.962 [2024-04-24 
05:21:30.047383] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23b1130) 00:25:52.962 [2024-04-24 05:21:30.047393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.962 [2024-04-24 05:21:30.047422] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2419c80, cid 3, qid 0 00:25:52.962 [2024-04-24 05:21:30.047577] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.962 [2024-04-24 05:21:30.047594] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.962 [2024-04-24 05:21:30.047601] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.962 [2024-04-24 05:21:30.047608] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2419c80) on tqpair=0x23b1130 00:25:52.962 [2024-04-24 05:21:30.047619] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:25:52.962 [2024-04-24 05:21:30.047637] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:25:52.962 [2024-04-24 05:21:30.047659] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.962 [2024-04-24 05:21:30.047670] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.962 [2024-04-24 05:21:30.047677] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23b1130) 00:25:52.962 [2024-04-24 05:21:30.047687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.962 [2024-04-24 05:21:30.047709] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2419c80, cid 3, qid 0 00:25:52.962 [2024-04-24 05:21:30.047846] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.962 [2024-04-24 
05:21:30.047867] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.962 [2024-04-24 05:21:30.047875] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.962 [2024-04-24 05:21:30.047882] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2419c80) on tqpair=0x23b1130 00:25:52.962 [2024-04-24 05:21:30.047903] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.963 [2024-04-24 05:21:30.047914] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.963 [2024-04-24 05:21:30.047921] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23b1130) 00:25:52.963 [2024-04-24 05:21:30.047931] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.963 [2024-04-24 05:21:30.047953] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2419c80, cid 3, qid 0 00:25:52.963 [2024-04-24 05:21:30.048102] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.963 [2024-04-24 05:21:30.048117] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.963 [2024-04-24 05:21:30.048125] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.963 [2024-04-24 05:21:30.048131] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2419c80) on tqpair=0x23b1130 00:25:52.963 [2024-04-24 05:21:30.048152] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.963 [2024-04-24 05:21:30.048163] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.963 [2024-04-24 05:21:30.048170] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23b1130) 00:25:52.963 [2024-04-24 05:21:30.048180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.963 
[2024-04-24 05:21:30.048202] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2419c80, cid 3, qid 0 00:25:52.963 [2024-04-24 05:21:30.048340] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.963 [2024-04-24 05:21:30.048356] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.963 [2024-04-24 05:21:30.048364] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.963 [2024-04-24 05:21:30.048370] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2419c80) on tqpair=0x23b1130 00:25:52.963 [2024-04-24 05:21:30.048390] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.963 [2024-04-24 05:21:30.048401] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.963 [2024-04-24 05:21:30.048408] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23b1130) 00:25:52.963 [2024-04-24 05:21:30.048419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.963 [2024-04-24 05:21:30.048440] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2419c80, cid 3, qid 0 00:25:52.963 [2024-04-24 05:21:30.048574] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.963 [2024-04-24 05:21:30.048590] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.963 [2024-04-24 05:21:30.048597] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.963 [2024-04-24 05:21:30.048604] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2419c80) on tqpair=0x23b1130 00:25:52.963 [2024-04-24 05:21:30.048624] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.963 [2024-04-24 05:21:30.052662] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.963 [2024-04-24 05:21:30.052670] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23b1130) 00:25:52.963 [2024-04-24 05:21:30.052682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.963 [2024-04-24 05:21:30.052706] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2419c80, cid 3, qid 0 00:25:52.963 [2024-04-24 05:21:30.052860] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.963 [2024-04-24 05:21:30.052877] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.963 [2024-04-24 05:21:30.052889] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.963 [2024-04-24 05:21:30.052896] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2419c80) on tqpair=0x23b1130 00:25:52.963 [2024-04-24 05:21:30.052913] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:25:52.963 00:25:52.963 05:21:30 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:25:52.963 [2024-04-24 05:21:30.084211] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:25:52.963 [2024-04-24 05:21:30.084264] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1966231 ] 00:25:52.963 EAL: No free 2048 kB hugepages reported on node 1 00:25:52.963 [2024-04-24 05:21:30.101271] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:25:52.963 [2024-04-24 05:21:30.119087] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:25:52.963 [2024-04-24 05:21:30.119135] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:52.963 [2024-04-24 05:21:30.119144] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:52.963 [2024-04-24 05:21:30.119161] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:52.963 [2024-04-24 05:21:30.119173] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:52.963 [2024-04-24 05:21:30.119475] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:25:52.963 [2024-04-24 05:21:30.119516] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xda2130 0 00:25:52.963 [2024-04-24 05:21:30.125643] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:52.963 [2024-04-24 05:21:30.125666] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:52.963 [2024-04-24 05:21:30.125677] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:52.963 [2024-04-24 05:21:30.125683] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:52.963 [2024-04-24 05:21:30.125736] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.963 [2024-04-24 05:21:30.125748] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.963 [2024-04-24 05:21:30.125755] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xda2130) 00:25:52.963 [2024-04-24 05:21:30.125771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:52.963 [2024-04-24 05:21:30.125797] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe0a860, cid 0, qid 0 00:25:52.963 [2024-04-24 05:21:30.133652] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.963 [2024-04-24 05:21:30.133672] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.963 [2024-04-24 05:21:30.133680] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.963 [2024-04-24 05:21:30.133687] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe0a860) on tqpair=0xda2130 00:25:52.963 [2024-04-24 05:21:30.133702] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:52.963 [2024-04-24 05:21:30.133713] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:25:52.963 [2024-04-24 05:21:30.133723] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:25:52.963 [2024-04-24 05:21:30.133744] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.963 [2024-04-24 05:21:30.133753] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.963 [2024-04-24 05:21:30.133760] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xda2130) 00:25:52.963 [2024-04-24 05:21:30.133771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.963 [2024-04-24 05:21:30.133796] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe0a860, cid 0, qid 0 00:25:52.963 [2024-04-24 05:21:30.133967] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.963 [2024-04-24 05:21:30.133983] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.963 [2024-04-24 05:21:30.133990] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.963 [2024-04-24 
05:21:30.133997] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe0a860) on tqpair=0xda2130 00:25:52.963 [2024-04-24 05:21:30.134005] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:25:52.963 [2024-04-24 05:21:30.134019] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:25:52.963 [2024-04-24 05:21:30.134032] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.963 [2024-04-24 05:21:30.134039] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.963 [2024-04-24 05:21:30.134045] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xda2130) 00:25:52.963 [2024-04-24 05:21:30.134056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.963 [2024-04-24 05:21:30.134079] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe0a860, cid 0, qid 0 00:25:52.963 [2024-04-24 05:21:30.134247] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.963 [2024-04-24 05:21:30.134263] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.963 [2024-04-24 05:21:30.134270] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.963 [2024-04-24 05:21:30.134276] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe0a860) on tqpair=0xda2130 00:25:52.963 [2024-04-24 05:21:30.134285] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:25:52.963 [2024-04-24 05:21:30.134299] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:25:52.963 [2024-04-24 05:21:30.134312] nvme_tcp.c: 766:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:25:52.963 [2024-04-24 05:21:30.134319] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.963 [2024-04-24 05:21:30.134325] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xda2130) 00:25:52.963 [2024-04-24 05:21:30.134336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.963 [2024-04-24 05:21:30.134357] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe0a860, cid 0, qid 0 00:25:52.963 [2024-04-24 05:21:30.134524] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.963 [2024-04-24 05:21:30.134540] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.963 [2024-04-24 05:21:30.134547] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.963 [2024-04-24 05:21:30.134554] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe0a860) on tqpair=0xda2130 00:25:52.963 [2024-04-24 05:21:30.134562] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:52.963 [2024-04-24 05:21:30.134579] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.963 [2024-04-24 05:21:30.134588] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.963 [2024-04-24 05:21:30.134595] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xda2130) 00:25:52.963 [2024-04-24 05:21:30.134609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.963 [2024-04-24 05:21:30.134639] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe0a860, cid 0, qid 0 00:25:52.964 [2024-04-24 05:21:30.134763] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.964 [2024-04-24 
05:21:30.134778] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.964 [2024-04-24 05:21:30.134786] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.964 [2024-04-24 05:21:30.134792] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe0a860) on tqpair=0xda2130 00:25:52.964 [2024-04-24 05:21:30.134801] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:25:52.964 [2024-04-24 05:21:30.134810] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:25:52.964 [2024-04-24 05:21:30.134823] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:52.964 [2024-04-24 05:21:30.134933] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:25:52.964 [2024-04-24 05:21:30.134940] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:52.964 [2024-04-24 05:21:30.134954] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.964 [2024-04-24 05:21:30.134961] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.964 [2024-04-24 05:21:30.134968] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xda2130) 00:25:52.964 [2024-04-24 05:21:30.134992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.964 [2024-04-24 05:21:30.135015] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe0a860, cid 0, qid 0 00:25:52.964 [2024-04-24 05:21:30.135232] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.964 [2024-04-24 
05:21:30.135248] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.964 [2024-04-24 05:21:30.135256] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.964 [2024-04-24 05:21:30.135262] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe0a860) on tqpair=0xda2130 00:25:52.964 [2024-04-24 05:21:30.135271] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:52.964 [2024-04-24 05:21:30.135288] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.964 [2024-04-24 05:21:30.135297] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.964 [2024-04-24 05:21:30.135304] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xda2130) 00:25:52.964 [2024-04-24 05:21:30.135314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.964 [2024-04-24 05:21:30.135336] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe0a860, cid 0, qid 0 00:25:52.964 [2024-04-24 05:21:30.135504] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.964 [2024-04-24 05:21:30.135519] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.964 [2024-04-24 05:21:30.135526] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.964 [2024-04-24 05:21:30.135533] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe0a860) on tqpair=0xda2130 00:25:52.964 [2024-04-24 05:21:30.135541] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:52.964 [2024-04-24 05:21:30.135549] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 
00:25:52.964 [2024-04-24 05:21:30.135562] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:25:52.964 [2024-04-24 05:21:30.135579] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:25:52.964 [2024-04-24 05:21:30.135596] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.964 [2024-04-24 05:21:30.135604] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xda2130) 00:25:52.964 [2024-04-24 05:21:30.135620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.964 [2024-04-24 05:21:30.135650] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe0a860, cid 0, qid 0 00:25:52.964 [2024-04-24 05:21:30.135850] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:52.964 [2024-04-24 05:21:30.135865] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:52.964 [2024-04-24 05:21:30.135873] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:52.964 [2024-04-24 05:21:30.135879] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xda2130): datao=0, datal=4096, cccid=0 00:25:52.964 [2024-04-24 05:21:30.135887] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe0a860) on tqpair(0xda2130): expected_datao=0, payload_size=4096 00:25:52.964 [2024-04-24 05:21:30.135895] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.964 [2024-04-24 05:21:30.135923] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:52.964 [2024-04-24 05:21:30.135934] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:52.964 [2024-04-24 05:21:30.177642] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu 
type = 5 00:25:52.964 [2024-04-24 05:21:30.177662] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.964 [2024-04-24 05:21:30.177670] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.964 [2024-04-24 05:21:30.177677] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe0a860) on tqpair=0xda2130 00:25:52.964 [2024-04-24 05:21:30.177690] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:25:52.964 [2024-04-24 05:21:30.177699] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:25:52.964 [2024-04-24 05:21:30.177707] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:25:52.964 [2024-04-24 05:21:30.177713] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:25:52.964 [2024-04-24 05:21:30.177721] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:25:52.964 [2024-04-24 05:21:30.177729] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:25:52.964 [2024-04-24 05:21:30.177745] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:25:52.964 [2024-04-24 05:21:30.177758] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.964 [2024-04-24 05:21:30.177765] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.964 [2024-04-24 05:21:30.177771] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xda2130) 00:25:52.964 [2024-04-24 05:21:30.177783] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 
len:0x0 00:25:52.964 [2024-04-24 05:21:30.177807] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe0a860, cid 0, qid 0 00:25:52.964 [2024-04-24 05:21:30.177964] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.964 [2024-04-24 05:21:30.177979] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.964 [2024-04-24 05:21:30.177986] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.964 [2024-04-24 05:21:30.177993] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe0a860) on tqpair=0xda2130 00:25:52.964 [2024-04-24 05:21:30.178010] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.964 [2024-04-24 05:21:30.178018] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.964 [2024-04-24 05:21:30.178025] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xda2130) 00:25:52.964 [2024-04-24 05:21:30.178035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:52.964 [2024-04-24 05:21:30.178045] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.964 [2024-04-24 05:21:30.178052] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.964 [2024-04-24 05:21:30.178058] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xda2130) 00:25:52.964 [2024-04-24 05:21:30.178066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:52.964 [2024-04-24 05:21:30.178076] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.964 [2024-04-24 05:21:30.178083] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.964 [2024-04-24 05:21:30.178089] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xda2130) 
00:25:52.964 [2024-04-24 05:21:30.178097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:52.964 [2024-04-24 05:21:30.178107] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.964 [2024-04-24 05:21:30.178114] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.964 [2024-04-24 05:21:30.178120] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda2130) 00:25:52.964 [2024-04-24 05:21:30.178128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:52.964 [2024-04-24 05:21:30.178137] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:52.964 [2024-04-24 05:21:30.178157] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:52.964 [2024-04-24 05:21:30.178170] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.964 [2024-04-24 05:21:30.178177] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xda2130) 00:25:52.964 [2024-04-24 05:21:30.178187] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.964 [2024-04-24 05:21:30.178226] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe0a860, cid 0, qid 0 00:25:52.964 [2024-04-24 05:21:30.178238] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe0a9c0, cid 1, qid 0 00:25:52.964 [2024-04-24 05:21:30.178246] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe0ab20, cid 2, qid 0 00:25:52.964 [2024-04-24 05:21:30.178253] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0xe0ac80, cid 3, qid 0 00:25:52.964 [2024-04-24 05:21:30.178260] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe0ade0, cid 4, qid 0 00:25:52.964 [2024-04-24 05:21:30.178488] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.964 [2024-04-24 05:21:30.178501] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.964 [2024-04-24 05:21:30.178509] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.964 [2024-04-24 05:21:30.178515] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe0ade0) on tqpair=0xda2130 00:25:52.964 [2024-04-24 05:21:30.178524] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:25:52.964 [2024-04-24 05:21:30.178533] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:25:52.964 [2024-04-24 05:21:30.178551] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:25:52.964 [2024-04-24 05:21:30.178566] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:52.964 [2024-04-24 05:21:30.178578] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.965 [2024-04-24 05:21:30.178585] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.965 [2024-04-24 05:21:30.178591] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xda2130) 00:25:52.965 [2024-04-24 05:21:30.178601] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:52.965 [2024-04-24 05:21:30.178623] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0xe0ade0, cid 4, qid 0 00:25:52.965 [2024-04-24 05:21:30.178779] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.965 [2024-04-24 05:21:30.178794] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.965 [2024-04-24 05:21:30.178802] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.965 [2024-04-24 05:21:30.178808] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe0ade0) on tqpair=0xda2130 00:25:52.965 [2024-04-24 05:21:30.178862] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:25:52.965 [2024-04-24 05:21:30.178880] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:25:52.965 [2024-04-24 05:21:30.178894] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.965 [2024-04-24 05:21:30.178901] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xda2130) 00:25:52.965 [2024-04-24 05:21:30.178912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.965 [2024-04-24 05:21:30.178935] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe0ade0, cid 4, qid 0 00:25:52.965 [2024-04-24 05:21:30.179114] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:52.965 [2024-04-24 05:21:30.179130] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:52.965 [2024-04-24 05:21:30.179137] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:52.965 [2024-04-24 05:21:30.179143] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xda2130): datao=0, datal=4096, cccid=4 00:25:52.965 [2024-04-24 05:21:30.179151] 
nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe0ade0) on tqpair(0xda2130): expected_datao=0, payload_size=4096 00:25:52.965 [2024-04-24 05:21:30.179159] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.965 [2024-04-24 05:21:30.179169] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:52.965 [2024-04-24 05:21:30.179177] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:52.965 [2024-04-24 05:21:30.179195] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.965 [2024-04-24 05:21:30.179207] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.965 [2024-04-24 05:21:30.179213] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.965 [2024-04-24 05:21:30.179220] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe0ade0) on tqpair=0xda2130 00:25:52.965 [2024-04-24 05:21:30.179235] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:25:52.965 [2024-04-24 05:21:30.179257] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:25:52.965 [2024-04-24 05:21:30.179274] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:25:52.965 [2024-04-24 05:21:30.179287] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.965 [2024-04-24 05:21:30.179294] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xda2130) 00:25:52.965 [2024-04-24 05:21:30.179305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.965 [2024-04-24 05:21:30.179333] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe0ade0, cid 4, qid 0 00:25:52.965 
[2024-04-24 05:21:30.179479] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:52.965 [2024-04-24 05:21:30.179495] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:52.965 [2024-04-24 05:21:30.179502] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:52.965 [2024-04-24 05:21:30.179508] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xda2130): datao=0, datal=4096, cccid=4 00:25:52.965 [2024-04-24 05:21:30.179516] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe0ade0) on tqpair(0xda2130): expected_datao=0, payload_size=4096 00:25:52.965 [2024-04-24 05:21:30.179523] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.965 [2024-04-24 05:21:30.179533] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:52.965 [2024-04-24 05:21:30.179541] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:52.965 [2024-04-24 05:21:30.179554] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.965 [2024-04-24 05:21:30.179563] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.965 [2024-04-24 05:21:30.179570] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.965 [2024-04-24 05:21:30.179577] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe0ade0) on tqpair=0xda2130 00:25:52.965 [2024-04-24 05:21:30.179599] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:25:52.965 [2024-04-24 05:21:30.179617] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:52.965 [2024-04-24 05:21:30.179651] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.965 [2024-04-24 05:21:30.179665] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xda2130) 00:25:52.965 [2024-04-24 05:21:30.179676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.965 [2024-04-24 05:21:30.179699] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe0ade0, cid 4, qid 0 00:25:52.965 [2024-04-24 05:21:30.179835] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:52.965 [2024-04-24 05:21:30.179851] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:52.965 [2024-04-24 05:21:30.179858] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:52.965 [2024-04-24 05:21:30.179864] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xda2130): datao=0, datal=4096, cccid=4 00:25:52.965 [2024-04-24 05:21:30.179872] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe0ade0) on tqpair(0xda2130): expected_datao=0, payload_size=4096 00:25:52.965 [2024-04-24 05:21:30.179880] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.965 [2024-04-24 05:21:30.179890] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:52.965 [2024-04-24 05:21:30.179898] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:52.965 [2024-04-24 05:21:30.179918] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.965 [2024-04-24 05:21:30.179930] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.965 [2024-04-24 05:21:30.179937] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.965 [2024-04-24 05:21:30.179943] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe0ade0) on tqpair=0xda2130 00:25:52.965 [2024-04-24 05:21:30.179957] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns 
iocs specific (timeout 30000 ms) 00:25:52.965 [2024-04-24 05:21:30.179972] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:25:52.965 [2024-04-24 05:21:30.179987] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:25:52.965 [2024-04-24 05:21:30.180002] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:52.965 [2024-04-24 05:21:30.180011] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:25:52.965 [2024-04-24 05:21:30.180019] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:25:52.965 [2024-04-24 05:21:30.180027] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:25:52.965 [2024-04-24 05:21:30.180035] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:25:52.965 [2024-04-24 05:21:30.180054] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.965 [2024-04-24 05:21:30.180063] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xda2130) 00:25:52.965 [2024-04-24 05:21:30.180073] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.965 [2024-04-24 05:21:30.180085] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.965 [2024-04-24 05:21:30.180091] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.965 [2024-04-24 05:21:30.180098] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=5 on tqpair(0xda2130) 00:25:52.965 [2024-04-24 05:21:30.180107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:52.965 [2024-04-24 05:21:30.180132] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe0ade0, cid 4, qid 0 00:25:52.965 [2024-04-24 05:21:30.180159] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe0af40, cid 5, qid 0 00:25:52.965 [2024-04-24 05:21:30.180375] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.966 [2024-04-24 05:21:30.180390] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.966 [2024-04-24 05:21:30.180397] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.966 [2024-04-24 05:21:30.180404] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe0ade0) on tqpair=0xda2130 00:25:52.966 [2024-04-24 05:21:30.180414] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.966 [2024-04-24 05:21:30.180424] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.966 [2024-04-24 05:21:30.180430] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.966 [2024-04-24 05:21:30.180437] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe0af40) on tqpair=0xda2130 00:25:52.966 [2024-04-24 05:21:30.180453] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.966 [2024-04-24 05:21:30.180461] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xda2130) 00:25:52.966 [2024-04-24 05:21:30.180472] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.966 [2024-04-24 05:21:30.180493] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe0af40, cid 5, qid 0 00:25:52.966 [2024-04-24 
05:21:30.180663] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.966 [2024-04-24 05:21:30.180678] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.966 [2024-04-24 05:21:30.180686] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.966 [2024-04-24 05:21:30.180692] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe0af40) on tqpair=0xda2130 00:25:52.966 [2024-04-24 05:21:30.180709] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.966 [2024-04-24 05:21:30.180718] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xda2130) 00:25:52.966 [2024-04-24 05:21:30.180729] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.966 [2024-04-24 05:21:30.180755] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe0af40, cid 5, qid 0 00:25:52.966 [2024-04-24 05:21:30.180872] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.966 [2024-04-24 05:21:30.180887] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.966 [2024-04-24 05:21:30.180894] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.966 [2024-04-24 05:21:30.180901] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe0af40) on tqpair=0xda2130 00:25:52.966 [2024-04-24 05:21:30.180917] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.966 [2024-04-24 05:21:30.180926] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xda2130) 00:25:52.966 [2024-04-24 05:21:30.180937] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.966 [2024-04-24 05:21:30.180958] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe0af40, cid 5, qid 0 00:25:52.966 [2024-04-24 05:21:30.181073] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.966 [2024-04-24 05:21:30.181089] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.966 [2024-04-24 05:21:30.181096] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.966 [2024-04-24 05:21:30.181102] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe0af40) on tqpair=0xda2130 00:25:52.966 [2024-04-24 05:21:30.181122] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.966 [2024-04-24 05:21:30.181131] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xda2130) 00:25:52.966 [2024-04-24 05:21:30.181142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.966 [2024-04-24 05:21:30.181154] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.966 [2024-04-24 05:21:30.181161] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xda2130) 00:25:52.966 [2024-04-24 05:21:30.181170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.966 [2024-04-24 05:21:30.181182] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.966 [2024-04-24 05:21:30.181188] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xda2130) 00:25:52.966 [2024-04-24 05:21:30.181198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.966 [2024-04-24 05:21:30.181209] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.966 [2024-04-24 05:21:30.181216] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xda2130) 00:25:52.966 [2024-04-24 05:21:30.181225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.966 [2024-04-24 05:21:30.181247] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe0af40, cid 5, qid 0 00:25:52.966 [2024-04-24 05:21:30.181258] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe0ade0, cid 4, qid 0 00:25:52.966 [2024-04-24 05:21:30.181266] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe0b0a0, cid 6, qid 0 00:25:52.966 [2024-04-24 05:21:30.181274] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe0b200, cid 7, qid 0 00:25:52.966 [2024-04-24 05:21:30.181501] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:52.966 [2024-04-24 05:21:30.181516] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:52.966 [2024-04-24 05:21:30.181523] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:52.966 [2024-04-24 05:21:30.181530] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xda2130): datao=0, datal=8192, cccid=5 00:25:52.966 [2024-04-24 05:21:30.181541] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe0af40) on tqpair(0xda2130): expected_datao=0, payload_size=8192 00:25:52.966 [2024-04-24 05:21:30.181549] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.966 [2024-04-24 05:21:30.185657] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:52.966 [2024-04-24 05:21:30.185671] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:52.966 [2024-04-24 05:21:30.185681] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 
7 00:25:52.966 [2024-04-24 05:21:30.185689] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:52.966 [2024-04-24 05:21:30.185696] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:52.966 [2024-04-24 05:21:30.185702] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xda2130): datao=0, datal=512, cccid=4 00:25:52.966 [2024-04-24 05:21:30.185709] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe0ade0) on tqpair(0xda2130): expected_datao=0, payload_size=512 00:25:52.966 [2024-04-24 05:21:30.185716] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.966 [2024-04-24 05:21:30.185726] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:52.966 [2024-04-24 05:21:30.185733] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:52.966 [2024-04-24 05:21:30.185741] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:52.966 [2024-04-24 05:21:30.185750] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:52.966 [2024-04-24 05:21:30.185756] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:52.966 [2024-04-24 05:21:30.185762] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xda2130): datao=0, datal=512, cccid=6 00:25:52.966 [2024-04-24 05:21:30.185769] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe0b0a0) on tqpair(0xda2130): expected_datao=0, payload_size=512 00:25:52.966 [2024-04-24 05:21:30.185776] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.966 [2024-04-24 05:21:30.185785] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:52.966 [2024-04-24 05:21:30.185792] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:52.966 [2024-04-24 05:21:30.185800] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:52.966 [2024-04-24 05:21:30.185809] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:52.966 [2024-04-24 05:21:30.185815] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:52.966 [2024-04-24 05:21:30.185821] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xda2130): datao=0, datal=4096, cccid=7 00:25:52.966 [2024-04-24 05:21:30.185829] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe0b200) on tqpair(0xda2130): expected_datao=0, payload_size=4096 00:25:52.966 [2024-04-24 05:21:30.185835] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.966 [2024-04-24 05:21:30.185844] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:52.966 [2024-04-24 05:21:30.185852] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:52.966 [2024-04-24 05:21:30.185860] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.966 [2024-04-24 05:21:30.185869] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.966 [2024-04-24 05:21:30.185875] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.966 [2024-04-24 05:21:30.185882] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe0af40) on tqpair=0xda2130 00:25:52.966 [2024-04-24 05:21:30.185902] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.966 [2024-04-24 05:21:30.185913] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.966 [2024-04-24 05:21:30.185919] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.966 [2024-04-24 05:21:30.185925] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe0ade0) on tqpair=0xda2130 00:25:52.966 [2024-04-24 05:21:30.185939] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.966 [2024-04-24 05:21:30.185964] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.966 [2024-04-24 
05:21:30.185970] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:52.966 [2024-04-24 05:21:30.185979] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe0b0a0) on tqpair=0xda2130
00:25:52.966 [2024-04-24 05:21:30.185990] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:52.966 [2024-04-24 05:21:30.185999] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:52.966 [2024-04-24 05:21:30.186005] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:52.966 [2024-04-24 05:21:30.186011] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe0b200) on tqpair=0xda2130
00:25:52.966 =====================================================
00:25:52.966 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:52.966 =====================================================
00:25:52.966 Controller Capabilities/Features
00:25:52.966 ================================
00:25:52.966 Vendor ID: 8086
00:25:52.966 Subsystem Vendor ID: 8086
00:25:52.966 Serial Number: SPDK00000000000001
00:25:52.966 Model Number: SPDK bdev Controller
00:25:52.966 Firmware Version: 24.05
00:25:52.966 Recommended Arb Burst: 6
00:25:52.966 IEEE OUI Identifier: e4 d2 5c
00:25:52.966 Multi-path I/O
00:25:52.966 May have multiple subsystem ports: Yes
00:25:52.966 May have multiple controllers: Yes
00:25:52.966 Associated with SR-IOV VF: No
00:25:52.966 Max Data Transfer Size: 131072
00:25:52.966 Max Number of Namespaces: 32
00:25:52.966 Max Number of I/O Queues: 127
00:25:52.966 NVMe Specification Version (VS): 1.3
00:25:52.967 NVMe Specification Version (Identify): 1.3
00:25:52.967 Maximum Queue Entries: 128
00:25:52.967 Contiguous Queues Required: Yes
00:25:52.967 Arbitration Mechanisms Supported
00:25:52.967 Weighted Round Robin: Not Supported
00:25:52.967 Vendor Specific: Not Supported
00:25:52.967 Reset Timeout: 15000 ms
00:25:52.967 Doorbell Stride: 4 bytes
00:25:52.967 NVM Subsystem Reset: Not Supported
00:25:52.967 Command Sets Supported
00:25:52.967 NVM Command Set: Supported
00:25:52.967 Boot Partition: Not Supported
00:25:52.967 Memory Page Size Minimum: 4096 bytes
00:25:52.967 Memory Page Size Maximum: 4096 bytes
00:25:52.967 Persistent Memory Region: Not Supported
00:25:52.967 Optional Asynchronous Events Supported
00:25:52.967 Namespace Attribute Notices: Supported
00:25:52.967 Firmware Activation Notices: Not Supported
00:25:52.967 ANA Change Notices: Not Supported
00:25:52.967 PLE Aggregate Log Change Notices: Not Supported
00:25:52.967 LBA Status Info Alert Notices: Not Supported
00:25:52.967 EGE Aggregate Log Change Notices: Not Supported
00:25:52.967 Normal NVM Subsystem Shutdown event: Not Supported
00:25:52.967 Zone Descriptor Change Notices: Not Supported
00:25:52.967 Discovery Log Change Notices: Not Supported
00:25:52.967 Controller Attributes
00:25:52.967 128-bit Host Identifier: Supported
00:25:52.967 Non-Operational Permissive Mode: Not Supported
00:25:52.967 NVM Sets: Not Supported
00:25:52.967 Read Recovery Levels: Not Supported
00:25:52.967 Endurance Groups: Not Supported
00:25:52.967 Predictable Latency Mode: Not Supported
00:25:52.967 Traffic Based Keep ALive: Not Supported
00:25:52.967 Namespace Granularity: Not Supported
00:25:52.967 SQ Associations: Not Supported
00:25:52.967 UUID List: Not Supported
00:25:52.967 Multi-Domain Subsystem: Not Supported
00:25:52.967 Fixed Capacity Management: Not Supported
00:25:52.967 Variable Capacity Management: Not Supported
00:25:52.967 Delete Endurance Group: Not Supported
00:25:52.967 Delete NVM Set: Not Supported
00:25:52.967 Extended LBA Formats Supported: Not Supported
00:25:52.967 Flexible Data Placement Supported: Not Supported
00:25:52.967
00:25:52.967 Controller Memory Buffer Support
00:25:52.967 ================================
00:25:52.967 Supported: No
00:25:52.967
00:25:52.967 Persistent Memory Region Support
00:25:52.967 ================================
00:25:52.967 Supported: No
00:25:52.967
00:25:52.967 Admin Command Set Attributes
00:25:52.967 ============================
00:25:52.967 Security Send/Receive: Not Supported
00:25:52.967 Format NVM: Not Supported
00:25:52.967 Firmware Activate/Download: Not Supported
00:25:52.967 Namespace Management: Not Supported
00:25:52.967 Device Self-Test: Not Supported
00:25:52.967 Directives: Not Supported
00:25:52.967 NVMe-MI: Not Supported
00:25:52.967 Virtualization Management: Not Supported
00:25:52.967 Doorbell Buffer Config: Not Supported
00:25:52.967 Get LBA Status Capability: Not Supported
00:25:52.967 Command & Feature Lockdown Capability: Not Supported
00:25:52.967 Abort Command Limit: 4
00:25:52.967 Async Event Request Limit: 4
00:25:52.967 Number of Firmware Slots: N/A
00:25:52.967 Firmware Slot 1 Read-Only: N/A
00:25:52.967 Firmware Activation Without Reset: N/A
00:25:52.967 Multiple Update Detection Support: N/A
00:25:52.967 Firmware Update Granularity: No Information Provided
00:25:52.967 Per-Namespace SMART Log: No
00:25:52.967 Asymmetric Namespace Access Log Page: Not Supported
00:25:52.967 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:25:52.967 Command Effects Log Page: Supported
00:25:52.967 Get Log Page Extended Data: Supported
00:25:52.967 Telemetry Log Pages: Not Supported
00:25:52.967 Persistent Event Log Pages: Not Supported
00:25:52.967 Supported Log Pages Log Page: May Support
00:25:52.967 Commands Supported & Effects Log Page: Not Supported
00:25:52.967 Feature Identifiers & Effects Log Page:May Support
00:25:52.967 NVMe-MI Commands & Effects Log Page: May Support
00:25:52.967 Data Area 4 for Telemetry Log: Not Supported
00:25:52.967 Error Log Page Entries Supported: 128
00:25:52.967 Keep Alive: Supported
00:25:52.967 Keep Alive Granularity: 10000 ms
00:25:52.967
00:25:52.967 NVM Command Set Attributes
00:25:52.967 ==========================
00:25:52.967 Submission Queue Entry Size
00:25:52.967 Max: 64
00:25:52.967 Min: 64
00:25:52.967 Completion Queue Entry Size
00:25:52.967 Max: 16
00:25:52.967 Min: 16
00:25:52.967 Number of Namespaces: 32
00:25:52.967 Compare Command: Supported
00:25:52.967 Write Uncorrectable Command: Not Supported
00:25:52.967 Dataset Management Command: Supported
00:25:52.967 Write Zeroes Command: Supported
00:25:52.967 Set Features Save Field: Not Supported
00:25:52.967 Reservations: Supported
00:25:52.967 Timestamp: Not Supported
00:25:52.967 Copy: Supported
00:25:52.967 Volatile Write Cache: Present
00:25:52.967 Atomic Write Unit (Normal): 1
00:25:52.967 Atomic Write Unit (PFail): 1
00:25:52.967 Atomic Compare & Write Unit: 1
00:25:52.967 Fused Compare & Write: Supported
00:25:52.967 Scatter-Gather List
00:25:52.967 SGL Command Set: Supported
00:25:52.967 SGL Keyed: Supported
00:25:52.967 SGL Bit Bucket Descriptor: Not Supported
00:25:52.967 SGL Metadata Pointer: Not Supported
00:25:52.967 Oversized SGL: Not Supported
00:25:52.967 SGL Metadata Address: Not Supported
00:25:52.967 SGL Offset: Supported
00:25:52.967 Transport SGL Data Block: Not Supported
00:25:52.967 Replay Protected Memory Block: Not Supported
00:25:52.967
00:25:52.967 Firmware Slot Information
00:25:52.967 =========================
00:25:52.967 Active slot: 1
00:25:52.967 Slot 1 Firmware Revision: 24.05
00:25:52.967
00:25:52.967
00:25:52.967 Commands Supported and Effects
00:25:52.967 ==============================
00:25:52.967 Admin Commands
00:25:52.967 --------------
00:25:52.967 Get Log Page (02h): Supported
00:25:52.967 Identify (06h): Supported
00:25:52.967 Abort (08h): Supported
00:25:52.967 Set Features (09h): Supported
00:25:52.967 Get Features (0Ah): Supported
00:25:52.967 Asynchronous Event Request (0Ch): Supported
00:25:52.967 Keep Alive (18h): Supported
00:25:52.967 I/O Commands
00:25:52.967 ------------
00:25:52.967 Flush (00h): Supported LBA-Change
00:25:52.967 Write (01h): Supported LBA-Change
00:25:52.967 Read (02h): Supported
00:25:52.967 Compare (05h): Supported
00:25:52.967 Write Zeroes (08h): Supported LBA-Change
00:25:52.967 Dataset Management (09h): Supported LBA-Change
00:25:52.967 Copy (19h): Supported LBA-Change
00:25:52.967 Unknown (79h): Supported LBA-Change
00:25:52.967 Unknown (7Ah): Supported
00:25:52.967
00:25:52.967 Error Log
00:25:52.967 =========
00:25:52.967
00:25:52.967 Arbitration
00:25:52.967 ===========
00:25:52.967 Arbitration Burst: 1
00:25:52.967
00:25:52.967 Power Management
00:25:52.967 ================
00:25:52.967 Number of Power States: 1
00:25:52.967 Current Power State: Power State #0
00:25:52.967 Power State #0:
00:25:52.967 Max Power: 0.00 W
00:25:52.967 Non-Operational State: Operational
00:25:52.967 Entry Latency: Not Reported
00:25:52.967 Exit Latency: Not Reported
00:25:52.967 Relative Read Throughput: 0
00:25:52.967 Relative Read Latency: 0
00:25:52.967 Relative Write Throughput: 0
00:25:52.967 Relative Write Latency: 0
00:25:52.967 Idle Power: Not Reported
00:25:52.967 Active Power: Not Reported
00:25:52.967 Non-Operational Permissive Mode: Not Supported
00:25:52.967
00:25:52.967 Health Information
00:25:52.967 ==================
00:25:52.967 Critical Warnings:
00:25:52.967 Available Spare Space: OK
00:25:52.967 Temperature: OK
00:25:52.967 Device Reliability: OK
00:25:52.967 Read Only: No
00:25:52.967 Volatile Memory Backup: OK
00:25:52.967 Current Temperature: 0 Kelvin (-273 Celsius)
00:25:52.967 Temperature Threshold: [2024-04-24 05:21:30.186147] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:52.967 [2024-04-24 05:21:30.186159] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xda2130)
00:25:52.967 [2024-04-24 05:21:30.186171] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:52.967 [2024-04-24 05:21:30.186195] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe0b200, cid 7, qid 0
00:25:52.967 [2024-04-24 05:21:30.186355]
nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.967 [2024-04-24 05:21:30.186371] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.967 [2024-04-24 05:21:30.186378] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.967 [2024-04-24 05:21:30.186385] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe0b200) on tqpair=0xda2130 00:25:52.967 [2024-04-24 05:21:30.186424] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:25:52.967 [2024-04-24 05:21:30.186445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.967 [2024-04-24 05:21:30.186457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.967 [2024-04-24 05:21:30.186466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.968 [2024-04-24 05:21:30.186476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.968 [2024-04-24 05:21:30.186488] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.968 [2024-04-24 05:21:30.186496] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.968 [2024-04-24 05:21:30.186502] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda2130) 00:25:52.968 [2024-04-24 05:21:30.186513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.968 [2024-04-24 05:21:30.186536] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe0ac80, cid 3, qid 0 00:25:52.968 [2024-04-24 05:21:30.186690] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: 
pdu type = 5 00:25:52.968 [2024-04-24 05:21:30.186705] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.968 [2024-04-24 05:21:30.186712] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.968 [2024-04-24 05:21:30.186719] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe0ac80) on tqpair=0xda2130 00:25:52.968 [2024-04-24 05:21:30.186730] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.968 [2024-04-24 05:21:30.186738] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.968 [2024-04-24 05:21:30.186744] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda2130) 00:25:52.968 [2024-04-24 05:21:30.186755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.968 [2024-04-24 05:21:30.186783] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe0ac80, cid 3, qid 0 00:25:52.968 [2024-04-24 05:21:30.186920] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.968 [2024-04-24 05:21:30.186935] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.968 [2024-04-24 05:21:30.186942] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.968 [2024-04-24 05:21:30.186952] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe0ac80) on tqpair=0xda2130 00:25:52.968 [2024-04-24 05:21:30.186961] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:25:52.968 [2024-04-24 05:21:30.186969] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:25:52.968 [2024-04-24 05:21:30.186985] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.968 [2024-04-24 05:21:30.186994] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:25:52.968 [2024-04-24 05:21:30.187000] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda2130) 00:25:52.968 [2024-04-24 05:21:30.187011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.968 [2024-04-24 05:21:30.187032] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe0ac80, cid 3, qid 0 00:25:52.968 [2024-04-24 05:21:30.187157] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.968 [2024-04-24 05:21:30.187173] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.968 [2024-04-24 05:21:30.187180] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.968 [2024-04-24 05:21:30.187186] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe0ac80) on tqpair=0xda2130 00:25:52.968 [2024-04-24 05:21:30.187203] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.968 [2024-04-24 05:21:30.187212] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.968 [2024-04-24 05:21:30.187219] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda2130) 00:25:52.968 [2024-04-24 05:21:30.187229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.968 [2024-04-24 05:21:30.187250] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe0ac80, cid 3, qid 0 00:25:52.968 [2024-04-24 05:21:30.187362] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.968 [2024-04-24 05:21:30.187374] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.968 [2024-04-24 05:21:30.187381] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.968 [2024-04-24 05:21:30.187388] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete 
tcp_req(0xe0ac80) on tqpair=0xda2130 00:25:52.968 [2024-04-24 05:21:30.187404] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.968 [2024-04-24 05:21:30.187413] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.968 [2024-04-24 05:21:30.187419] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda2130) 00:25:52.968 [2024-04-24 05:21:30.187430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.968 [2024-04-24 05:21:30.187451] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe0ac80, cid 3, qid 0 00:25:52.968 [2024-04-24 05:21:30.187561] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.968 [2024-04-24 05:21:30.187574] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.968 [2024-04-24 05:21:30.187581] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.968 [2024-04-24 05:21:30.187588] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe0ac80) on tqpair=0xda2130 00:25:52.968 [2024-04-24 05:21:30.187603] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.968 [2024-04-24 05:21:30.187613] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.968 [2024-04-24 05:21:30.187619] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda2130) 00:25:52.968 [2024-04-24 05:21:30.187637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.968 [2024-04-24 05:21:30.187660] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe0ac80, cid 3, qid 0 00:25:52.968 [2024-04-24 05:21:30.187776] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.968 [2024-04-24 05:21:30.187792] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:25:52.968 [2024-04-24 05:21:30.187800] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.968 [2024-04-24 05:21:30.187806] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe0ac80) on tqpair=0xda2130 00:25:52.968 [2024-04-24 05:21:30.187822] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.968 [2024-04-24 05:21:30.187831] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.968 [2024-04-24 05:21:30.187838] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda2130) 00:25:52.968 [2024-04-24 05:21:30.187848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.968 [2024-04-24 05:21:30.187869] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe0ac80, cid 3, qid 0 00:25:52.968 [2024-04-24 05:21:30.187990] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.968 [2024-04-24 05:21:30.188005] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.968 [2024-04-24 05:21:30.188012] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.968 [2024-04-24 05:21:30.188019] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe0ac80) on tqpair=0xda2130 00:25:52.968 [2024-04-24 05:21:30.188035] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.968 [2024-04-24 05:21:30.188045] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.968 [2024-04-24 05:21:30.188051] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda2130) 00:25:52.968 [2024-04-24 05:21:30.188062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.968 [2024-04-24 05:21:30.188083] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe0ac80, cid 3, qid 0 00:25:52.968 [2024-04-24 05:21:30.188196] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.968 [2024-04-24 05:21:30.188209] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.968 [2024-04-24 05:21:30.188216] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.968 [2024-04-24 05:21:30.188223] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe0ac80) on tqpair=0xda2130 00:25:52.968 [2024-04-24 05:21:30.188238] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.968 [2024-04-24 05:21:30.188248] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.968 [2024-04-24 05:21:30.188254] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda2130) 00:25:52.968 [2024-04-24 05:21:30.188264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.968 [2024-04-24 05:21:30.188285] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe0ac80, cid 3, qid 0 00:25:52.968 [2024-04-24 05:21:30.188396] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.968 [2024-04-24 05:21:30.188409] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.968 [2024-04-24 05:21:30.188416] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.968 [2024-04-24 05:21:30.188422] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe0ac80) on tqpair=0xda2130 00:25:52.968 [2024-04-24 05:21:30.188438] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.968 [2024-04-24 05:21:30.188447] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.968 [2024-04-24 05:21:30.188453] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0xda2130) 00:25:52.968 [2024-04-24 05:21:30.188464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.968 [2024-04-24 05:21:30.188485] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe0ac80, cid 3, qid 0 00:25:52.968 [2024-04-24 05:21:30.188602] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.968 [2024-04-24 05:21:30.188617] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.968 [2024-04-24 05:21:30.188637] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.968 [2024-04-24 05:21:30.188653] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe0ac80) on tqpair=0xda2130 00:25:52.968 [2024-04-24 05:21:30.188673] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.968 [2024-04-24 05:21:30.188683] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.968 [2024-04-24 05:21:30.188689] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda2130) 00:25:52.968 [2024-04-24 05:21:30.188700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.968 [2024-04-24 05:21:30.188722] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe0ac80, cid 3, qid 0 00:25:52.968 [2024-04-24 05:21:30.188841] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.968 [2024-04-24 05:21:30.188856] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.968 [2024-04-24 05:21:30.188863] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.968 [2024-04-24 05:21:30.188870] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe0ac80) on tqpair=0xda2130 00:25:52.968 [2024-04-24 05:21:30.188886] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:25:52.968 [2024-04-24 05:21:30.188896] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.968 [2024-04-24 05:21:30.188902] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda2130) 00:25:52.968 [2024-04-24 05:21:30.188913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.968 [2024-04-24 05:21:30.188934] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe0ac80, cid 3, qid 0 00:25:52.968 [2024-04-24 05:21:30.189057] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.969 [2024-04-24 05:21:30.189069] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.969 [2024-04-24 05:21:30.189076] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.969 [2024-04-24 05:21:30.189083] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe0ac80) on tqpair=0xda2130 00:25:52.969 [2024-04-24 05:21:30.189099] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.969 [2024-04-24 05:21:30.189108] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.969 [2024-04-24 05:21:30.189114] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda2130) 00:25:52.969 [2024-04-24 05:21:30.189124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.969 [2024-04-24 05:21:30.189145] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe0ac80, cid 3, qid 0 00:25:52.969 [2024-04-24 05:21:30.189267] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.969 [2024-04-24 05:21:30.189279] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.969 [2024-04-24 05:21:30.189286] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:25:52.969 [2024-04-24 05:21:30.189293] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe0ac80) on tqpair=0xda2130 00:25:52.969 [2024-04-24 05:21:30.189308] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.969 [2024-04-24 05:21:30.189317] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.969 [2024-04-24 05:21:30.189324] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda2130) 00:25:52.969 [2024-04-24 05:21:30.189334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.969 [2024-04-24 05:21:30.189355] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe0ac80, cid 3, qid 0 00:25:52.969 [2024-04-24 05:21:30.189478] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.969 [2024-04-24 05:21:30.189491] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.969 [2024-04-24 05:21:30.189498] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.969 [2024-04-24 05:21:30.189508] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe0ac80) on tqpair=0xda2130 00:25:52.969 [2024-04-24 05:21:30.189525] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.969 [2024-04-24 05:21:30.189534] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.969 [2024-04-24 05:21:30.189540] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda2130) 00:25:52.969 [2024-04-24 05:21:30.189551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.969 [2024-04-24 05:21:30.189572] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe0ac80, cid 3, qid 0 00:25:52.969 [2024-04-24 05:21:30.193646] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:25:52.969 [2024-04-24 05:21:30.193663] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.969 [2024-04-24 05:21:30.193670] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.969 [2024-04-24 05:21:30.193677] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe0ac80) on tqpair=0xda2130 00:25:52.969 [2024-04-24 05:21:30.193694] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:52.969 [2024-04-24 05:21:30.193718] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:52.969 [2024-04-24 05:21:30.193724] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda2130) 00:25:52.969 [2024-04-24 05:21:30.193735] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.969 [2024-04-24 05:21:30.193758] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe0ac80, cid 3, qid 0 00:25:52.969 [2024-04-24 05:21:30.193912] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:52.969 [2024-04-24 05:21:30.193925] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:52.969 [2024-04-24 05:21:30.193932] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:52.969 [2024-04-24 05:21:30.193939] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe0ac80) on tqpair=0xda2130 00:25:52.969 [2024-04-24 05:21:30.193952] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:25:52.969 0 Kelvin (-273 Celsius) 00:25:52.969 Available Spare: 0% 00:25:52.969 Available Spare Threshold: 0% 00:25:52.969 Life Percentage Used: 0% 00:25:52.969 Data Units Read: 0 00:25:52.969 Data Units Written: 0 00:25:52.969 Host Read Commands: 0 00:25:52.969 Host Write Commands: 0 00:25:52.969 Controller Busy Time: 0 minutes 
00:25:52.969 Power Cycles: 0 00:25:52.969 Power On Hours: 0 hours 00:25:52.969 Unsafe Shutdowns: 0 00:25:52.969 Unrecoverable Media Errors: 0 00:25:52.969 Lifetime Error Log Entries: 0 00:25:52.969 Warning Temperature Time: 0 minutes 00:25:52.969 Critical Temperature Time: 0 minutes 00:25:52.969 00:25:52.969 Number of Queues 00:25:52.969 ================ 00:25:52.969 Number of I/O Submission Queues: 127 00:25:52.969 Number of I/O Completion Queues: 127 00:25:52.969 00:25:52.969 Active Namespaces 00:25:52.969 ================= 00:25:52.969 Namespace ID:1 00:25:52.969 Error Recovery Timeout: Unlimited 00:25:52.969 Command Set Identifier: NVM (00h) 00:25:52.969 Deallocate: Supported 00:25:52.969 Deallocated/Unwritten Error: Not Supported 00:25:52.969 Deallocated Read Value: Unknown 00:25:52.969 Deallocate in Write Zeroes: Not Supported 00:25:52.969 Deallocated Guard Field: 0xFFFF 00:25:52.969 Flush: Supported 00:25:52.969 Reservation: Supported 00:25:52.969 Namespace Sharing Capabilities: Multiple Controllers 00:25:52.969 Size (in LBAs): 131072 (0GiB) 00:25:52.969 Capacity (in LBAs): 131072 (0GiB) 00:25:52.969 Utilization (in LBAs): 131072 (0GiB) 00:25:52.969 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:52.969 EUI64: ABCDEF0123456789 00:25:52.969 UUID: 154cec22-80fb-48a7-8259-12e9309fbe56 00:25:52.969 Thin Provisioning: Not Supported 00:25:52.969 Per-NS Atomic Units: Yes 00:25:52.969 Atomic Boundary Size (Normal): 0 00:25:52.969 Atomic Boundary Size (PFail): 0 00:25:52.969 Atomic Boundary Offset: 0 00:25:52.969 Maximum Single Source Range Length: 65535 00:25:52.969 Maximum Copy Length: 65535 00:25:52.969 Maximum Source Range Count: 1 00:25:52.969 NGUID/EUI64 Never Reused: No 00:25:52.969 Namespace Write Protected: No 00:25:52.969 Number of LBA Formats: 1 00:25:52.969 Current LBA Format: LBA Format #00 00:25:52.969 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:52.969 00:25:52.969 05:21:30 -- host/identify.sh@51 -- # sync 00:25:52.969 05:21:30 -- 
host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:52.969 05:21:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:52.969 05:21:30 -- common/autotest_common.sh@10 -- # set +x 00:25:52.969 05:21:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:52.969 05:21:30 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:52.969 05:21:30 -- host/identify.sh@56 -- # nvmftestfini 00:25:52.969 05:21:30 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:52.969 05:21:30 -- nvmf/common.sh@117 -- # sync 00:25:52.969 05:21:30 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:52.969 05:21:30 -- nvmf/common.sh@120 -- # set +e 00:25:52.969 05:21:30 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:52.969 05:21:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:52.969 rmmod nvme_tcp 00:25:53.228 rmmod nvme_fabrics 00:25:53.228 rmmod nvme_keyring 00:25:53.228 05:21:30 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:53.228 05:21:30 -- nvmf/common.sh@124 -- # set -e 00:25:53.228 05:21:30 -- nvmf/common.sh@125 -- # return 0 00:25:53.228 05:21:30 -- nvmf/common.sh@478 -- # '[' -n 1966151 ']' 00:25:53.228 05:21:30 -- nvmf/common.sh@479 -- # killprocess 1966151 00:25:53.228 05:21:30 -- common/autotest_common.sh@936 -- # '[' -z 1966151 ']' 00:25:53.228 05:21:30 -- common/autotest_common.sh@940 -- # kill -0 1966151 00:25:53.228 05:21:30 -- common/autotest_common.sh@941 -- # uname 00:25:53.228 05:21:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:53.228 05:21:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1966151 00:25:53.228 05:21:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:53.228 05:21:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:53.228 05:21:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1966151' 00:25:53.228 killing process with pid 1966151 00:25:53.228 05:21:30 -- common/autotest_common.sh@955 
-- # kill 1966151 00:25:53.228 [2024-04-24 05:21:30.308313] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:53.228 05:21:30 -- common/autotest_common.sh@960 -- # wait 1966151 00:25:53.489 05:21:30 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:53.489 05:21:30 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:53.489 05:21:30 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:53.489 05:21:30 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:53.489 05:21:30 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:53.489 05:21:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:53.489 05:21:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:53.489 05:21:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:55.397 05:21:32 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:55.397 00:25:55.397 real 0m5.490s 00:25:55.397 user 0m4.406s 00:25:55.397 sys 0m1.915s 00:25:55.397 05:21:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:55.397 05:21:32 -- common/autotest_common.sh@10 -- # set +x 00:25:55.397 ************************************ 00:25:55.397 END TEST nvmf_identify 00:25:55.397 ************************************ 00:25:55.397 05:21:32 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:55.397 05:21:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:55.397 05:21:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:55.397 05:21:32 -- common/autotest_common.sh@10 -- # set +x 00:25:55.656 ************************************ 00:25:55.656 START TEST nvmf_perf 00:25:55.656 ************************************ 00:25:55.656 05:21:32 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:55.656 * Looking for test storage... 00:25:55.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:55.656 05:21:32 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:55.657 05:21:32 -- nvmf/common.sh@7 -- # uname -s 00:25:55.657 05:21:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:55.657 05:21:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:55.657 05:21:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:55.657 05:21:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:55.657 05:21:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:55.657 05:21:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:55.657 05:21:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:55.657 05:21:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:55.657 05:21:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:55.657 05:21:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:55.657 05:21:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:55.657 05:21:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:55.657 05:21:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:55.657 05:21:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:55.657 05:21:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:55.657 05:21:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:55.657 05:21:32 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:55.657 05:21:32 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:55.657 05:21:32 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:55.657 
05:21:32 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:55.657 05:21:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.657 05:21:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.657 05:21:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.657 05:21:32 -- paths/export.sh@5 -- # export PATH 00:25:55.657 05:21:32 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.657 05:21:32 -- nvmf/common.sh@47 -- # : 0 00:25:55.657 05:21:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:55.657 05:21:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:55.657 05:21:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:55.657 05:21:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:55.657 05:21:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:55.657 05:21:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:55.657 05:21:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:55.657 05:21:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:55.657 05:21:32 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:55.657 05:21:32 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:55.657 05:21:32 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:55.657 05:21:32 -- host/perf.sh@17 -- # nvmftestinit 00:25:55.657 05:21:32 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:55.657 05:21:32 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:55.657 05:21:32 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:55.657 05:21:32 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:55.657 05:21:32 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:55.657 05:21:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:55.657 05:21:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:25:55.657 05:21:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:55.657 05:21:32 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:55.657 05:21:32 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:55.657 05:21:32 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:55.657 05:21:32 -- common/autotest_common.sh@10 -- # set +x 00:25:57.565 05:21:34 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:57.565 05:21:34 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:57.565 05:21:34 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:57.565 05:21:34 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:57.565 05:21:34 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:57.565 05:21:34 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:57.565 05:21:34 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:57.565 05:21:34 -- nvmf/common.sh@295 -- # net_devs=() 00:25:57.565 05:21:34 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:57.565 05:21:34 -- nvmf/common.sh@296 -- # e810=() 00:25:57.565 05:21:34 -- nvmf/common.sh@296 -- # local -ga e810 00:25:57.565 05:21:34 -- nvmf/common.sh@297 -- # x722=() 00:25:57.565 05:21:34 -- nvmf/common.sh@297 -- # local -ga x722 00:25:57.565 05:21:34 -- nvmf/common.sh@298 -- # mlx=() 00:25:57.565 05:21:34 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:57.565 05:21:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:57.565 05:21:34 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:57.565 05:21:34 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:57.565 05:21:34 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:57.565 05:21:34 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:57.565 05:21:34 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:57.565 05:21:34 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:57.565 05:21:34 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:57.565 05:21:34 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:57.565 05:21:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:57.565 05:21:34 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:57.565 05:21:34 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:57.565 05:21:34 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:57.565 05:21:34 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:57.565 05:21:34 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:57.565 05:21:34 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:57.565 05:21:34 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:57.565 05:21:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:57.565 05:21:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:57.565 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:57.565 05:21:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:57.565 05:21:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:57.565 05:21:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:57.565 05:21:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:57.565 05:21:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:57.565 05:21:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:57.565 05:21:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:57.565 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:57.565 05:21:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:57.565 05:21:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:57.565 05:21:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:57.565 05:21:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:57.565 05:21:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:57.565 05:21:34 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:57.565 
05:21:34 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:57.565 05:21:34 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:57.565 05:21:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:57.565 05:21:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:57.565 05:21:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:57.565 05:21:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:57.565 05:21:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:57.565 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:57.565 05:21:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:57.565 05:21:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:57.565 05:21:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:57.565 05:21:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:57.565 05:21:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:57.565 05:21:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:57.565 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:57.565 05:21:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:57.565 05:21:34 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:57.565 05:21:34 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:57.565 05:21:34 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:57.565 05:21:34 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:57.565 05:21:34 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:57.565 05:21:34 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:57.565 05:21:34 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:57.565 05:21:34 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:57.565 05:21:34 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:57.565 05:21:34 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:57.565 05:21:34 -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:57.565 05:21:34 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:57.566 05:21:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:57.566 05:21:34 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:57.566 05:21:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:57.566 05:21:34 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:57.566 05:21:34 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:57.566 05:21:34 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:57.566 05:21:34 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:57.566 05:21:34 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:57.566 05:21:34 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:57.566 05:21:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:57.566 05:21:34 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:57.566 05:21:34 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:57.566 05:21:34 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:57.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:57.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:25:57.566 00:25:57.566 --- 10.0.0.2 ping statistics --- 00:25:57.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:57.566 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:25:57.566 05:21:34 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:57.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:57.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:25:57.566 00:25:57.566 --- 10.0.0.1 ping statistics --- 00:25:57.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:57.566 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:25:57.566 05:21:34 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:57.566 05:21:34 -- nvmf/common.sh@411 -- # return 0 00:25:57.566 05:21:34 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:57.566 05:21:34 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:57.566 05:21:34 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:57.566 05:21:34 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:57.566 05:21:34 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:57.566 05:21:34 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:57.566 05:21:34 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:57.566 05:21:34 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:57.566 05:21:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:57.566 05:21:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:57.566 05:21:34 -- common/autotest_common.sh@10 -- # set +x 00:25:57.566 05:21:34 -- nvmf/common.sh@470 -- # nvmfpid=1968170 00:25:57.566 05:21:34 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:57.566 05:21:34 -- nvmf/common.sh@471 -- # waitforlisten 1968170 00:25:57.566 05:21:34 -- common/autotest_common.sh@817 -- # '[' -z 1968170 ']' 00:25:57.566 05:21:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:57.566 05:21:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:57.566 05:21:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:57.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:57.566 05:21:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:57.566 05:21:34 -- common/autotest_common.sh@10 -- # set +x 00:25:57.823 [2024-04-24 05:21:34.872412] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:25:57.823 [2024-04-24 05:21:34.872488] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:57.823 EAL: No free 2048 kB hugepages reported on node 1 00:25:57.823 [2024-04-24 05:21:34.910146] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:57.823 [2024-04-24 05:21:34.942043] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:57.823 [2024-04-24 05:21:35.033374] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:57.823 [2024-04-24 05:21:35.033429] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:57.824 [2024-04-24 05:21:35.033444] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:57.824 [2024-04-24 05:21:35.033456] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:57.824 [2024-04-24 05:21:35.033467] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
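[editor's note] Condensed from the nvmf_tcp_init steps traced above: one port (cvl_0_0) is moved into a private network namespace to act as the target, the other (cvl_0_1) stays in the host namespace as the initiator, both get addresses on 10.0.0.0/24, and TCP/4420 is opened. A sketch of that wiring, replayed from this trace (requires root; the cvl_0_* names are the ones this run discovered):

```shell
# Target port lives in its own netns so initiator (host) and target
# (namespace) traffic crosses real NICs instead of loopback.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator
```

The two pings above are the reachability check whose output appears in the trace; nvmf_tgt is then launched inside the namespace via `ip netns exec cvl_0_0_ns_spdk`.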
00:25:57.824 [2024-04-24 05:21:35.033554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:57.824 [2024-04-24 05:21:35.033588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:57.824 [2024-04-24 05:21:35.033638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:57.824 [2024-04-24 05:21:35.033641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:58.081 05:21:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:58.081 05:21:35 -- common/autotest_common.sh@850 -- # return 0 00:25:58.081 05:21:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:58.081 05:21:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:58.081 05:21:35 -- common/autotest_common.sh@10 -- # set +x 00:25:58.081 05:21:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:58.081 05:21:35 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:58.081 05:21:35 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:26:01.363 05:21:38 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:26:01.363 05:21:38 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:26:01.363 05:21:38 -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:26:01.363 05:21:38 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:01.620 05:21:38 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:26:01.621 05:21:38 -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:26:01.621 05:21:38 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:26:01.621 05:21:38 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:26:01.621 05:21:38 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_transport -t tcp -o 00:26:01.878 [2024-04-24 05:21:39.056655] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:01.878 05:21:39 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:02.136 05:21:39 -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:02.136 05:21:39 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:02.393 05:21:39 -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:02.393 05:21:39 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:02.650 05:21:39 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:02.908 [2024-04-24 05:21:40.044212] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:02.908 05:21:40 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:03.165 05:21:40 -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:26:03.165 05:21:40 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:26:03.165 05:21:40 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:26:03.165 05:21:40 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:26:04.536 Initializing NVMe Controllers 00:26:04.536 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:26:04.536 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:26:04.536 Initialization complete. Launching workers. 
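[editor's note] The RPC provisioning just traced, replayed in order ($rpc standing in for the scripts/rpc.py path used in this workspace; an assumed shorthand, not a variable from the test):

```shell
# SPDK target provisioning, as traced above: TCP transport, one subsystem,
# two namespaces (Malloc0 + the local NVMe bdev), then data and discovery
# listeners on the namespaced address.
rpc=scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

These calls require the nvmf_tgt started earlier to be listening on /var/tmp/spdk.sock, so the sequence is shown as a sketch rather than something runnable standalone.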
00:26:04.536 ======================================================== 00:26:04.536 Latency(us) 00:26:04.536 Device Information : IOPS MiB/s Average min max 00:26:04.536 PCIE (0000:88:00.0) NSID 1 from core 0: 86298.61 337.10 370.16 31.46 6246.43 00:26:04.536 ======================================================== 00:26:04.536 Total : 86298.61 337.10 370.16 31.46 6246.43 00:26:04.536 00:26:04.536 05:21:41 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:04.536 EAL: No free 2048 kB hugepages reported on node 1 00:26:05.910 Initializing NVMe Controllers 00:26:05.910 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:05.910 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:05.910 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:05.910 Initialization complete. Launching workers. 
00:26:05.910 ======================================================== 00:26:05.910 Latency(us) 00:26:05.910 Device Information : IOPS MiB/s Average min max 00:26:05.910 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 93.67 0.37 10687.51 178.24 45706.09 00:26:05.910 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 65.77 0.26 15204.25 7948.00 47899.94 00:26:05.910 ======================================================== 00:26:05.910 Total : 159.44 0.62 12550.66 178.24 47899.94 00:26:05.910 00:26:05.910 05:21:42 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:05.910 EAL: No free 2048 kB hugepages reported on node 1 00:26:06.847 Initializing NVMe Controllers 00:26:06.847 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:06.847 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:06.847 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:06.847 Initialization complete. Launching workers. 
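[editor's note] A sanity check on these tables: the "Total" average latency is the IOPS-weighted mean of the per-namespace averages. For the qd=1 run above (93.67 IOPS at 10687.51 us, 65.77 IOPS at 15204.25 us):

```shell
# IOPS-weighted mean latency, figures copied from the qd=1 table above.
awk 'BEGIN { printf "%.0f\n", (93.67*10687.51 + 65.77*15204.25) / (93.67 + 65.77) }'
```

This prints 12551, agreeing with the reported 12550.66 up to rounding of the displayed per-namespace figures.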
00:26:06.847 ======================================================== 00:26:06.847 Latency(us) 00:26:06.847 Device Information : IOPS MiB/s Average min max 00:26:06.847 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8326.98 32.53 3855.08 385.58 7946.43 00:26:06.847 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3874.99 15.14 8294.92 4435.72 17930.14 00:26:06.847 ======================================================== 00:26:06.847 Total : 12201.98 47.66 5265.05 385.58 17930.14 00:26:06.847 00:26:06.847 05:21:44 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:26:06.847 05:21:44 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:26:06.847 05:21:44 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:06.847 EAL: No free 2048 kB hugepages reported on node 1 00:26:09.377 Initializing NVMe Controllers 00:26:09.377 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:09.377 Controller IO queue size 128, less than required. 00:26:09.377 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:09.377 Controller IO queue size 128, less than required. 00:26:09.377 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:09.377 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:09.377 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:09.377 Initialization complete. Launching workers. 
00:26:09.377 ======================================================== 00:26:09.377 Latency(us) 00:26:09.377 Device Information : IOPS MiB/s Average min max 00:26:09.377 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 986.99 246.75 132668.56 84739.52 206310.22 00:26:09.377 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 599.99 150.00 224640.25 110495.68 321647.20 00:26:09.377 ======================================================== 00:26:09.377 Total : 1586.98 396.75 167440.47 84739.52 321647.20 00:26:09.377 00:26:09.377 05:21:46 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:26:09.377 EAL: No free 2048 kB hugepages reported on node 1 00:26:09.635 No valid NVMe controllers or AIO or URING devices found 00:26:09.635 Initializing NVMe Controllers 00:26:09.635 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:09.635 Controller IO queue size 128, less than required. 00:26:09.635 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:09.635 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:26:09.635 Controller IO queue size 128, less than required. 00:26:09.635 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:09.635 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:26:09.635 WARNING: Some requested NVMe devices were skipped 00:26:09.635 05:21:46 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:26:09.635 EAL: No free 2048 kB hugepages reported on node 1 00:26:12.928 Initializing NVMe Controllers 00:26:12.928 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:12.928 Controller IO queue size 128, less than required. 00:26:12.928 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:12.928 Controller IO queue size 128, less than required. 00:26:12.928 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:12.928 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:12.928 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:12.928 Initialization complete. Launching workers. 
00:26:12.928 00:26:12.928 ==================== 00:26:12.928 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:26:12.928 TCP transport: 00:26:12.928 polls: 21724 00:26:12.928 idle_polls: 9402 00:26:12.928 sock_completions: 12322 00:26:12.928 nvme_completions: 4963 00:26:12.928 submitted_requests: 7416 00:26:12.928 queued_requests: 1 00:26:12.928 00:26:12.928 ==================== 00:26:12.928 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:26:12.928 TCP transport: 00:26:12.928 polls: 22821 00:26:12.928 idle_polls: 10005 00:26:12.928 sock_completions: 12816 00:26:12.928 nvme_completions: 4841 00:26:12.928 submitted_requests: 7242 00:26:12.928 queued_requests: 1 00:26:12.928 ======================================================== 00:26:12.928 Latency(us) 00:26:12.928 Device Information : IOPS MiB/s Average min max 00:26:12.928 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1239.21 309.80 105746.88 65379.02 172190.22 00:26:12.928 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1208.74 302.19 108353.24 53393.05 175755.91 00:26:12.928 ======================================================== 00:26:12.928 Total : 2447.95 611.99 107033.84 53393.05 175755.91 00:26:12.928 00:26:12.928 05:21:49 -- host/perf.sh@66 -- # sync 00:26:12.928 05:21:49 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:12.928 05:21:49 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:26:12.928 05:21:49 -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:26:12.928 05:21:49 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:26:16.222 05:21:53 -- host/perf.sh@72 -- # ls_guid=14d6b8ec-9dca-4306-bd42-f074a5bec8b1 00:26:16.222 05:21:53 -- host/perf.sh@73 -- # get_lvs_free_mb 14d6b8ec-9dca-4306-bd42-f074a5bec8b1 
00:26:16.222 05:21:53 -- common/autotest_common.sh@1350 -- # local lvs_uuid=14d6b8ec-9dca-4306-bd42-f074a5bec8b1 00:26:16.222 05:21:53 -- common/autotest_common.sh@1351 -- # local lvs_info 00:26:16.222 05:21:53 -- common/autotest_common.sh@1352 -- # local fc 00:26:16.222 05:21:53 -- common/autotest_common.sh@1353 -- # local cs 00:26:16.222 05:21:53 -- common/autotest_common.sh@1354 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:16.222 05:21:53 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:26:16.222 { 00:26:16.222 "uuid": "14d6b8ec-9dca-4306-bd42-f074a5bec8b1", 00:26:16.222 "name": "lvs_0", 00:26:16.222 "base_bdev": "Nvme0n1", 00:26:16.222 "total_data_clusters": 238234, 00:26:16.222 "free_clusters": 238234, 00:26:16.222 "block_size": 512, 00:26:16.222 "cluster_size": 4194304 00:26:16.222 } 00:26:16.222 ]' 00:26:16.222 05:21:53 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="14d6b8ec-9dca-4306-bd42-f074a5bec8b1") .free_clusters' 00:26:16.222 05:21:53 -- common/autotest_common.sh@1355 -- # fc=238234 00:26:16.222 05:21:53 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="14d6b8ec-9dca-4306-bd42-f074a5bec8b1") .cluster_size' 00:26:16.222 05:21:53 -- common/autotest_common.sh@1356 -- # cs=4194304 00:26:16.222 05:21:53 -- common/autotest_common.sh@1359 -- # free_mb=952936 00:26:16.222 05:21:53 -- common/autotest_common.sh@1360 -- # echo 952936 00:26:16.222 952936 00:26:16.222 05:21:53 -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:26:16.222 05:21:53 -- host/perf.sh@78 -- # free_mb=20480 00:26:16.222 05:21:53 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 14d6b8ec-9dca-4306-bd42-f074a5bec8b1 lbd_0 20480 00:26:16.791 05:21:54 -- host/perf.sh@80 -- # lb_guid=b5289a92-a17f-4792-b9a6-b6e1e0e0f934 00:26:16.791 05:21:54 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore b5289a92-a17f-4792-b9a6-b6e1e0e0f934 lvs_n_0 00:26:17.725 05:21:54 -- host/perf.sh@83 -- # ls_nested_guid=c69b48bc-4e9f-405f-87f4-bfbe1b107aa7 00:26:17.725 05:21:54 -- host/perf.sh@84 -- # get_lvs_free_mb c69b48bc-4e9f-405f-87f4-bfbe1b107aa7 00:26:17.725 05:21:54 -- common/autotest_common.sh@1350 -- # local lvs_uuid=c69b48bc-4e9f-405f-87f4-bfbe1b107aa7 00:26:17.725 05:21:54 -- common/autotest_common.sh@1351 -- # local lvs_info 00:26:17.725 05:21:54 -- common/autotest_common.sh@1352 -- # local fc 00:26:17.725 05:21:54 -- common/autotest_common.sh@1353 -- # local cs 00:26:17.725 05:21:54 -- common/autotest_common.sh@1354 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:17.982 05:21:55 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:26:17.982 { 00:26:17.982 "uuid": "14d6b8ec-9dca-4306-bd42-f074a5bec8b1", 00:26:17.982 "name": "lvs_0", 00:26:17.982 "base_bdev": "Nvme0n1", 00:26:17.982 "total_data_clusters": 238234, 00:26:17.982 "free_clusters": 233114, 00:26:17.982 "block_size": 512, 00:26:17.982 "cluster_size": 4194304 00:26:17.982 }, 00:26:17.982 { 00:26:17.982 "uuid": "c69b48bc-4e9f-405f-87f4-bfbe1b107aa7", 00:26:17.982 "name": "lvs_n_0", 00:26:17.982 "base_bdev": "b5289a92-a17f-4792-b9a6-b6e1e0e0f934", 00:26:17.982 "total_data_clusters": 5114, 00:26:17.982 "free_clusters": 5114, 00:26:17.982 "block_size": 512, 00:26:17.982 "cluster_size": 4194304 00:26:17.982 } 00:26:17.982 ]' 00:26:17.982 05:21:55 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="c69b48bc-4e9f-405f-87f4-bfbe1b107aa7") .free_clusters' 00:26:17.982 05:21:55 -- common/autotest_common.sh@1355 -- # fc=5114 00:26:17.982 05:21:55 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="c69b48bc-4e9f-405f-87f4-bfbe1b107aa7") .cluster_size' 00:26:17.982 05:21:55 -- common/autotest_common.sh@1356 -- # cs=4194304 00:26:17.982 05:21:55 -- common/autotest_common.sh@1359 -- # free_mb=20456 00:26:17.982 05:21:55 
-- common/autotest_common.sh@1360 -- # echo 20456 00:26:17.982 20456 00:26:17.982 05:21:55 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:26:17.982 05:21:55 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c69b48bc-4e9f-405f-87f4-bfbe1b107aa7 lbd_nest_0 20456 00:26:18.240 05:21:55 -- host/perf.sh@88 -- # lb_nested_guid=7100e23e-4505-4651-97b9-7f1dd0f8973d 00:26:18.240 05:21:55 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:18.498 05:21:55 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:26:18.498 05:21:55 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 7100e23e-4505-4651-97b9-7f1dd0f8973d 00:26:18.756 05:21:55 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:19.013 05:21:56 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:26:19.013 05:21:56 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:26:19.013 05:21:56 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:19.013 05:21:56 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:19.013 05:21:56 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:19.013 EAL: No free 2048 kB hugepages reported on node 1 00:26:31.225 Initializing NVMe Controllers 00:26:31.225 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:31.225 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:31.225 Initialization complete. Launching workers. 
00:26:31.225 ======================================================== 00:26:31.225 Latency(us) 00:26:31.225 Device Information : IOPS MiB/s Average min max 00:26:31.225 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 50.19 0.02 19956.56 210.34 46067.58 00:26:31.225 ======================================================== 00:26:31.225 Total : 50.19 0.02 19956.56 210.34 46067.58 00:26:31.225 00:26:31.225 05:22:06 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:31.225 05:22:06 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:31.225 EAL: No free 2048 kB hugepages reported on node 1 00:26:41.199 Initializing NVMe Controllers 00:26:41.199 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:41.199 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:41.199 Initialization complete. Launching workers. 
00:26:41.199 ======================================================== 00:26:41.199 Latency(us) 00:26:41.199 Device Information : IOPS MiB/s Average min max 00:26:41.199 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 72.37 9.05 13816.49 5014.30 48638.69 00:26:41.199 ======================================================== 00:26:41.199 Total : 72.37 9.05 13816.49 5014.30 48638.69 00:26:41.199 00:26:41.199 05:22:16 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:41.199 05:22:16 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:41.199 05:22:16 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:41.199 EAL: No free 2048 kB hugepages reported on node 1 00:26:51.205 Initializing NVMe Controllers 00:26:51.205 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:51.205 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:51.205 Initialization complete. Launching workers. 
00:26:51.205 ======================================================== 00:26:51.205 Latency(us) 00:26:51.205 Device Information : IOPS MiB/s Average min max 00:26:51.205 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7118.98 3.48 4498.79 288.96 46124.73 00:26:51.205 ======================================================== 00:26:51.205 Total : 7118.98 3.48 4498.79 288.96 46124.73 00:26:51.205 00:26:51.205 05:22:27 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:51.205 05:22:27 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:51.205 EAL: No free 2048 kB hugepages reported on node 1 00:27:01.182 Initializing NVMe Controllers 00:27:01.182 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:01.182 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:01.182 Initialization complete. Launching workers. 
00:27:01.182 ======================================================== 00:27:01.182 Latency(us) 00:27:01.182 Device Information : IOPS MiB/s Average min max 00:27:01.182 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2272.96 284.12 14089.31 620.75 30837.48 00:27:01.182 ======================================================== 00:27:01.182 Total : 2272.96 284.12 14089.31 620.75 30837.48 00:27:01.182 00:27:01.182 05:22:37 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:01.182 05:22:37 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:01.182 05:22:37 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:01.182 EAL: No free 2048 kB hugepages reported on node 1 00:27:11.161 Initializing NVMe Controllers 00:27:11.161 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:11.161 Controller IO queue size 128, less than required. 00:27:11.161 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:11.161 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:11.161 Initialization complete. Launching workers. 
00:27:11.161 ======================================================== 00:27:11.161 Latency(us) 00:27:11.161 Device Information : IOPS MiB/s Average min max 00:27:11.161 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10653.28 5.20 12021.04 1686.27 24719.08 00:27:11.161 ======================================================== 00:27:11.161 Total : 10653.28 5.20 12021.04 1686.27 24719.08 00:27:11.161 00:27:11.161 05:22:47 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:11.161 05:22:47 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:11.161 EAL: No free 2048 kB hugepages reported on node 1 00:27:21.159 Initializing NVMe Controllers 00:27:21.159 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:21.159 Controller IO queue size 128, less than required. 00:27:21.160 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:21.160 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:21.160 Initialization complete. Launching workers. 
00:27:21.160 ======================================================== 00:27:21.160 Latency(us) 00:27:21.160 Device Information : IOPS MiB/s Average min max 00:27:21.160 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1179.45 147.43 109406.17 16298.09 245523.73 00:27:21.160 ======================================================== 00:27:21.160 Total : 1179.45 147.43 109406.17 16298.09 245523.73 00:27:21.160 00:27:21.160 05:22:58 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:21.418 05:22:58 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7100e23e-4505-4651-97b9-7f1dd0f8973d 00:27:22.354 05:22:59 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:27:22.354 05:22:59 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b5289a92-a17f-4792-b9a6-b6e1e0e0f934 00:27:22.612 05:22:59 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:27:22.870 05:23:00 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:27:22.870 05:23:00 -- host/perf.sh@114 -- # nvmftestfini 00:27:22.870 05:23:00 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:22.870 05:23:00 -- nvmf/common.sh@117 -- # sync 00:27:22.870 05:23:00 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:22.870 05:23:00 -- nvmf/common.sh@120 -- # set +e 00:27:22.870 05:23:00 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:22.870 05:23:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:22.870 rmmod nvme_tcp 00:27:23.128 rmmod nvme_fabrics 00:27:23.128 rmmod nvme_keyring 00:27:23.128 05:23:00 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:23.128 05:23:00 -- nvmf/common.sh@124 -- # set -e 00:27:23.128 05:23:00 -- 
nvmf/common.sh@125 -- # return 0 00:27:23.128 05:23:00 -- nvmf/common.sh@478 -- # '[' -n 1968170 ']' 00:27:23.128 05:23:00 -- nvmf/common.sh@479 -- # killprocess 1968170 00:27:23.128 05:23:00 -- common/autotest_common.sh@936 -- # '[' -z 1968170 ']' 00:27:23.128 05:23:00 -- common/autotest_common.sh@940 -- # kill -0 1968170 00:27:23.128 05:23:00 -- common/autotest_common.sh@941 -- # uname 00:27:23.128 05:23:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:23.128 05:23:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1968170 00:27:23.128 05:23:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:23.128 05:23:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:23.128 05:23:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1968170' 00:27:23.128 killing process with pid 1968170 00:27:23.128 05:23:00 -- common/autotest_common.sh@955 -- # kill 1968170 00:27:23.128 05:23:00 -- common/autotest_common.sh@960 -- # wait 1968170 00:27:25.033 05:23:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:25.033 05:23:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:25.033 05:23:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:25.033 05:23:01 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:25.033 05:23:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:25.033 05:23:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:25.033 05:23:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:25.033 05:23:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.015 05:23:03 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:27.015 00:27:27.015 real 1m31.201s 00:27:27.015 user 5m31.630s 00:27:27.015 sys 0m17.044s 00:27:27.016 05:23:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:27.016 05:23:03 -- common/autotest_common.sh@10 -- # set +x 00:27:27.016 
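[editor's note] Teardown mirrors setup in reverse, condensed from the cleanup traced above (root required; the namespace deletion is an assumption about what the remove_spdk_ns helper does, since its body is not shown in this trace):

```shell
# Stop the target, unload the kernel modules loaded for the test, then
# dismantle the namespaced network rig.
kill "$nvmfpid" && wait "$nvmfpid"   # nvmf_tgt; pid 1968170 in this run
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
ip netns delete cvl_0_0_ns_spdk      # assumed equivalent of remove_spdk_ns
ip -4 addr flush cvl_0_1
```

The rmmod output for nvme_tcp/nvme_fabrics/nvme_keyring in the trace corresponds to the two modprobe -r calls here.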
************************************ 00:27:27.016 END TEST nvmf_perf 00:27:27.016 ************************************ 00:27:27.016 05:23:03 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:27:27.016 05:23:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:27.016 05:23:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:27.016 05:23:03 -- common/autotest_common.sh@10 -- # set +x 00:27:27.016 ************************************ 00:27:27.016 START TEST nvmf_fio_host 00:27:27.016 ************************************ 00:27:27.016 05:23:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:27:27.016 * Looking for test storage... 00:27:27.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:27.016 05:23:04 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:27.016 05:23:04 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:27.016 05:23:04 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:27.016 05:23:04 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:27.016 05:23:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.016 05:23:04 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.016 05:23:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.016 05:23:04 -- paths/export.sh@5 -- # export PATH 00:27:27.016 05:23:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.016 05:23:04 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:27.016 05:23:04 -- nvmf/common.sh@7 -- # uname -s 00:27:27.016 05:23:04 -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:27:27.016 05:23:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:27.016 05:23:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:27.016 05:23:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:27.016 05:23:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:27.016 05:23:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:27.016 05:23:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:27.016 05:23:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:27.016 05:23:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:27.016 05:23:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:27.016 05:23:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:27.016 05:23:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:27.016 05:23:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:27.016 05:23:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:27.016 05:23:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:27.016 05:23:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:27.016 05:23:04 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:27.016 05:23:04 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:27.016 05:23:04 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:27.016 05:23:04 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:27.016 05:23:04 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.016 05:23:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.016 05:23:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.016 05:23:04 -- paths/export.sh@5 -- # export PATH 00:27:27.016 05:23:04 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.016 05:23:04 -- nvmf/common.sh@47 -- # : 0 00:27:27.016 05:23:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:27.016 05:23:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:27.016 05:23:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:27.016 05:23:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:27.016 05:23:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:27.016 05:23:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:27.016 05:23:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:27.016 05:23:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:27.016 05:23:04 -- host/fio.sh@12 -- # nvmftestinit 00:27:27.016 05:23:04 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:27.016 05:23:04 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:27.016 05:23:04 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:27.016 05:23:04 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:27.016 05:23:04 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:27.016 05:23:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:27.016 05:23:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:27.016 05:23:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.016 05:23:04 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:27:27.016 05:23:04 -- nvmf/common.sh@403 -- # 
gather_supported_nvmf_pci_devs 00:27:27.016 05:23:04 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:27.016 05:23:04 -- common/autotest_common.sh@10 -- # set +x 00:27:28.918 05:23:06 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:28.918 05:23:06 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:28.918 05:23:06 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:28.918 05:23:06 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:28.918 05:23:06 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:28.918 05:23:06 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:28.918 05:23:06 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:28.918 05:23:06 -- nvmf/common.sh@295 -- # net_devs=() 00:27:28.918 05:23:06 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:28.918 05:23:06 -- nvmf/common.sh@296 -- # e810=() 00:27:28.918 05:23:06 -- nvmf/common.sh@296 -- # local -ga e810 00:27:28.918 05:23:06 -- nvmf/common.sh@297 -- # x722=() 00:27:28.918 05:23:06 -- nvmf/common.sh@297 -- # local -ga x722 00:27:28.918 05:23:06 -- nvmf/common.sh@298 -- # mlx=() 00:27:28.918 05:23:06 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:28.918 05:23:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:28.918 05:23:06 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:28.918 05:23:06 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:28.918 05:23:06 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:28.918 05:23:06 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:28.918 05:23:06 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:28.918 05:23:06 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:28.918 05:23:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:28.918 05:23:06 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:28.918 05:23:06 -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:28.918 05:23:06 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:28.918 05:23:06 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:28.918 05:23:06 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:28.918 05:23:06 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:28.918 05:23:06 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:28.918 05:23:06 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:28.918 05:23:06 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:28.918 05:23:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:28.918 05:23:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:28.918 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:28.918 05:23:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:28.918 05:23:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:28.918 05:23:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:28.918 05:23:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:28.918 05:23:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:28.918 05:23:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:28.919 05:23:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:28.919 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:28.919 05:23:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:28.919 05:23:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:28.919 05:23:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:28.919 05:23:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:28.919 05:23:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:28.919 05:23:06 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:28.919 05:23:06 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:28.919 05:23:06 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:28.919 05:23:06 -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:27:28.919 05:23:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:28.919 05:23:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:28.919 05:23:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:28.919 05:23:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:28.919 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:28.919 05:23:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:28.919 05:23:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:28.919 05:23:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:28.919 05:23:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:28.919 05:23:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:28.919 05:23:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:28.919 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:28.919 05:23:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:28.919 05:23:06 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:27:28.919 05:23:06 -- nvmf/common.sh@403 -- # is_hw=yes 00:27:28.919 05:23:06 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:27:28.919 05:23:06 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:27:28.919 05:23:06 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:27:28.919 05:23:06 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:28.919 05:23:06 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:28.919 05:23:06 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:28.919 05:23:06 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:28.919 05:23:06 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:28.919 05:23:06 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:28.919 05:23:06 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:28.919 05:23:06 -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:28.919 05:23:06 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:28.919 05:23:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:28.919 05:23:06 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:28.919 05:23:06 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:28.919 05:23:06 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:28.919 05:23:06 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:28.919 05:23:06 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:28.919 05:23:06 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:28.919 05:23:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:28.919 05:23:06 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:28.919 05:23:06 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:28.919 05:23:06 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:28.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:28.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:27:28.919 00:27:28.919 --- 10.0.0.2 ping statistics --- 00:27:28.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:28.919 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:27:28.919 05:23:06 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:28.919 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:28.919 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:27:28.919 00:27:28.919 --- 10.0.0.1 ping statistics --- 00:27:28.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:28.919 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:27:28.919 05:23:06 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:28.919 05:23:06 -- nvmf/common.sh@411 -- # return 0 00:27:28.919 05:23:06 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:28.919 05:23:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:28.919 05:23:06 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:28.919 05:23:06 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:28.919 05:23:06 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:28.919 05:23:06 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:28.919 05:23:06 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:28.919 05:23:06 -- host/fio.sh@14 -- # [[ y != y ]] 00:27:28.919 05:23:06 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:27:28.919 05:23:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:28.919 05:23:06 -- common/autotest_common.sh@10 -- # set +x 00:27:28.919 05:23:06 -- host/fio.sh@22 -- # nvmfpid=1980269 00:27:28.919 05:23:06 -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:28.919 05:23:06 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:28.919 05:23:06 -- host/fio.sh@26 -- # waitforlisten 1980269 00:27:28.919 05:23:06 -- common/autotest_common.sh@817 -- # '[' -z 1980269 ']' 00:27:28.919 05:23:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:28.919 05:23:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:28.919 05:23:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:27:28.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:28.919 05:23:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:28.919 05:23:06 -- common/autotest_common.sh@10 -- # set +x 00:27:29.180 [2024-04-24 05:23:06.214271] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:27:29.180 [2024-04-24 05:23:06.214356] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:29.180 EAL: No free 2048 kB hugepages reported on node 1 00:27:29.180 [2024-04-24 05:23:06.252746] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:29.180 [2024-04-24 05:23:06.279351] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:29.180 [2024-04-24 05:23:06.361350] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:29.180 [2024-04-24 05:23:06.361404] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:29.180 [2024-04-24 05:23:06.361432] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:29.180 [2024-04-24 05:23:06.361444] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:29.180 [2024-04-24 05:23:06.361454] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:29.180 [2024-04-24 05:23:06.361515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:29.180 [2024-04-24 05:23:06.361574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:29.180 [2024-04-24 05:23:06.361603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:29.180 [2024-04-24 05:23:06.361605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:29.441 05:23:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:29.441 05:23:06 -- common/autotest_common.sh@850 -- # return 0 00:27:29.441 05:23:06 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:29.441 05:23:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:29.441 05:23:06 -- common/autotest_common.sh@10 -- # set +x 00:27:29.441 [2024-04-24 05:23:06.492394] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:29.441 05:23:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:29.441 05:23:06 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:27:29.441 05:23:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:29.441 05:23:06 -- common/autotest_common.sh@10 -- # set +x 00:27:29.441 05:23:06 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:29.441 05:23:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:29.441 05:23:06 -- common/autotest_common.sh@10 -- # set +x 00:27:29.441 Malloc1 00:27:29.441 05:23:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:29.441 05:23:06 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:29.441 05:23:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:29.441 05:23:06 -- common/autotest_common.sh@10 -- # set +x 00:27:29.441 05:23:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:29.441 05:23:06 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:29.441 05:23:06 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:27:29.441 05:23:06 -- common/autotest_common.sh@10 -- # set +x 00:27:29.441 05:23:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:29.441 05:23:06 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:29.441 05:23:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:29.441 05:23:06 -- common/autotest_common.sh@10 -- # set +x 00:27:29.441 [2024-04-24 05:23:06.574181] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:29.441 05:23:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:29.441 05:23:06 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:29.441 05:23:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:29.441 05:23:06 -- common/autotest_common.sh@10 -- # set +x 00:27:29.441 05:23:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:29.441 05:23:06 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:29.441 05:23:06 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:29.441 05:23:06 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:29.441 05:23:06 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:29.441 05:23:06 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:29.441 05:23:06 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:29.441 05:23:06 -- common/autotest_common.sh@1326 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:29.441 05:23:06 -- common/autotest_common.sh@1327 -- # shift 00:27:29.441 05:23:06 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:29.441 05:23:06 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:29.441 05:23:06 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:29.441 05:23:06 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:29.441 05:23:06 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:29.441 05:23:06 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:29.441 05:23:06 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:29.441 05:23:06 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:29.441 05:23:06 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:29.441 05:23:06 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:29.441 05:23:06 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:29.441 05:23:06 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:29.441 05:23:06 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:29.441 05:23:06 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:29.441 05:23:06 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:29.702 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:29.702 fio-3.35 00:27:29.702 Starting 1 thread 00:27:29.702 EAL: No free 2048 kB hugepages reported on node 1 00:27:32.234 00:27:32.234 test: (groupid=0, jobs=1): err= 0: pid=1980488: Wed Apr 24 05:23:09 
2024 00:27:32.234 read: IOPS=8846, BW=34.6MiB/s (36.2MB/s)(69.3MiB/2006msec) 00:27:32.234 slat (usec): min=2, max=160, avg= 2.78, stdev= 1.91 00:27:32.234 clat (usec): min=3312, max=13618, avg=7991.09, stdev=584.34 00:27:32.234 lat (usec): min=3336, max=13621, avg=7993.87, stdev=584.25 00:27:32.234 clat percentiles (usec): 00:27:32.234 | 1.00th=[ 6718], 5.00th=[ 7111], 10.00th=[ 7308], 20.00th=[ 7504], 00:27:32.234 | 30.00th=[ 7701], 40.00th=[ 7832], 50.00th=[ 8029], 60.00th=[ 8160], 00:27:32.234 | 70.00th=[ 8291], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 8848], 00:27:32.234 | 99.00th=[ 9241], 99.50th=[ 9372], 99.90th=[11863], 99.95th=[13042], 00:27:32.234 | 99.99th=[13566] 00:27:32.234 bw ( KiB/s): min=34427, max=35920, per=99.88%, avg=35344.75, stdev=640.80, samples=4 00:27:32.234 iops : min= 8606, max= 8980, avg=8836.00, stdev=160.56, samples=4 00:27:32.234 write: IOPS=8865, BW=34.6MiB/s (36.3MB/s)(69.5MiB/2006msec); 0 zone resets 00:27:32.234 slat (usec): min=2, max=151, avg= 2.96, stdev= 1.50 00:27:32.234 clat (usec): min=1447, max=12016, avg=6423.32, stdev=516.53 00:27:32.234 lat (usec): min=1457, max=12019, avg=6426.28, stdev=516.50 00:27:32.234 clat percentiles (usec): 00:27:32.234 | 1.00th=[ 5276], 5.00th=[ 5669], 10.00th=[ 5866], 20.00th=[ 6063], 00:27:32.234 | 30.00th=[ 6194], 40.00th=[ 6325], 50.00th=[ 6456], 60.00th=[ 6521], 00:27:32.234 | 70.00th=[ 6652], 80.00th=[ 6783], 90.00th=[ 6980], 95.00th=[ 7177], 00:27:32.234 | 99.00th=[ 7504], 99.50th=[ 7570], 99.90th=[10552], 99.95th=[11469], 00:27:32.234 | 99.99th=[11994] 00:27:32.234 bw ( KiB/s): min=35200, max=35712, per=99.91%, avg=35428.25, stdev=259.16, samples=4 00:27:32.234 iops : min= 8800, max= 8928, avg=8857.00, stdev=64.86, samples=4 00:27:32.234 lat (msec) : 2=0.03%, 4=0.11%, 10=99.69%, 20=0.18% 00:27:32.234 cpu : usr=59.70%, sys=35.01%, ctx=62, majf=0, minf=39 00:27:32.234 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:27:32.234 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:32.234 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:32.234 issued rwts: total=17747,17784,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:32.234 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:32.234 00:27:32.234 Run status group 0 (all jobs): 00:27:32.234 READ: bw=34.6MiB/s (36.2MB/s), 34.6MiB/s-34.6MiB/s (36.2MB/s-36.2MB/s), io=69.3MiB (72.7MB), run=2006-2006msec 00:27:32.234 WRITE: bw=34.6MiB/s (36.3MB/s), 34.6MiB/s-34.6MiB/s (36.3MB/s-36.3MB/s), io=69.5MiB (72.8MB), run=2006-2006msec 00:27:32.234 05:23:09 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:32.234 05:23:09 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:32.234 05:23:09 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:32.234 05:23:09 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:32.234 05:23:09 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:32.234 05:23:09 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:32.234 05:23:09 -- common/autotest_common.sh@1327 -- # shift 00:27:32.234 05:23:09 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:32.234 05:23:09 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:32.234 05:23:09 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:32.234 05:23:09 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:32.234 05:23:09 -- 
common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:32.234 05:23:09 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:32.234 05:23:09 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:32.234 05:23:09 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:32.234 05:23:09 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:32.234 05:23:09 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:32.234 05:23:09 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:32.234 05:23:09 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:32.234 05:23:09 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:32.234 05:23:09 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:32.234 05:23:09 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:32.234 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:27:32.234 fio-3.35 00:27:32.234 Starting 1 thread 00:27:32.234 EAL: No free 2048 kB hugepages reported on node 1 00:27:33.168 [2024-04-24 05:23:10.348685] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871b60 is same with the state(5) to be set 00:27:33.168 [2024-04-24 05:23:10.348817] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871b60 is same with the state(5) to be set 00:27:34.541 00:27:34.541 test: (groupid=0, jobs=1): err= 0: pid=1980820: Wed Apr 24 05:23:11 2024 00:27:34.541 read: IOPS=8123, BW=127MiB/s (133MB/s)(255MiB/2005msec) 00:27:34.541 slat (nsec): min=2815, max=91748, avg=3584.41, stdev=1675.88 00:27:34.541 clat (usec): min=3050, max=56865, avg=9447.05, stdev=3362.42 
00:27:34.541 lat (usec): min=3054, max=56869, avg=9450.64, stdev=3362.41 00:27:34.541 clat percentiles (usec): 00:27:34.541 | 1.00th=[ 4883], 5.00th=[ 5997], 10.00th=[ 6718], 20.00th=[ 7635], 00:27:34.541 | 30.00th=[ 8291], 40.00th=[ 8848], 50.00th=[ 9372], 60.00th=[ 9765], 00:27:34.541 | 70.00th=[10290], 80.00th=[10814], 90.00th=[11731], 95.00th=[12649], 00:27:34.541 | 99.00th=[14615], 99.50th=[15533], 99.90th=[55313], 99.95th=[55837], 00:27:34.541 | 99.99th=[56361] 00:27:34.541 bw ( KiB/s): min=57248, max=77632, per=50.08%, avg=65096.00, stdev=8919.12, samples=4 00:27:34.541 iops : min= 3578, max= 4852, avg=4068.50, stdev=557.45, samples=4 00:27:34.541 write: IOPS=4768, BW=74.5MiB/s (78.1MB/s)(134MiB/1796msec); 0 zone resets 00:27:34.541 slat (usec): min=30, max=153, avg=33.36, stdev= 4.94 00:27:34.541 clat (usec): min=4800, max=58932, avg=11464.02, stdev=4123.34 00:27:34.541 lat (usec): min=4839, max=58963, avg=11497.38, stdev=4123.20 00:27:34.541 clat percentiles (usec): 00:27:34.541 | 1.00th=[ 7635], 5.00th=[ 8356], 10.00th=[ 8979], 20.00th=[ 9503], 00:27:34.541 | 30.00th=[10028], 40.00th=[10421], 50.00th=[10945], 60.00th=[11469], 00:27:34.541 | 70.00th=[11994], 80.00th=[12780], 90.00th=[14091], 95.00th=[15008], 00:27:34.541 | 99.00th=[16909], 99.50th=[54789], 99.90th=[57934], 99.95th=[58459], 00:27:34.541 | 99.99th=[58983] 00:27:34.541 bw ( KiB/s): min=60192, max=80032, per=89.02%, avg=67928.00, stdev=8659.27, samples=4 00:27:34.541 iops : min= 3762, max= 5002, avg=4245.50, stdev=541.20, samples=4 00:27:34.541 lat (msec) : 4=0.13%, 10=52.73%, 20=46.63%, 50=0.14%, 100=0.37% 00:27:34.541 cpu : usr=75.30%, sys=21.46%, ctx=39, majf=0, minf=63 00:27:34.541 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:27:34.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:34.541 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:34.541 issued rwts: total=16288,8565,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:27:34.541 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:34.541 00:27:34.541 Run status group 0 (all jobs): 00:27:34.541 READ: bw=127MiB/s (133MB/s), 127MiB/s-127MiB/s (133MB/s-133MB/s), io=255MiB (267MB), run=2005-2005msec 00:27:34.541 WRITE: bw=74.5MiB/s (78.1MB/s), 74.5MiB/s-74.5MiB/s (78.1MB/s-78.1MB/s), io=134MiB (140MB), run=1796-1796msec 00:27:34.541 05:23:11 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:34.541 05:23:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:34.541 05:23:11 -- common/autotest_common.sh@10 -- # set +x 00:27:34.541 05:23:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:34.541 05:23:11 -- host/fio.sh@47 -- # '[' 1 -eq 1 ']' 00:27:34.541 05:23:11 -- host/fio.sh@49 -- # bdfs=($(get_nvme_bdfs)) 00:27:34.541 05:23:11 -- host/fio.sh@49 -- # get_nvme_bdfs 00:27:34.541 05:23:11 -- common/autotest_common.sh@1499 -- # bdfs=() 00:27:34.541 05:23:11 -- common/autotest_common.sh@1499 -- # local bdfs 00:27:34.541 05:23:11 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:34.541 05:23:11 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:34.541 05:23:11 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:27:34.541 05:23:11 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:27:34.541 05:23:11 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:88:00.0 00:27:34.541 05:23:11 -- host/fio.sh@50 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:27:34.541 05:23:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:34.541 05:23:11 -- common/autotest_common.sh@10 -- # set +x 00:27:37.830 Nvme0n1 00:27:37.830 05:23:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:37.830 05:23:14 -- host/fio.sh@51 -- # rpc_cmd bdev_lvol_create_lvstore -c 
1073741824 Nvme0n1 lvs_0 00:27:37.830 05:23:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:37.830 05:23:14 -- common/autotest_common.sh@10 -- # set +x 00:27:40.364 05:23:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:40.364 05:23:17 -- host/fio.sh@51 -- # ls_guid=c3dda873-068c-4c13-be73-bf98afda9069 00:27:40.364 05:23:17 -- host/fio.sh@52 -- # get_lvs_free_mb c3dda873-068c-4c13-be73-bf98afda9069 00:27:40.364 05:23:17 -- common/autotest_common.sh@1350 -- # local lvs_uuid=c3dda873-068c-4c13-be73-bf98afda9069 00:27:40.364 05:23:17 -- common/autotest_common.sh@1351 -- # local lvs_info 00:27:40.364 05:23:17 -- common/autotest_common.sh@1352 -- # local fc 00:27:40.364 05:23:17 -- common/autotest_common.sh@1353 -- # local cs 00:27:40.364 05:23:17 -- common/autotest_common.sh@1354 -- # rpc_cmd bdev_lvol_get_lvstores 00:27:40.364 05:23:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:40.364 05:23:17 -- common/autotest_common.sh@10 -- # set +x 00:27:40.364 05:23:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:40.364 05:23:17 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:27:40.364 { 00:27:40.364 "uuid": "c3dda873-068c-4c13-be73-bf98afda9069", 00:27:40.364 "name": "lvs_0", 00:27:40.364 "base_bdev": "Nvme0n1", 00:27:40.364 "total_data_clusters": 930, 00:27:40.364 "free_clusters": 930, 00:27:40.364 "block_size": 512, 00:27:40.364 "cluster_size": 1073741824 00:27:40.364 } 00:27:40.364 ]' 00:27:40.364 05:23:17 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="c3dda873-068c-4c13-be73-bf98afda9069") .free_clusters' 00:27:40.364 05:23:17 -- common/autotest_common.sh@1355 -- # fc=930 00:27:40.365 05:23:17 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="c3dda873-068c-4c13-be73-bf98afda9069") .cluster_size' 00:27:40.365 05:23:17 -- common/autotest_common.sh@1356 -- # cs=1073741824 00:27:40.365 05:23:17 -- common/autotest_common.sh@1359 -- # free_mb=952320 00:27:40.365 05:23:17 -- 
common/autotest_common.sh@1360 -- # echo 952320 00:27:40.365 952320 00:27:40.365 05:23:17 -- host/fio.sh@53 -- # rpc_cmd bdev_lvol_create -l lvs_0 lbd_0 952320 00:27:40.365 05:23:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:40.365 05:23:17 -- common/autotest_common.sh@10 -- # set +x 00:27:40.365 680c01ef-8b6b-45e2-ba67-0d5c9df7c001 00:27:40.365 05:23:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:40.365 05:23:17 -- host/fio.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:27:40.365 05:23:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:40.365 05:23:17 -- common/autotest_common.sh@10 -- # set +x 00:27:40.365 05:23:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:40.365 05:23:17 -- host/fio.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:27:40.365 05:23:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:40.365 05:23:17 -- common/autotest_common.sh@10 -- # set +x 00:27:40.365 05:23:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:40.365 05:23:17 -- host/fio.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:40.365 05:23:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:40.365 05:23:17 -- common/autotest_common.sh@10 -- # set +x 00:27:40.365 05:23:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:40.365 05:23:17 -- host/fio.sh@57 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:40.365 05:23:17 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:40.365 05:23:17 -- 
common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:40.365 05:23:17 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:40.365 05:23:17 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:40.365 05:23:17 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:40.365 05:23:17 -- common/autotest_common.sh@1327 -- # shift 00:27:40.365 05:23:17 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:40.365 05:23:17 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:40.365 05:23:17 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:40.365 05:23:17 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:40.365 05:23:17 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:40.365 05:23:17 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:40.365 05:23:17 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:40.365 05:23:17 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:40.365 05:23:17 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:40.365 05:23:17 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:40.365 05:23:17 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:40.365 05:23:17 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:40.365 05:23:17 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:40.365 05:23:17 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:40.365 05:23:17 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:40.625 
test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:40.625 fio-3.35 00:27:40.625 Starting 1 thread 00:27:40.625 EAL: No free 2048 kB hugepages reported on node 1 00:27:43.160 00:27:43.160 test: (groupid=0, jobs=1): err= 0: pid=1981850: Wed Apr 24 05:23:20 2024 00:27:43.160 read: IOPS=5954, BW=23.3MiB/s (24.4MB/s)(47.6MiB/2048msec) 00:27:43.160 slat (nsec): min=1904, max=141179, avg=2484.92, stdev=1881.48 00:27:43.160 clat (usec): min=975, max=171370, avg=11862.22, stdev=12041.47 00:27:43.160 lat (usec): min=977, max=171403, avg=11864.71, stdev=12041.73 00:27:43.160 clat percentiles (msec): 00:27:43.160 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:27:43.160 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:27:43.160 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 12], 95.00th=[ 13], 00:27:43.160 | 99.00th=[ 52], 99.50th=[ 159], 99.90th=[ 171], 99.95th=[ 171], 00:27:43.160 | 99.99th=[ 171] 00:27:43.160 bw ( KiB/s): min=17133, max=26920, per=100.00%, avg=24233.25, stdev=4739.67, samples=4 00:27:43.160 iops : min= 4283, max= 6730, avg=6058.25, stdev=1185.04, samples=4 00:27:43.160 write: IOPS=5935, BW=23.2MiB/s (24.3MB/s)(47.5MiB/2048msec); 0 zone resets 00:27:43.160 slat (usec): min=2, max=142, avg= 2.53, stdev= 1.53 00:27:43.160 clat (usec): min=356, max=169302, avg=9494.84, stdev=11248.78 00:27:43.160 lat (usec): min=359, max=169309, avg=9497.38, stdev=11249.09 00:27:43.160 clat percentiles (msec): 00:27:43.160 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 8], 00:27:43.160 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:27:43.160 | 70.00th=[ 9], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 10], 00:27:43.160 | 99.00th=[ 11], 99.50th=[ 57], 99.90th=[ 169], 99.95th=[ 169], 00:27:43.160 | 99.99th=[ 169] 00:27:43.160 bw ( KiB/s): min=18139, max=26280, per=100.00%, avg=24192.75, stdev=4036.47, samples=4 00:27:43.160 iops : min= 4534, max= 6570, avg=6048.00, stdev=1009.49, 
samples=4 00:27:43.160 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:27:43.160 lat (msec) : 2=0.03%, 4=0.14%, 10=57.49%, 20=41.28%, 50=0.13% 00:27:43.160 lat (msec) : 100=0.39%, 250=0.53% 00:27:43.160 cpu : usr=58.28%, sys=38.01%, ctx=124, majf=0, minf=39 00:27:43.160 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:27:43.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:43.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:43.160 issued rwts: total=12194,12155,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:43.160 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:43.160 00:27:43.160 Run status group 0 (all jobs): 00:27:43.160 READ: bw=23.3MiB/s (24.4MB/s), 23.3MiB/s-23.3MiB/s (24.4MB/s-24.4MB/s), io=47.6MiB (49.9MB), run=2048-2048msec 00:27:43.160 WRITE: bw=23.2MiB/s (24.3MB/s), 23.2MiB/s-23.2MiB/s (24.3MB/s-24.3MB/s), io=47.5MiB (49.8MB), run=2048-2048msec 00:27:43.160 05:23:20 -- host/fio.sh@59 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:43.160 05:23:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:43.160 05:23:20 -- common/autotest_common.sh@10 -- # set +x 00:27:43.160 05:23:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:43.160 05:23:20 -- host/fio.sh@62 -- # rpc_cmd bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:27:43.160 05:23:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:43.160 05:23:20 -- common/autotest_common.sh@10 -- # set +x 00:27:44.096 05:23:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:44.096 05:23:21 -- host/fio.sh@62 -- # ls_nested_guid=b884ad96-5c69-4576-b6a2-b0a627e61efa 00:27:44.096 05:23:21 -- host/fio.sh@63 -- # get_lvs_free_mb b884ad96-5c69-4576-b6a2-b0a627e61efa 00:27:44.096 05:23:21 -- common/autotest_common.sh@1350 -- # local lvs_uuid=b884ad96-5c69-4576-b6a2-b0a627e61efa 00:27:44.096 05:23:21 -- common/autotest_common.sh@1351 -- # local 
lvs_info 00:27:44.096 05:23:21 -- common/autotest_common.sh@1352 -- # local fc 00:27:44.096 05:23:21 -- common/autotest_common.sh@1353 -- # local cs 00:27:44.096 05:23:21 -- common/autotest_common.sh@1354 -- # rpc_cmd bdev_lvol_get_lvstores 00:27:44.096 05:23:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:44.096 05:23:21 -- common/autotest_common.sh@10 -- # set +x 00:27:44.096 05:23:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:44.096 05:23:21 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:27:44.096 { 00:27:44.096 "uuid": "c3dda873-068c-4c13-be73-bf98afda9069", 00:27:44.096 "name": "lvs_0", 00:27:44.096 "base_bdev": "Nvme0n1", 00:27:44.096 "total_data_clusters": 930, 00:27:44.096 "free_clusters": 0, 00:27:44.096 "block_size": 512, 00:27:44.096 "cluster_size": 1073741824 00:27:44.096 }, 00:27:44.096 { 00:27:44.096 "uuid": "b884ad96-5c69-4576-b6a2-b0a627e61efa", 00:27:44.096 "name": "lvs_n_0", 00:27:44.096 "base_bdev": "680c01ef-8b6b-45e2-ba67-0d5c9df7c001", 00:27:44.096 "total_data_clusters": 237847, 00:27:44.096 "free_clusters": 237847, 00:27:44.096 "block_size": 512, 00:27:44.096 "cluster_size": 4194304 00:27:44.096 } 00:27:44.096 ]' 00:27:44.096 05:23:21 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="b884ad96-5c69-4576-b6a2-b0a627e61efa") .free_clusters' 00:27:44.096 05:23:21 -- common/autotest_common.sh@1355 -- # fc=237847 00:27:44.096 05:23:21 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="b884ad96-5c69-4576-b6a2-b0a627e61efa") .cluster_size' 00:27:44.096 05:23:21 -- common/autotest_common.sh@1356 -- # cs=4194304 00:27:44.096 05:23:21 -- common/autotest_common.sh@1359 -- # free_mb=951388 00:27:44.096 05:23:21 -- common/autotest_common.sh@1360 -- # echo 951388 00:27:44.096 951388 00:27:44.096 05:23:21 -- host/fio.sh@64 -- # rpc_cmd bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:27:44.096 05:23:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:44.096 05:23:21 -- 
common/autotest_common.sh@10 -- # set +x 00:27:44.355 377bafc0-8399-439a-851b-e06aee8092e6 00:27:44.355 05:23:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:44.356 05:23:21 -- host/fio.sh@65 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:27:44.356 05:23:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:44.356 05:23:21 -- common/autotest_common.sh@10 -- # set +x 00:27:44.356 05:23:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:44.356 05:23:21 -- host/fio.sh@66 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:27:44.356 05:23:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:44.356 05:23:21 -- common/autotest_common.sh@10 -- # set +x 00:27:44.356 05:23:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:44.356 05:23:21 -- host/fio.sh@67 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:27:44.356 05:23:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:44.356 05:23:21 -- common/autotest_common.sh@10 -- # set +x 00:27:44.356 05:23:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:44.356 05:23:21 -- host/fio.sh@68 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:44.356 05:23:21 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:44.356 05:23:21 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:44.356 05:23:21 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:44.356 05:23:21 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:44.356 05:23:21 
-- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:44.356 05:23:21 -- common/autotest_common.sh@1327 -- # shift 00:27:44.356 05:23:21 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:44.356 05:23:21 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:44.356 05:23:21 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:44.356 05:23:21 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:44.356 05:23:21 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:44.356 05:23:21 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:44.356 05:23:21 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:44.356 05:23:21 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:44.356 05:23:21 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:44.356 05:23:21 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:44.356 05:23:21 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:44.617 05:23:21 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:44.617 05:23:21 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:44.617 05:23:21 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:44.617 05:23:21 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:44.617 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:44.617 fio-3.35 00:27:44.617 Starting 1 thread 00:27:44.617 EAL: No free 2048 kB hugepages reported on node 1 00:27:47.152 00:27:47.152 test: (groupid=0, 
jobs=1): err= 0: pid=1982428: Wed Apr 24 05:23:24 2024 00:27:47.152 read: IOPS=5850, BW=22.9MiB/s (24.0MB/s)(45.9MiB/2008msec) 00:27:47.152 slat (nsec): min=1829, max=180170, avg=2408.26, stdev=2418.16 00:27:47.152 clat (usec): min=4403, max=19474, avg=12089.21, stdev=1046.07 00:27:47.152 lat (usec): min=4416, max=19476, avg=12091.62, stdev=1045.93 00:27:47.152 clat percentiles (usec): 00:27:47.152 | 1.00th=[ 9634], 5.00th=[10421], 10.00th=[10814], 20.00th=[11338], 00:27:47.152 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12125], 60.00th=[12387], 00:27:47.152 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13304], 95.00th=[13698], 00:27:47.152 | 99.00th=[14353], 99.50th=[14615], 99.90th=[17957], 99.95th=[18220], 00:27:47.152 | 99.99th=[19530] 00:27:47.152 bw ( KiB/s): min=22016, max=23984, per=99.84%, avg=23364.00, stdev=906.83, samples=4 00:27:47.152 iops : min= 5504, max= 5996, avg=5841.00, stdev=226.71, samples=4 00:27:47.152 write: IOPS=5837, BW=22.8MiB/s (23.9MB/s)(45.8MiB/2008msec); 0 zone resets 00:27:47.152 slat (nsec): min=1964, max=153598, avg=2513.55, stdev=1699.78 00:27:47.152 clat (usec): min=2193, max=18320, avg=9637.26, stdev=926.86 00:27:47.152 lat (usec): min=2200, max=18322, avg=9639.77, stdev=926.81 00:27:47.152 clat percentiles (usec): 00:27:47.152 | 1.00th=[ 7504], 5.00th=[ 8225], 10.00th=[ 8586], 20.00th=[ 8979], 00:27:47.152 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9896], 00:27:47.152 | 70.00th=[10028], 80.00th=[10290], 90.00th=[10683], 95.00th=[10945], 00:27:47.152 | 99.00th=[11600], 99.50th=[12125], 99.90th=[16319], 99.95th=[17957], 00:27:47.152 | 99.99th=[18220] 00:27:47.152 bw ( KiB/s): min=23056, max=23432, per=99.87%, avg=23318.00, stdev=177.61, samples=4 00:27:47.152 iops : min= 5764, max= 5858, avg=5829.50, stdev=44.40, samples=4 00:27:47.152 lat (msec) : 4=0.04%, 10=34.76%, 20=65.20% 00:27:47.152 cpu : usr=59.39%, sys=36.77%, ctx=105, majf=0, minf=39 00:27:47.152 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 
32=0.1%, >=64=99.7% 00:27:47.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:47.152 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:47.152 issued rwts: total=11747,11721,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:47.152 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:47.152 00:27:47.152 Run status group 0 (all jobs): 00:27:47.152 READ: bw=22.9MiB/s (24.0MB/s), 22.9MiB/s-22.9MiB/s (24.0MB/s-24.0MB/s), io=45.9MiB (48.1MB), run=2008-2008msec 00:27:47.152 WRITE: bw=22.8MiB/s (23.9MB/s), 22.8MiB/s-22.8MiB/s (23.9MB/s-23.9MB/s), io=45.8MiB (48.0MB), run=2008-2008msec 00:27:47.152 05:23:24 -- host/fio.sh@70 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:47.152 05:23:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:47.152 05:23:24 -- common/autotest_common.sh@10 -- # set +x 00:27:47.152 05:23:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:47.152 05:23:24 -- host/fio.sh@72 -- # sync 00:27:47.152 05:23:24 -- host/fio.sh@74 -- # rpc_cmd bdev_lvol_delete lvs_n_0/lbd_nest_0 00:27:47.152 05:23:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:47.152 05:23:24 -- common/autotest_common.sh@10 -- # set +x 00:27:51.344 05:23:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:51.344 05:23:27 -- host/fio.sh@75 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_n_0 00:27:51.344 05:23:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:51.344 05:23:27 -- common/autotest_common.sh@10 -- # set +x 00:27:51.344 05:23:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:51.344 05:23:27 -- host/fio.sh@76 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:27:51.344 05:23:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:51.344 05:23:27 -- common/autotest_common.sh@10 -- # set +x 00:27:53.877 05:23:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.877 05:23:30 -- host/fio.sh@77 -- # rpc_cmd bdev_lvol_delete_lvstore -l 
lvs_0 00:27:53.877 05:23:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.877 05:23:30 -- common/autotest_common.sh@10 -- # set +x 00:27:53.877 05:23:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.877 05:23:30 -- host/fio.sh@78 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:27:53.877 05:23:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.877 05:23:30 -- common/autotest_common.sh@10 -- # set +x 00:27:55.290 05:23:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.290 05:23:32 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:27:55.290 05:23:32 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:27:55.290 05:23:32 -- host/fio.sh@84 -- # nvmftestfini 00:27:55.290 05:23:32 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:55.290 05:23:32 -- nvmf/common.sh@117 -- # sync 00:27:55.290 05:23:32 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:55.290 05:23:32 -- nvmf/common.sh@120 -- # set +e 00:27:55.290 05:23:32 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:55.290 05:23:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:55.290 rmmod nvme_tcp 00:27:55.290 rmmod nvme_fabrics 00:27:55.290 rmmod nvme_keyring 00:27:55.290 05:23:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:55.290 05:23:32 -- nvmf/common.sh@124 -- # set -e 00:27:55.290 05:23:32 -- nvmf/common.sh@125 -- # return 0 00:27:55.290 05:23:32 -- nvmf/common.sh@478 -- # '[' -n 1980269 ']' 00:27:55.290 05:23:32 -- nvmf/common.sh@479 -- # killprocess 1980269 00:27:55.290 05:23:32 -- common/autotest_common.sh@936 -- # '[' -z 1980269 ']' 00:27:55.290 05:23:32 -- common/autotest_common.sh@940 -- # kill -0 1980269 00:27:55.290 05:23:32 -- common/autotest_common.sh@941 -- # uname 00:27:55.290 05:23:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:55.290 05:23:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1980269 00:27:55.290 05:23:32 -- common/autotest_common.sh@942 -- # 
process_name=reactor_0 00:27:55.290 05:23:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:55.290 05:23:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1980269' 00:27:55.290 killing process with pid 1980269 00:27:55.290 05:23:32 -- common/autotest_common.sh@955 -- # kill 1980269 00:27:55.290 05:23:32 -- common/autotest_common.sh@960 -- # wait 1980269 00:27:55.290 05:23:32 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:55.290 05:23:32 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:55.290 05:23:32 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:55.290 05:23:32 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:55.290 05:23:32 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:55.290 05:23:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:55.290 05:23:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:55.290 05:23:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:57.829 05:23:34 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:57.829 00:27:57.829 real 0m30.556s 00:27:57.829 user 1m50.901s 00:27:57.829 sys 0m5.889s 00:27:57.829 05:23:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:57.829 05:23:34 -- common/autotest_common.sh@10 -- # set +x 00:27:57.829 ************************************ 00:27:57.829 END TEST nvmf_fio_host 00:27:57.829 ************************************ 00:27:57.829 05:23:34 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:57.829 05:23:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:57.829 05:23:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:57.829 05:23:34 -- common/autotest_common.sh@10 -- # set +x 00:27:57.829 ************************************ 00:27:57.829 START TEST nvmf_failover 00:27:57.829 ************************************ 00:27:57.829 
05:23:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:57.829 * Looking for test storage... 00:27:57.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:57.829 05:23:34 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:57.829 05:23:34 -- nvmf/common.sh@7 -- # uname -s 00:27:57.829 05:23:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:57.829 05:23:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:57.830 05:23:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:57.830 05:23:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:57.830 05:23:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:57.830 05:23:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:57.830 05:23:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:57.830 05:23:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:57.830 05:23:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:57.830 05:23:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:57.830 05:23:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:57.830 05:23:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:57.830 05:23:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:57.830 05:23:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:57.830 05:23:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:57.830 05:23:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:57.830 05:23:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:57.830 05:23:34 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:57.830 05:23:34 -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:57.830 05:23:34 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:57.830 05:23:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.830 05:23:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.830 05:23:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.830 05:23:34 -- paths/export.sh@5 -- # export PATH 00:27:57.830 05:23:34 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.830 05:23:34 -- nvmf/common.sh@47 -- # : 0 00:27:57.830 05:23:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:57.830 05:23:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:57.830 05:23:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:57.830 05:23:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:57.830 05:23:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:57.830 05:23:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:57.830 05:23:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:57.830 05:23:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:57.830 05:23:34 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:57.830 05:23:34 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:57.830 05:23:34 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:57.830 05:23:34 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:57.830 05:23:34 -- host/failover.sh@18 -- # nvmftestinit 00:27:57.830 05:23:34 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:57.830 05:23:34 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:57.830 05:23:34 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:57.830 05:23:34 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:57.830 05:23:34 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:57.830 05:23:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:27:57.830 05:23:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:57.830 05:23:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:57.830 05:23:34 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:27:57.830 05:23:34 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:27:57.830 05:23:34 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:57.830 05:23:34 -- common/autotest_common.sh@10 -- # set +x 00:27:59.733 05:23:36 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:59.733 05:23:36 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:59.733 05:23:36 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:59.733 05:23:36 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:59.733 05:23:36 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:59.733 05:23:36 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:59.733 05:23:36 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:59.733 05:23:36 -- nvmf/common.sh@295 -- # net_devs=() 00:27:59.733 05:23:36 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:59.733 05:23:36 -- nvmf/common.sh@296 -- # e810=() 00:27:59.733 05:23:36 -- nvmf/common.sh@296 -- # local -ga e810 00:27:59.733 05:23:36 -- nvmf/common.sh@297 -- # x722=() 00:27:59.733 05:23:36 -- nvmf/common.sh@297 -- # local -ga x722 00:27:59.733 05:23:36 -- nvmf/common.sh@298 -- # mlx=() 00:27:59.733 05:23:36 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:59.733 05:23:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:59.733 05:23:36 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:59.733 05:23:36 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:59.733 05:23:36 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:59.733 05:23:36 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:59.733 05:23:36 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:27:59.733 05:23:36 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:59.733 05:23:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:59.733 05:23:36 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:59.733 05:23:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:59.733 05:23:36 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:59.733 05:23:36 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:59.733 05:23:36 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:59.733 05:23:36 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:59.733 05:23:36 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:59.733 05:23:36 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:59.733 05:23:36 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:59.733 05:23:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:59.733 05:23:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:59.733 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:59.733 05:23:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:59.733 05:23:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:59.733 05:23:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:59.733 05:23:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:59.733 05:23:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:59.733 05:23:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:59.733 05:23:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:59.733 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:59.733 05:23:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:59.733 05:23:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:59.733 05:23:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:59.733 05:23:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:59.733 05:23:36 
-- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:59.733 05:23:36 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:59.733 05:23:36 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:59.733 05:23:36 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:59.733 05:23:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:59.733 05:23:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:59.733 05:23:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:59.733 05:23:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:59.733 05:23:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:59.733 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:59.733 05:23:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:59.733 05:23:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:59.733 05:23:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:59.733 05:23:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:59.733 05:23:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:59.733 05:23:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:59.733 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:59.733 05:23:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:59.733 05:23:36 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:27:59.733 05:23:36 -- nvmf/common.sh@403 -- # is_hw=yes 00:27:59.733 05:23:36 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:27:59.733 05:23:36 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:27:59.733 05:23:36 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:27:59.733 05:23:36 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:59.733 05:23:36 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:59.733 05:23:36 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:59.733 05:23:36 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 
00:27:59.733 05:23:36 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:59.733 05:23:36 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:59.733 05:23:36 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:59.733 05:23:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:59.733 05:23:36 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:59.733 05:23:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:59.733 05:23:36 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:59.734 05:23:36 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:59.734 05:23:36 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:59.734 05:23:36 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:59.734 05:23:36 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:59.734 05:23:36 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:59.734 05:23:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:59.734 05:23:36 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:59.734 05:23:36 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:59.734 05:23:36 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:59.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:59.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:27:59.734 00:27:59.734 --- 10.0.0.2 ping statistics --- 00:27:59.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:59.734 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:27:59.734 05:23:36 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:59.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:59.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:27:59.734 00:27:59.734 --- 10.0.0.1 ping statistics --- 00:27:59.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:59.734 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:27:59.734 05:23:36 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:59.734 05:23:36 -- nvmf/common.sh@411 -- # return 0 00:27:59.734 05:23:36 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:59.734 05:23:36 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:59.734 05:23:36 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:59.734 05:23:36 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:59.734 05:23:36 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:59.734 05:23:36 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:59.734 05:23:36 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:59.734 05:23:36 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:27:59.734 05:23:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:59.734 05:23:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:59.734 05:23:36 -- common/autotest_common.sh@10 -- # set +x 00:27:59.734 05:23:36 -- nvmf/common.sh@470 -- # nvmfpid=1985540 00:27:59.734 05:23:36 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:59.734 05:23:36 -- nvmf/common.sh@471 -- # waitforlisten 1985540 00:27:59.734 05:23:36 -- common/autotest_common.sh@817 -- # '[' -z 1985540 ']' 00:27:59.734 05:23:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:59.734 05:23:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:59.734 05:23:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:59.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:59.734 05:23:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:59.734 05:23:36 -- common/autotest_common.sh@10 -- # set +x 00:27:59.734 [2024-04-24 05:23:36.883253] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:27:59.734 [2024-04-24 05:23:36.883337] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:59.734 EAL: No free 2048 kB hugepages reported on node 1 00:27:59.734 [2024-04-24 05:23:36.921882] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:59.734 [2024-04-24 05:23:36.954455] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:59.992 [2024-04-24 05:23:37.043816] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:59.992 [2024-04-24 05:23:37.043887] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:59.992 [2024-04-24 05:23:37.043904] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:59.992 [2024-04-24 05:23:37.043919] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:59.992 [2024-04-24 05:23:37.043931] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:59.992 [2024-04-24 05:23:37.044046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:59.992 [2024-04-24 05:23:37.044077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:59.992 [2024-04-24 05:23:37.044080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:59.992 05:23:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:59.992 05:23:37 -- common/autotest_common.sh@850 -- # return 0 00:27:59.992 05:23:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:59.992 05:23:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:59.992 05:23:37 -- common/autotest_common.sh@10 -- # set +x 00:27:59.992 05:23:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:59.992 05:23:37 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:00.249 [2024-04-24 05:23:37.445284] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:00.249 05:23:37 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:00.507 Malloc0 00:28:00.507 05:23:37 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:00.765 05:23:38 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:01.330 05:23:38 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:01.330 [2024-04-24 05:23:38.574067] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:01.330 05:23:38 -- host/failover.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:01.588 [2024-04-24 05:23:38.818641] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:01.588 05:23:38 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:01.846 [2024-04-24 05:23:39.063402] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:28:01.846 05:23:39 -- host/failover.sh@31 -- # bdevperf_pid=1985833 00:28:01.846 05:23:39 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:28:01.846 05:23:39 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:01.846 05:23:39 -- host/failover.sh@34 -- # waitforlisten 1985833 /var/tmp/bdevperf.sock 00:28:01.846 05:23:39 -- common/autotest_common.sh@817 -- # '[' -z 1985833 ']' 00:28:01.846 05:23:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:01.846 05:23:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:01.846 05:23:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:01.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:28:01.846 05:23:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:01.846 05:23:39 -- common/autotest_common.sh@10 -- # set +x 00:28:02.412 05:23:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:02.412 05:23:39 -- common/autotest_common.sh@850 -- # return 0 00:28:02.412 05:23:39 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:02.669 NVMe0n1 00:28:02.669 05:23:39 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:02.926 00:28:03.185 05:23:40 -- host/failover.sh@39 -- # run_test_pid=1985970 00:28:03.185 05:23:40 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:03.185 05:23:40 -- host/failover.sh@41 -- # sleep 1 00:28:04.124 05:23:41 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:04.382 [2024-04-24 05:23:41.472391] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd94e40 is same with the state(5) to be set 00:28:04.382 [2024-04-24 05:23:41.472500] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd94e40 is same with the state(5) to be set 00:28:04.382 [2024-04-24 05:23:41.472516] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd94e40 is same with the state(5) to be set 00:28:04.382 [2024-04-24 05:23:41.472529] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd94e40 is same with the state(5) to be set 00:28:04.382 [2024-04-24 
05:23:41.472549] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd94e40 is same with the state(5) to be set 00:28:04.382 [2024-04-24 05:23:41.472561] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd94e40 is same with the state(5) to be set 00:28:04.382 [2024-04-24 05:23:41.472573] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd94e40 is same with the state(5) to be set 00:28:04.382 05:23:41 -- host/failover.sh@45 -- # sleep 3 00:28:07.663 05:23:44 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:07.663 00:28:07.663 05:23:44 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:07.922 05:23:45 -- host/failover.sh@50 -- # sleep 3 00:28:11.213 05:23:48 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:11.213 [2024-04-24 05:23:48.282524] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:11.213 05:23:48 -- host/failover.sh@55 -- # sleep 1 00:28:12.150 05:23:49 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:12.409 [2024-04-24 05:23:49.572297] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd96830 is same with the state(5) to be set 00:28:12.409 [2024-04-24 05:23:49.572365] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd96830 is same with the state(5) to be set 00:28:12.409 [2024-04-24 05:23:49.572381] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd96830 is same with the state(5) to be set 00:28:12.409 [2024-04-24 05:23:49.572394] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd96830 is same with the state(5) to be set 00:28:12.409 [2024-04-24 05:23:49.572407] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd96830 is same with the state(5) to be set 00:28:12.409 [2024-04-24 05:23:49.572420] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd96830 is same with the state(5) to be set 00:28:12.409 [2024-04-24 05:23:49.572432] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd96830 is same with the state(5) to be set 00:28:12.409 05:23:49 -- host/failover.sh@59 -- # wait 1985970 00:28:18.983 0 00:28:18.983 05:23:55 -- host/failover.sh@61 -- # killprocess 1985833 00:28:18.983 05:23:55 -- common/autotest_common.sh@936 -- # '[' -z 1985833 ']' 00:28:18.983 05:23:55 -- common/autotest_common.sh@940 -- # kill -0 1985833 00:28:18.983 05:23:55 -- common/autotest_common.sh@941 -- # uname 00:28:18.983 05:23:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:18.983 05:23:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1985833 00:28:18.983 05:23:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:18.983 05:23:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:18.983 05:23:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1985833' 00:28:18.983 killing process with pid 1985833 00:28:18.983 05:23:55 -- common/autotest_common.sh@955 -- # kill 1985833 00:28:18.984 05:23:55 -- common/autotest_common.sh@960 -- # wait 1985833 00:28:18.984 05:23:55 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:18.984 [2024-04-24 05:23:39.125199] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 
24.07.0-rc0 initialization... 00:28:18.984 [2024-04-24 05:23:39.125277] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1985833 ] 00:28:18.984 EAL: No free 2048 kB hugepages reported on node 1 00:28:18.984 [2024-04-24 05:23:39.156438] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:18.984 [2024-04-24 05:23:39.184511] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.984 [2024-04-24 05:23:39.267396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:18.984 Running I/O for 15 seconds... 00:28:18.984 [2024-04-24 05:23:41.472930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.984 [2024-04-24 05:23:41.472973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.984 [2024-04-24 05:23:41.473001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.984 [2024-04-24 05:23:41.473017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.984 [2024-04-24 05:23:41.473034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:80784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.984 [2024-04-24 05:23:41.473048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.984 [2024-04-24 05:23:41.473064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:79896 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.984 [2024-04-24 05:23:41.473078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.984 [2024-04-24 05:23:41.473093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:79904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.984 [2024-04-24 05:23:41.473107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.984 [2024-04-24 05:23:41.473123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:79912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.984 [2024-04-24 05:23:41.473137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.984 [2024-04-24 05:23:41.473152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:79920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.984 [2024-04-24 05:23:41.473166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.984 [2024-04-24 05:23:41.473197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:79928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.984 [2024-04-24 05:23:41.473210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.984 [2024-04-24 05:23:41.473225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:79936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.984 [2024-04-24 05:23:41.473238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.984 [2024-04-24 
05:23:41.473253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:79944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.984 [2024-04-24 05:23:41.473266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.984 [2024-04-24 05:23:41.473281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.984 [2024-04-24 05:23:41.473302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.984 [2024-04-24 05:23:41.473318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:79960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.984 [2024-04-24 05:23:41.473332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.984 [2024-04-24 05:23:41.473346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:79968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.984 [2024-04-24 05:23:41.473359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.984 [2024-04-24 05:23:41.473374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:79976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.984 [2024-04-24 05:23:41.473387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.984 [2024-04-24 05:23:41.473402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:79984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.984 [2024-04-24 05:23:41.473415] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.984 [2024-04-24 05:23:41.473430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:79992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.984 [2024-04-24 05:23:41.473444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.984 [2024-04-24 05:23:41.473458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:80000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.984 [2024-04-24 05:23:41.473472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.984 [2024-04-24 05:23:41.473488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:80008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.984 [2024-04-24 05:23:41.473501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.984 [2024-04-24 05:23:41.473516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:80792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.984 [2024-04-24 05:23:41.473529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.984 [2024-04-24 05:23:41.473544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.984 [2024-04-24 05:23:41.473558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.984 [2024-04-24 05:23:41.473572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:80808 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:18.984 [2024-04-24 05:23:41.473586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.984 [2024-04-24 05:23:41.473600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.984 [2024-04-24 05:23:41.473614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.984 [2024-04-24 05:23:41.473635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:80824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.984 [2024-04-24 05:23:41.473667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.984 [2024-04-24 05:23:41.473687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:80832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.984 [2024-04-24 05:23:41.473702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.984 [2024-04-24 05:23:41.473717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.984 [2024-04-24 05:23:41.473731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.984 [2024-04-24 05:23:41.473746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:80848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.984 [2024-04-24 05:23:41.473759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.984 [2024-04-24 05:23:41.473774] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:80856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.984 [2024-04-24 05:23:41.473788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.984 [2024-04-24 05:23:41.473803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.984 [2024-04-24 05:23:41.473817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.984 [2024-04-24 05:23:41.473831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:80872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.984 [2024-04-24 05:23:41.473845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.984 [2024-04-24 05:23:41.473860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.984 [2024-04-24 05:23:41.473874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.984 [2024-04-24 05:23:41.473889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:80888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.984 [2024-04-24 05:23:41.473903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.984 [2024-04-24 05:23:41.473918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.984 [2024-04-24 05:23:41.473931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.984 [2024-04-24 05:23:41.473961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.984 [2024-04-24 05:23:41.473974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.984 [2024-04-24 05:23:41.473989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:80016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.984 [2024-04-24 05:23:41.474002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.984 [2024-04-24 05:23:41.474017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:80024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.984 [2024-04-24 05:23:41.474031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.984 [2024-04-24 05:23:41.474045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:80032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.984 [2024-04-24 05:23:41.474062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.984 [2024-04-24 05:23:41.474076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:80040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.985 [2024-04-24 05:23:41.474090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.985 [2024-04-24 05:23:41.474104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:80048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:18.985 [2024-04-24 05:23:41.474117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.985 [2024-04-24 05:23:41.474132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:80056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.985 [2024-04-24 05:23:41.474160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.985 [2024-04-24 05:23:41.474175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:80064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.985 [2024-04-24 05:23:41.474187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.985 [2024-04-24 05:23:41.474201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:80072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.985 [2024-04-24 05:23:41.474215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.985 [2024-04-24 05:23:41.474229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:80080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.985 [2024-04-24 05:23:41.474242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.985 [2024-04-24 05:23:41.474256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:80088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.985 [2024-04-24 05:23:41.474269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.985 [2024-04-24 05:23:41.474282] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.985 [2024-04-24 05:23:41.474295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.985 [2024-04-24 05:23:41.474309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.985 [2024-04-24 05:23:41.474322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.985 [2024-04-24 05:23:41.474336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:80112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.985 [2024-04-24 05:23:41.474348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.985 [2024-04-24 05:23:41.474362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:80120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.985 [2024-04-24 05:23:41.474375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.985 [2024-04-24 05:23:41.474389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:80128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.985 [2024-04-24 05:23:41.474402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.985 [2024-04-24 05:23:41.474419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:80136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.985 [2024-04-24 05:23:41.474433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.985 [2024-04-24 05:23:41.474447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:80144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.985 [2024-04-24 05:23:41.474460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.985 [2024-04-24 05:23:41.474474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:80152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.985 [2024-04-24 05:23:41.474487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.985 [2024-04-24 05:23:41.474501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:80160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.985 [2024-04-24 05:23:41.474513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.985 [2024-04-24 05:23:41.474528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:80168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.985 [2024-04-24 05:23:41.474541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.985 [2024-04-24 05:23:41.474554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.985 [2024-04-24 05:23:41.474567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.985 [2024-04-24 05:23:41.474581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:80184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:18.985 [2024-04-24 05:23:41.474594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.985 [2024-04-24 05:23:41.474608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:80192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.985 [2024-04-24 05:23:41.474620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.985 [2024-04-24 05:23:41.474655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.985 [2024-04-24 05:23:41.474670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.985 [2024-04-24 05:23:41.474685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:80208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.985 [2024-04-24 05:23:41.474698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.985 [2024-04-24 05:23:41.474713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:80216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.985 [2024-04-24 05:23:41.474726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.985 [2024-04-24 05:23:41.474740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.985 [2024-04-24 05:23:41.474753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.985 [2024-04-24 05:23:41.474768] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.985 [2024-04-24 05:23:41.474781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.985 [2024-04-24 05:23:41.474799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:80240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.985 [2024-04-24 05:23:41.474813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.985 [2024-04-24 05:23:41.474827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:80248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.985 [2024-04-24 05:23:41.474841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.985 [2024-04-24 05:23:41.474855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:80256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.985 [2024-04-24 05:23:41.474868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.985 [2024-04-24 05:23:41.474883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:80264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.985 [2024-04-24 05:23:41.474896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.985 [2024-04-24 05:23:41.474911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:80272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.985 [2024-04-24 05:23:41.474924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.985 [2024-04-24 05:23:41.474954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:80280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.985 [2024-04-24 05:23:41.474967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.985 [2024-04-24 05:23:41.474981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:80288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.985 [2024-04-24 05:23:41.474993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.985 [2024-04-24 05:23:41.475008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:80296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.985 [2024-04-24 05:23:41.475020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.985 [2024-04-24 05:23:41.475051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:80304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.985 [2024-04-24 05:23:41.475064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.985 [2024-04-24 05:23:41.475078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:80312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.985 [2024-04-24 05:23:41.475091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.985 [2024-04-24 05:23:41.475106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:80320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:18.985 [2024-04-24 05:23:41.475119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.985 [2024-04-24 05:23:41.475133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:80328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.985 [2024-04-24 05:23:41.475146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.985 [2024-04-24 05:23:41.475161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:80336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.985 [2024-04-24 05:23:41.475177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.985 [2024-04-24 05:23:41.475192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:80344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.985 [2024-04-24 05:23:41.475205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.985 [2024-04-24 05:23:41.475220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:80352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.986 [2024-04-24 05:23:41.475233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.986 [2024-04-24 05:23:41.475247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:80360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.986 [2024-04-24 05:23:41.475261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.986 [2024-04-24 05:23:41.475275] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.986 [2024-04-24 05:23:41.475288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.986 [2024-04-24 05:23:41.475303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:80376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.986 [2024-04-24 05:23:41.475316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.986 [2024-04-24 05:23:41.475330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:80912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.986 [2024-04-24 05:23:41.475343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.986 [2024-04-24 05:23:41.475358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:80384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.986 [2024-04-24 05:23:41.475372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.986 [2024-04-24 05:23:41.475387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:80392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.986 [2024-04-24 05:23:41.475400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.986 [2024-04-24 05:23:41.475414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:80400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.986 [2024-04-24 05:23:41.475427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.986 [2024-04-24 05:23:41.475442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:80408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.986 [2024-04-24 05:23:41.475455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.986 [2024-04-24 05:23:41.475469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:80416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.986 [2024-04-24 05:23:41.475482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.986 [2024-04-24 05:23:41.475496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:80424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.986 [2024-04-24 05:23:41.475509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.986 [2024-04-24 05:23:41.475531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:80432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.986 [2024-04-24 05:23:41.475545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.986 [2024-04-24 05:23:41.475560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:80440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.986 [2024-04-24 05:23:41.475573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.986 [2024-04-24 05:23:41.475587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:80448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.986 
[2024-04-24 05:23:41.475600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.986 [2024-04-24 05:23:41.475615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:80456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.986 [2024-04-24 05:23:41.475633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.986 [2024-04-24 05:23:41.475665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:80464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.986 [2024-04-24 05:23:41.475679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.986 [2024-04-24 05:23:41.475694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:80472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.986 [2024-04-24 05:23:41.475708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.986 [2024-04-24 05:23:41.475723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:80480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.986 [2024-04-24 05:23:41.475737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.986 [2024-04-24 05:23:41.475752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:80488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.986 [2024-04-24 05:23:41.475765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.986 [2024-04-24 05:23:41.475780] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:80496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.986 [2024-04-24 05:23:41.475793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.986 [2024-04-24 05:23:41.475808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.986 [2024-04-24 05:23:41.475821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.986 [2024-04-24 05:23:41.475836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.986 [2024-04-24 05:23:41.475849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.986 [2024-04-24 05:23:41.475865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.986 [2024-04-24 05:23:41.475878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.986 [2024-04-24 05:23:41.475893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:80528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.986 [2024-04-24 05:23:41.475910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.986 [2024-04-24 05:23:41.475926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:80536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.986 [2024-04-24 05:23:41.475955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.986 [2024-04-24 05:23:41.475970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:80544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.986 [2024-04-24 05:23:41.475984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.986 [2024-04-24 05:23:41.475998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:80552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.986 [2024-04-24 05:23:41.476012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.986 [2024-04-24 05:23:41.476026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:80560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.986 [2024-04-24 05:23:41.476039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.986 [2024-04-24 05:23:41.476054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:80568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.986 [2024-04-24 05:23:41.476067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.986 [2024-04-24 05:23:41.476082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:80576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.986 [2024-04-24 05:23:41.476095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.986 [2024-04-24 05:23:41.476110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:80584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.986 
[2024-04-24 05:23:41.476123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.986 [2024-04-24 05:23:41.476137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:80592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.986 [2024-04-24 05:23:41.476151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.986 [2024-04-24 05:23:41.476166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:80600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.986 [2024-04-24 05:23:41.476178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.986 [2024-04-24 05:23:41.476193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:80608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.986 [2024-04-24 05:23:41.476206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.986 [2024-04-24 05:23:41.476221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:80616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.986 [2024-04-24 05:23:41.476235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.986 [2024-04-24 05:23:41.476249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:80624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.986 [2024-04-24 05:23:41.476262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.986 [2024-04-24 05:23:41.476280] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:80632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.986 [2024-04-24 05:23:41.476293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.986 [2024-04-24 05:23:41.476308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:80640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.986 [2024-04-24 05:23:41.476322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.986 [2024-04-24 05:23:41.476337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:80648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.986 [2024-04-24 05:23:41.476351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.986 [2024-04-24 05:23:41.476365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:80656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.987 [2024-04-24 05:23:41.476378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.987 [2024-04-24 05:23:41.476393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:80664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.987 [2024-04-24 05:23:41.476406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.987 [2024-04-24 05:23:41.476420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:80672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.987 [2024-04-24 05:23:41.476433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.987 [2024-04-24 05:23:41.476448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:80680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.987 [2024-04-24 05:23:41.476461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.987 [2024-04-24 05:23:41.476476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:80688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.987 [2024-04-24 05:23:41.476489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.987 [2024-04-24 05:23:41.476503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:80696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.987 [2024-04-24 05:23:41.476516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.987 [2024-04-24 05:23:41.476531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:80704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.987 [2024-04-24 05:23:41.476545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.987 [2024-04-24 05:23:41.476559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:80712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.987 [2024-04-24 05:23:41.476572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.987 [2024-04-24 05:23:41.476587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:80720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.987 
[2024-04-24 05:23:41.476600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.987 [2024-04-24 05:23:41.476615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:80728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.987 [2024-04-24 05:23:41.476635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.987 [2024-04-24 05:23:41.476673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:80736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.987 [2024-04-24 05:23:41.476687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.987 [2024-04-24 05:23:41.476702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:80744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.987 [2024-04-24 05:23:41.476716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.987 [2024-04-24 05:23:41.476732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:80752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.987 [2024-04-24 05:23:41.476745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.987 [2024-04-24 05:23:41.476760] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f61e0 is same with the state(5) to be set
00:28:18.987 [2024-04-24 05:23:41.476777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:28:18.987 [2024-04-24 05:23:41.476788] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:28:18.987 [2024-04-24 05:23:41.476800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80760 len:8 PRP1 0x0 PRP2 0x0
00:28:18.987 [2024-04-24 05:23:41.476813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.987 [2024-04-24 05:23:41.476887] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8f61e0 was disconnected and freed. reset controller.
00:28:18.987 [2024-04-24 05:23:41.476905] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:28:18.987 [2024-04-24 05:23:41.476952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:18.987 [2024-04-24 05:23:41.476970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.987 [2024-04-24 05:23:41.476985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:18.987 [2024-04-24 05:23:41.476998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.987 [2024-04-24 05:23:41.477027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:18.987 [2024-04-24 05:23:41.477040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.987 [2024-04-24 05:23:41.477054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:18.987 [2024-04-24 05:23:41.477075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.987 [2024-04-24 05:23:41.477092] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:18.987 [2024-04-24 05:23:41.480446] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:18.987 [2024-04-24 05:23:41.480487] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d7290 (9): Bad file descriptor
00:28:18.987 [2024-04-24 05:23:41.556713] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:28:18.987 [2024-04-24 05:23:45.044402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:78464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.987 [2024-04-24 05:23:45.044473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.987 [2024-04-24 05:23:45.044513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:78648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.987 [2024-04-24 05:23:45.044530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.987 [2024-04-24 05:23:45.044546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:78656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.987 [2024-04-24 05:23:45.044560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.987 [2024-04-24 05:23:45.044574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:78664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.987 [2024-04-24 05:23:45.044588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.987 
[2024-04-24 05:23:45.044603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:78672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.987 [2024-04-24 05:23:45.044624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.987 [2024-04-24 05:23:45.044650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:78680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.987 [2024-04-24 05:23:45.044665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.987 [2024-04-24 05:23:45.044680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:78688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.987 [2024-04-24 05:23:45.044694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.987 [2024-04-24 05:23:45.044709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:78696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.987 [2024-04-24 05:23:45.044723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.987 [2024-04-24 05:23:45.044738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:78704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.987 [2024-04-24 05:23:45.044751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.987 [2024-04-24 05:23:45.044766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:78712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.987 [2024-04-24 05:23:45.044780] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.987 [2024-04-24 05:23:45.044796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:78720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.987 [2024-04-24 05:23:45.044809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.987 [2024-04-24 05:23:45.044824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:78728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.987 [2024-04-24 05:23:45.044838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.987 [2024-04-24 05:23:45.044853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.987 [2024-04-24 05:23:45.044867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.987 [2024-04-24 05:23:45.044882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.987 [2024-04-24 05:23:45.044899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.987 [2024-04-24 05:23:45.044915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.987 [2024-04-24 05:23:45.044929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.987 [2024-04-24 05:23:45.044947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:78760 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.987 [2024-04-24 05:23:45.044960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.987 [2024-04-24 05:23:45.044975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:78768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.987 [2024-04-24 05:23:45.044989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.987 [2024-04-24 05:23:45.045013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:78776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.987 [2024-04-24 05:23:45.045027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.987 [2024-04-24 05:23:45.045042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.987 [2024-04-24 05:23:45.045056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.988 [2024-04-24 05:23:45.045071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.988 [2024-04-24 05:23:45.045085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.988 [2024-04-24 05:23:45.045100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:78800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.988 [2024-04-24 05:23:45.045113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.988 [2024-04-24 
05:23:45.045128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:78808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.988 [2024-04-24 05:23:45.045147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.988 [2024-04-24 05:23:45.045162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:78816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.988 [2024-04-24 05:23:45.045176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.988 [2024-04-24 05:23:45.045191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.988 [2024-04-24 05:23:45.045205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.988 [2024-04-24 05:23:45.045219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:78832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.988 [2024-04-24 05:23:45.045233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.988 [2024-04-24 05:23:45.045248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:78840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.988 [2024-04-24 05:23:45.045262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.988 [2024-04-24 05:23:45.045281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:78848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.988 [2024-04-24 05:23:45.045295] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.988 [2024-04-24 05:23:45.045310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:78856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.988 [2024-04-24 05:23:45.045324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.988 [2024-04-24 05:23:45.045339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:78864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.988 [2024-04-24 05:23:45.045353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.988 [2024-04-24 05:23:45.045368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:78872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.988 [2024-04-24 05:23:45.045381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.988 [2024-04-24 05:23:45.045396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:78880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.988 [2024-04-24 05:23:45.045410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.988 [2024-04-24 05:23:45.045426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:78888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.988 [2024-04-24 05:23:45.045439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.988 [2024-04-24 05:23:45.045454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:78896 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:28:18.988 [2024-04-24 05:23:45.045468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.988 [2024-04-24 05:23:45.045483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:78904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.988 [2024-04-24 05:23:45.045497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.988 [2024-04-24 05:23:45.045511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.988 [2024-04-24 05:23:45.045525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.988 [2024-04-24 05:23:45.045541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:78920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.988 [2024-04-24 05:23:45.045554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.988 [2024-04-24 05:23:45.045569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:78928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.988 [2024-04-24 05:23:45.045582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.988 [2024-04-24 05:23:45.045597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:78936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.988 [2024-04-24 05:23:45.045611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.988 [2024-04-24 05:23:45.045626] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:78944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.988 [2024-04-24 05:23:45.045648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.988 [2024-04-24 05:23:45.045667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:78952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.988 [2024-04-24 05:23:45.045685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.988 [2024-04-24 05:23:45.045700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:78960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.988 [2024-04-24 05:23:45.045713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.988 [2024-04-24 05:23:45.045728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:78968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.988 [2024-04-24 05:23:45.045742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.988 [2024-04-24 05:23:45.045757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:78976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.988 [2024-04-24 05:23:45.045771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.988 [2024-04-24 05:23:45.045786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:78984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.988 [2024-04-24 05:23:45.045799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.988 [2024-04-24 05:23:45.045814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:78992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.988 [2024-04-24 05:23:45.045827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.989 [2024-04-24 05:23:45.045842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:78472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.989 [2024-04-24 05:23:45.045856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.989 [2024-04-24 05:23:45.045871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:78480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.989 [2024-04-24 05:23:45.045885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.989 [2024-04-24 05:23:45.045900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:78488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.989 [2024-04-24 05:23:45.045914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.989 [2024-04-24 05:23:45.045929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.989 [2024-04-24 05:23:45.045943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.989 [2024-04-24 05:23:45.045958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.989 
[2024-04-24 05:23:45.045972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.989 [2024-04-24 05:23:45.045993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:78512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.989 [2024-04-24 05:23:45.046007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.989 [2024-04-24 05:23:45.046022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.989 [2024-04-24 05:23:45.046039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.989 [2024-04-24 05:23:45.046054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:79000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.989 [2024-04-24 05:23:45.046067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.989 [2024-04-24 05:23:45.046082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:79008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.989 [2024-04-24 05:23:45.046096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.989 [2024-04-24 05:23:45.046111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.989 [2024-04-24 05:23:45.046124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.989 [2024-04-24 05:23:45.046139] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:79024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.989 [2024-04-24 05:23:45.046152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.989 [2024-04-24 05:23:45.046167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.989 [2024-04-24 05:23:45.046181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.989 [2024-04-24 05:23:45.046195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.989 [2024-04-24 05:23:45.046209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.989 [2024-04-24 05:23:45.046224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:79048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.989 [2024-04-24 05:23:45.046237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.989 [2024-04-24 05:23:45.046252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.989 [2024-04-24 05:23:45.046265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.989 [2024-04-24 05:23:45.046280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:79064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.989 [2024-04-24 05:23:45.046294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:18.989 [2024-04-24 05:23:45.046309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:79072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.989 [2024-04-24 05:23:45.046322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.989 [2024-04-24 05:23:45.046337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:79080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.989 [2024-04-24 05:23:45.046350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.989 [2024-04-24 05:23:45.046365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:79088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.989 [2024-04-24 05:23:45.046378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.989 [2024-04-24 05:23:45.046397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:79096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.989 [2024-04-24 05:23:45.046411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.989 [2024-04-24 05:23:45.046426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:79104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.989 [2024-04-24 05:23:45.046439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.989 [2024-04-24 05:23:45.046454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:79112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.989 [2024-04-24 05:23:45.046468] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.989 [2024-04-24 05:23:45.046483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:79120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.989 [2024-04-24 05:23:45.046496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.989 [2024-04-24 05:23:45.046511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:79128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.989 [2024-04-24 05:23:45.046524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.989 [2024-04-24 05:23:45.046539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.989 [2024-04-24 05:23:45.046553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.989 [2024-04-24 05:23:45.046568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:79144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.989 [2024-04-24 05:23:45.046581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.989 [2024-04-24 05:23:45.046596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.989 [2024-04-24 05:23:45.046610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.989 [2024-04-24 05:23:45.046625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 
lba:79160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.989 [2024-04-24 05:23:45.046646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.989 [2024-04-24 05:23:45.046662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:79168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.989 [2024-04-24 05:23:45.046676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.989 [2024-04-24 05:23:45.046691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.989 [2024-04-24 05:23:45.046705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.989 [2024-04-24 05:23:45.046720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.989 [2024-04-24 05:23:45.046733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.989 [2024-04-24 05:23:45.046748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.989 [2024-04-24 05:23:45.046769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.989 [2024-04-24 05:23:45.046784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:79200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.989 [2024-04-24 05:23:45.046798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.989 
[2024-04-24 05:23:45.046813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.989 [2024-04-24 05:23:45.046827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.989 [2024-04-24 05:23:45.046841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:79216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.989 [2024-04-24 05:23:45.046855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.989 [2024-04-24 05:23:45.046885] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.989 [2024-04-24 05:23:45.046901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79224 len:8 PRP1 0x0 PRP2 0x0 00:28:18.989 [2024-04-24 05:23:45.046915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.989 [2024-04-24 05:23:45.046937] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.989 [2024-04-24 05:23:45.046949] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.989 [2024-04-24 05:23:45.046960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79232 len:8 PRP1 0x0 PRP2 0x0 00:28:18.989 [2024-04-24 05:23:45.046972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.989 [2024-04-24 05:23:45.046986] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.989 [2024-04-24 05:23:45.046997] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:28:18.989 [2024-04-24 05:23:45.047008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79240 len:8 PRP1 0x0 PRP2 0x0 00:28:18.989 [2024-04-24 05:23:45.047020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.989 [2024-04-24 05:23:45.047033] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.989 [2024-04-24 05:23:45.047044] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.990 [2024-04-24 05:23:45.047055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79248 len:8 PRP1 0x0 PRP2 0x0 00:28:18.990 [2024-04-24 05:23:45.047067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.990 [2024-04-24 05:23:45.047080] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.990 [2024-04-24 05:23:45.047090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.990 [2024-04-24 05:23:45.047101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79256 len:8 PRP1 0x0 PRP2 0x0 00:28:18.990 [2024-04-24 05:23:45.047114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.990 [2024-04-24 05:23:45.047126] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.990 [2024-04-24 05:23:45.047137] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.990 [2024-04-24 05:23:45.047163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79264 len:8 PRP1 0x0 PRP2 0x0 00:28:18.990 [2024-04-24 05:23:45.047175] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.990 [2024-04-24 05:23:45.047192] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.990 [2024-04-24 05:23:45.047204] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.990 [2024-04-24 05:23:45.047214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79272 len:8 PRP1 0x0 PRP2 0x0 00:28:18.990 [2024-04-24 05:23:45.047226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.990 [2024-04-24 05:23:45.047239] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.990 [2024-04-24 05:23:45.047249] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.990 [2024-04-24 05:23:45.047260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79280 len:8 PRP1 0x0 PRP2 0x0 00:28:18.990 [2024-04-24 05:23:45.047272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.990 [2024-04-24 05:23:45.047285] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.990 [2024-04-24 05:23:45.047295] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.990 [2024-04-24 05:23:45.047306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79288 len:8 PRP1 0x0 PRP2 0x0 00:28:18.990 [2024-04-24 05:23:45.047318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.990 [2024-04-24 05:23:45.047331] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.990 [2024-04-24 05:23:45.047342] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.990 [2024-04-24 05:23:45.047353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79296 len:8 PRP1 0x0 PRP2 0x0 00:28:18.990 [2024-04-24 05:23:45.047365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.990 [2024-04-24 05:23:45.047378] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.990 [2024-04-24 05:23:45.047389] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.990 [2024-04-24 05:23:45.047400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79304 len:8 PRP1 0x0 PRP2 0x0 00:28:18.990 [2024-04-24 05:23:45.047411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.990 [2024-04-24 05:23:45.047424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.990 [2024-04-24 05:23:45.047435] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.990 [2024-04-24 05:23:45.047446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79312 len:8 PRP1 0x0 PRP2 0x0 00:28:18.990 [2024-04-24 05:23:45.047458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.990 [2024-04-24 05:23:45.047470] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.990 [2024-04-24 05:23:45.047481] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.990 [2024-04-24 05:23:45.047492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79320 len:8 PRP1 0x0 PRP2 0x0 
00:28:18.990 [2024-04-24 05:23:45.047518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.990 [2024-04-24 05:23:45.047532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.990 [2024-04-24 05:23:45.047543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.990 [2024-04-24 05:23:45.047554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79328 len:8 PRP1 0x0 PRP2 0x0 00:28:18.990 [2024-04-24 05:23:45.047570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.990 [2024-04-24 05:23:45.047583] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.990 [2024-04-24 05:23:45.047594] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.990 [2024-04-24 05:23:45.047605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79336 len:8 PRP1 0x0 PRP2 0x0 00:28:18.990 [2024-04-24 05:23:45.047618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.990 [2024-04-24 05:23:45.047637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.990 [2024-04-24 05:23:45.047649] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.990 [2024-04-24 05:23:45.047660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79344 len:8 PRP1 0x0 PRP2 0x0 00:28:18.990 [2024-04-24 05:23:45.047682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.990 [2024-04-24 05:23:45.047695] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.990 [2024-04-24 05:23:45.047706] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.990 [2024-04-24 05:23:45.047717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79352 len:8 PRP1 0x0 PRP2 0x0 00:28:18.990 [2024-04-24 05:23:45.047730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.990 [2024-04-24 05:23:45.047743] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.990 [2024-04-24 05:23:45.047754] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.990 [2024-04-24 05:23:45.047765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79360 len:8 PRP1 0x0 PRP2 0x0 00:28:18.990 [2024-04-24 05:23:45.047778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.990 [2024-04-24 05:23:45.047791] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.990 [2024-04-24 05:23:45.047802] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.990 [2024-04-24 05:23:45.047813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79368 len:8 PRP1 0x0 PRP2 0x0 00:28:18.990 [2024-04-24 05:23:45.047826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.990 [2024-04-24 05:23:45.047839] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.990 [2024-04-24 05:23:45.047850] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.990 [2024-04-24 05:23:45.047861] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79376 len:8 PRP1 0x0 PRP2 0x0 00:28:18.990 [2024-04-24 05:23:45.047873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.990 [2024-04-24 05:23:45.047886] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.990 [2024-04-24 05:23:45.047897] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.990 [2024-04-24 05:23:45.047908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79384 len:8 PRP1 0x0 PRP2 0x0 00:28:18.990 [2024-04-24 05:23:45.047946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.990 [2024-04-24 05:23:45.047959] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.990 [2024-04-24 05:23:45.047973] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.990 [2024-04-24 05:23:45.047984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79392 len:8 PRP1 0x0 PRP2 0x0 00:28:18.990 [2024-04-24 05:23:45.047996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.990 [2024-04-24 05:23:45.048008] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.990 [2024-04-24 05:23:45.048019] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.990 [2024-04-24 05:23:45.048040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79400 len:8 PRP1 0x0 PRP2 0x0 00:28:18.990 [2024-04-24 05:23:45.048051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.990 [2024-04-24 05:23:45.048064] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.990 [2024-04-24 05:23:45.048075] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.990 [2024-04-24 05:23:45.048085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79408 len:8 PRP1 0x0 PRP2 0x0 00:28:18.990 [2024-04-24 05:23:45.048098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.990 [2024-04-24 05:23:45.048110] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.990 [2024-04-24 05:23:45.048121] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.990 [2024-04-24 05:23:45.048131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79416 len:8 PRP1 0x0 PRP2 0x0 00:28:18.990 [2024-04-24 05:23:45.048143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.990 [2024-04-24 05:23:45.048156] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.990 [2024-04-24 05:23:45.048167] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.990 [2024-04-24 05:23:45.048177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79424 len:8 PRP1 0x0 PRP2 0x0 00:28:18.990 [2024-04-24 05:23:45.048189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.990 [2024-04-24 05:23:45.048202] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.990 [2024-04-24 05:23:45.048212] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.990 [2024-04-24 05:23:45.048223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79432 len:8 PRP1 0x0 PRP2 0x0 00:28:18.990 [2024-04-24 05:23:45.048235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.991 [2024-04-24 05:23:45.048247] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.991 [2024-04-24 05:23:45.048258] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.991 [2024-04-24 05:23:45.048268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79440 len:8 PRP1 0x0 PRP2 0x0 00:28:18.991 [2024-04-24 05:23:45.048280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.991 [2024-04-24 05:23:45.048293] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.991 [2024-04-24 05:23:45.048304] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.991 [2024-04-24 05:23:45.048314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79448 len:8 PRP1 0x0 PRP2 0x0 00:28:18.991 [2024-04-24 05:23:45.048326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.991 [2024-04-24 05:23:45.048342] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.991 [2024-04-24 05:23:45.048352] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.991 [2024-04-24 05:23:45.048363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79456 len:8 PRP1 0x0 PRP2 0x0 00:28:18.991 
[2024-04-24 05:23:45.048375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.991 [2024-04-24 05:23:45.048387] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.991 [2024-04-24 05:23:45.048398] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.991 [2024-04-24 05:23:45.048408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79464 len:8 PRP1 0x0 PRP2 0x0 00:28:18.991 [2024-04-24 05:23:45.048420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.991 [2024-04-24 05:23:45.048433] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.991 [2024-04-24 05:23:45.048443] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.991 [2024-04-24 05:23:45.048460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79472 len:8 PRP1 0x0 PRP2 0x0 00:28:18.991 [2024-04-24 05:23:45.048471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.991 [2024-04-24 05:23:45.048484] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.991 [2024-04-24 05:23:45.048494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.991 [2024-04-24 05:23:45.048505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79480 len:8 PRP1 0x0 PRP2 0x0 00:28:18.991 [2024-04-24 05:23:45.048526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.991 [2024-04-24 05:23:45.048540] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:28:18.991 [2024-04-24 05:23:45.048551] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.991 [2024-04-24 05:23:45.048562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78528 len:8 PRP1 0x0 PRP2 0x0 00:28:18.991 [2024-04-24 05:23:45.048574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.991 [2024-04-24 05:23:45.048586] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.991 [2024-04-24 05:23:45.048597] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.991 [2024-04-24 05:23:45.048623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78536 len:8 PRP1 0x0 PRP2 0x0 00:28:18.991 [2024-04-24 05:23:45.048645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.991 [2024-04-24 05:23:45.048659] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.991 [2024-04-24 05:23:45.048678] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.991 [2024-04-24 05:23:45.048689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78544 len:8 PRP1 0x0 PRP2 0x0 00:28:18.991 [2024-04-24 05:23:45.048702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.991 [2024-04-24 05:23:45.048715] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.991 [2024-04-24 05:23:45.048726] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.991 [2024-04-24 05:23:45.048737] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78552 len:8 PRP1 0x0 PRP2 0x0 00:28:18.991 [2024-04-24 05:23:45.048753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.991 [2024-04-24 05:23:45.048767] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.991 [2024-04-24 05:23:45.048778] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.991 [2024-04-24 05:23:45.048789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78560 len:8 PRP1 0x0 PRP2 0x0 00:28:18.991 [2024-04-24 05:23:45.048801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.991 [2024-04-24 05:23:45.048814] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.991 [2024-04-24 05:23:45.048825] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.991 [2024-04-24 05:23:45.048836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78568 len:8 PRP1 0x0 PRP2 0x0 00:28:18.991 [2024-04-24 05:23:45.048849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.991 [2024-04-24 05:23:45.048861] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.991 [2024-04-24 05:23:45.048872] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.991 [2024-04-24 05:23:45.048883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78576 len:8 PRP1 0x0 PRP2 0x0 00:28:18.991 [2024-04-24 05:23:45.048895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.991 
[2024-04-24 05:23:45.048908] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.991 [2024-04-24 05:23:45.048924] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.991 [2024-04-24 05:23:45.048935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78584 len:8 PRP1 0x0 PRP2 0x0 00:28:18.991 [2024-04-24 05:23:45.048953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.991 [2024-04-24 05:23:45.048991] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.991 [2024-04-24 05:23:45.049002] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.991 [2024-04-24 05:23:45.049012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78592 len:8 PRP1 0x0 PRP2 0x0 00:28:18.991 [2024-04-24 05:23:45.049024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.991 [2024-04-24 05:23:45.049037] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.991 [2024-04-24 05:23:45.049047] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.991 [2024-04-24 05:23:45.049058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78600 len:8 PRP1 0x0 PRP2 0x0 00:28:18.991 [2024-04-24 05:23:45.049069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.991 [2024-04-24 05:23:45.049082] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.991 [2024-04-24 05:23:45.049092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.991 
[2024-04-24 05:23:45.049103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78608 len:8 PRP1 0x0 PRP2 0x0 00:28:18.991 [2024-04-24 05:23:45.049115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.991 [2024-04-24 05:23:45.049127] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.991 [2024-04-24 05:23:45.049138] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.991 [2024-04-24 05:23:45.049151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78616 len:8 PRP1 0x0 PRP2 0x0 00:28:18.991 [2024-04-24 05:23:45.049164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.991 [2024-04-24 05:23:45.049176] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.991 [2024-04-24 05:23:45.049187] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.991 [2024-04-24 05:23:45.049198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78624 len:8 PRP1 0x0 PRP2 0x0 00:28:18.991 [2024-04-24 05:23:45.049210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.991 [2024-04-24 05:23:45.049222] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.991 [2024-04-24 05:23:45.049233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.991 [2024-04-24 05:23:45.049244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78632 len:8 PRP1 0x0 PRP2 0x0 00:28:18.991 [2024-04-24 05:23:45.049255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.991 [2024-04-24 05:23:45.049268] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.991 [2024-04-24 05:23:45.049279] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.991 [2024-04-24 05:23:45.049290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78640 len:8 PRP1 0x0 PRP2 0x0 00:28:18.991 [2024-04-24 05:23:45.049302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.991 [2024-04-24 05:23:45.049372] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8f8220 was disconnected and freed. reset controller. 00:28:18.991 [2024-04-24 05:23:45.049390] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:28:18.991 [2024-04-24 05:23:45.049437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:18.991 [2024-04-24 05:23:45.049461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.991 [2024-04-24 05:23:45.049477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:18.991 [2024-04-24 05:23:45.049498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.991 [2024-04-24 05:23:45.049514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:18.991 [2024-04-24 05:23:45.049527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:18.992 [2024-04-24 05:23:45.049541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:18.992 [2024-04-24 05:23:45.049554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.992 [2024-04-24 05:23:45.049566] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:18.992 [2024-04-24 05:23:45.049623] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d7290 (9): Bad file descriptor 00:28:18.992 [2024-04-24 05:23:45.052918] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:18.992 [2024-04-24 05:23:45.175394] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:18.992 [2024-04-24 05:23:49.572576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:28944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.992 [2024-04-24 05:23:49.572645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.992 [2024-04-24 05:23:49.572676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:28952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.992 [2024-04-24 05:23:49.572692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.992 [2024-04-24 05:23:49.572708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:28960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.992 [2024-04-24 05:23:49.572722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.992 [2024-04-24 
05:23:49.572737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:28968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.992 [2024-04-24 05:23:49.572751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.992 [2024-04-24 05:23:49.572766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:28976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.992 [2024-04-24 05:23:49.572779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.992 [2024-04-24 05:23:49.572794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:28984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.992 [2024-04-24 05:23:49.572808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.992 [2024-04-24 05:23:49.572823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:28992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.992 [2024-04-24 05:23:49.572837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.992 [2024-04-24 05:23:49.572851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:29000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.992 [2024-04-24 05:23:49.572865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.992 [2024-04-24 05:23:49.572880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:29008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.992 [2024-04-24 05:23:49.572893] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.992 [2024-04-24 05:23:49.572909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:29016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.992 [2024-04-24 05:23:49.572945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.992 [2024-04-24 05:23:49.572960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:29024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.992 [2024-04-24 05:23:49.572974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.992 [2024-04-24 05:23:49.573013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:29032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.992 [2024-04-24 05:23:49.573026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.992 [2024-04-24 05:23:49.573041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.992 [2024-04-24 05:23:49.573053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.992 [2024-04-24 05:23:49.573082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:29048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.992 [2024-04-24 05:23:49.573095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.992 [2024-04-24 05:23:49.573110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:29056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.992 [2024-04-24 05:23:49.573122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.992 [2024-04-24 05:23:49.573137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:29064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.992 [2024-04-24 05:23:49.573149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.992 [2024-04-24 05:23:49.573178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.992 [2024-04-24 05:23:49.573193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.992 [2024-04-24 05:23:49.573208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:29080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.992 [2024-04-24 05:23:49.573221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.992 [2024-04-24 05:23:49.573236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:29088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.992 [2024-04-24 05:23:49.573250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.992 [2024-04-24 05:23:49.573264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:29096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.992 [2024-04-24 05:23:49.573278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.992 [2024-04-24 05:23:49.573293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:29104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.992 [2024-04-24 05:23:49.573306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.992 [2024-04-24 05:23:49.573320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.992 [2024-04-24 05:23:49.573333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.992 [2024-04-24 05:23:49.573348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:29120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.992 [2024-04-24 05:23:49.573361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.992 [2024-04-24 05:23:49.573376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.992 [2024-04-24 05:23:49.573389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.992 [2024-04-24 05:23:49.573404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:29136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.992 [2024-04-24 05:23:49.573418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.992 [2024-04-24 05:23:49.573432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:29144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.992 [2024-04-24 05:23:49.573449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.992 [2024-04-24 05:23:49.573470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:29152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.992 [2024-04-24 05:23:49.573483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.992 [2024-04-24 05:23:49.573497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:29160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.992 [2024-04-24 05:23:49.573525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.992 [2024-04-24 05:23:49.573550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:29168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.992 [2024-04-24 05:23:49.573563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.992 [2024-04-24 05:23:49.573577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:29176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.992 [2024-04-24 05:23:49.573590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.992 [2024-04-24 05:23:49.573604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.992 [2024-04-24 05:23:49.573617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.992 [2024-04-24 05:23:49.573657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:29192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.992 [2024-04-24 05:23:49.573672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.992 [2024-04-24 05:23:49.573687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:29200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.992 [2024-04-24 05:23:49.573701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.992 [2024-04-24 05:23:49.573716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:29208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.992 [2024-04-24 05:23:49.573730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.992 [2024-04-24 05:23:49.573744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:29216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.992 [2024-04-24 05:23:49.573758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.992 [2024-04-24 05:23:49.573772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:29224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.992 [2024-04-24 05:23:49.573786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.992 [2024-04-24 05:23:49.573801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:29232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.992 [2024-04-24 05:23:49.573814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.992 [2024-04-24 05:23:49.573828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:29240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.993 [2024-04-24 05:23:49.573841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.993 [2024-04-24 05:23:49.573856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:29248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.993 [2024-04-24 05:23:49.573873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.993 [2024-04-24 05:23:49.573888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.993 [2024-04-24 05:23:49.573901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.993 [2024-04-24 05:23:49.573916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:29264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.993 [2024-04-24 05:23:49.573929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.993 [2024-04-24 05:23:49.573965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:29272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.993 [2024-04-24 05:23:49.573978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.993 [2024-04-24 05:23:49.573992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:29280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.993 [2024-04-24 05:23:49.574004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.993 [2024-04-24 05:23:49.574019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:29288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.993 [2024-04-24 05:23:49.574031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.993 [2024-04-24 05:23:49.574046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:29296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.993 [2024-04-24 05:23:49.574059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.993 [2024-04-24 05:23:49.574073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:29304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.993 [2024-04-24 05:23:49.574086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.993 [2024-04-24 05:23:49.574100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:28472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.993 [2024-04-24 05:23:49.574113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.993 [2024-04-24 05:23:49.574127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:28480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.993 [2024-04-24 05:23:49.574140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.993 [2024-04-24 05:23:49.574154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.993 [2024-04-24 05:23:49.574168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.993 [2024-04-24 05:23:49.574182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:28496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.993 [2024-04-24 05:23:49.574198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.993 [2024-04-24 05:23:49.574212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:28504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.993 [2024-04-24 05:23:49.574225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.993 [2024-04-24 05:23:49.574242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:28512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.993 [2024-04-24 05:23:49.574256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.993 [2024-04-24 05:23:49.574270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:28520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.993 [2024-04-24 05:23:49.574287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.993 [2024-04-24 05:23:49.574301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:29312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.993 [2024-04-24 05:23:49.574313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.993 [2024-04-24 05:23:49.574327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:29320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.993 [2024-04-24 05:23:49.574341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.993 [2024-04-24 05:23:49.574355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:29328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.993 [2024-04-24 05:23:49.574368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.993 [2024-04-24 05:23:49.574381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:29336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.993 [2024-04-24 05:23:49.574394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.993 [2024-04-24 05:23:49.574408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:29344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.993 [2024-04-24 05:23:49.574421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.993 [2024-04-24 05:23:49.574436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:29352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.993 [2024-04-24 05:23:49.574448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.993 [2024-04-24 05:23:49.574462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:29360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.993 [2024-04-24 05:23:49.574475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.993 [2024-04-24 05:23:49.574489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:29368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.993 [2024-04-24 05:23:49.574501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.993 [2024-04-24 05:23:49.574515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:29376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.993 [2024-04-24 05:23:49.574528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.993 [2024-04-24 05:23:49.574542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:29384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.993 [2024-04-24 05:23:49.574555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.993 [2024-04-24 05:23:49.574569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:29392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.993 [2024-04-24 05:23:49.574585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.993 [2024-04-24 05:23:49.574600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:29400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.993 [2024-04-24 05:23:49.574635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.993 [2024-04-24 05:23:49.574654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:29408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.993 [2024-04-24 05:23:49.574667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.993 [2024-04-24 05:23:49.574682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:29416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.993 [2024-04-24 05:23:49.574696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.993 [2024-04-24 05:23:49.574711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:29424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.993 [2024-04-24 05:23:49.574724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.993 [2024-04-24 05:23:49.574738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:29432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.993 [2024-04-24 05:23:49.574752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.993 [2024-04-24 05:23:49.574766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:29440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.993 [2024-04-24 05:23:49.574779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.993 [2024-04-24 05:23:49.574793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.993 [2024-04-24 05:23:49.574807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.993 [2024-04-24 05:23:49.574821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.993 [2024-04-24 05:23:49.574834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.993 [2024-04-24 05:23:49.574849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.994 [2024-04-24 05:23:49.574862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.994 [2024-04-24 05:23:49.574877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:28528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.994 [2024-04-24 05:23:49.574890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.994 [2024-04-24 05:23:49.574904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.994 [2024-04-24 05:23:49.574918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.994 [2024-04-24 05:23:49.574933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:28544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.994 [2024-04-24 05:23:49.574946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.994 [2024-04-24 05:23:49.574964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:28552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.994 [2024-04-24 05:23:49.574978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.994 [2024-04-24 05:23:49.574993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:28560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.994 [2024-04-24 05:23:49.575006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.994 [2024-04-24 05:23:49.575021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:28568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.994 [2024-04-24 05:23:49.575034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.994 [2024-04-24 05:23:49.575048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:28576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.994 [2024-04-24 05:23:49.575068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.994 [2024-04-24 05:23:49.575083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:28584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.994 [2024-04-24 05:23:49.575096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.994 [2024-04-24 05:23:49.575111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:28592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.994 [2024-04-24 05:23:49.575124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.994 [2024-04-24 05:23:49.575139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.994 [2024-04-24 05:23:49.575152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.994 [2024-04-24 05:23:49.575167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:28608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.994 [2024-04-24 05:23:49.575180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.994 [2024-04-24 05:23:49.575194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:28616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.994 [2024-04-24 05:23:49.575207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.994 [2024-04-24 05:23:49.575222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:28624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.994 [2024-04-24 05:23:49.575235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.994 [2024-04-24 05:23:49.575249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:28632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.994 [2024-04-24 05:23:49.575277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.994 [2024-04-24 05:23:49.575293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:28640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.994 [2024-04-24 05:23:49.575306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.994 [2024-04-24 05:23:49.575321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:28648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.994 [2024-04-24 05:23:49.575335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.994 [2024-04-24 05:23:49.575353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:28656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.994 [2024-04-24 05:23:49.575367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.994 [2024-04-24 05:23:49.575383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:28664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.994 [2024-04-24 05:23:49.575396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.994 [2024-04-24 05:23:49.575411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:28672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.994 [2024-04-24 05:23:49.575425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.994 [2024-04-24 05:23:49.575440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:28680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.994 [2024-04-24 05:23:49.575453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.994 [2024-04-24 05:23:49.575468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:28688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.994 [2024-04-24 05:23:49.575482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.994 [2024-04-24 05:23:49.575497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.994 [2024-04-24 05:23:49.575510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.994 [2024-04-24 05:23:49.575525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:29480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.994 [2024-04-24 05:23:49.575539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.994 [2024-04-24 05:23:49.575554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:28696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.994 [2024-04-24 05:23:49.575587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.994 [2024-04-24 05:23:49.575604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:28704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.994 [2024-04-24 05:23:49.575621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.994 [2024-04-24 05:23:49.575657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:28712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.994 [2024-04-24 05:23:49.575673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.994 [2024-04-24 05:23:49.575688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:28720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.994 [2024-04-24 05:23:49.575701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.994 [2024-04-24 05:23:49.575716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:28728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.994 [2024-04-24 05:23:49.575730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.994 [2024-04-24 05:23:49.575745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.994 [2024-04-24 05:23:49.575765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.994 [2024-04-24 05:23:49.575781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:28744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.994 [2024-04-24 05:23:49.575794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.994 [2024-04-24 05:23:49.575809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:28752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.994 [2024-04-24 05:23:49.575822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.994 [2024-04-24 05:23:49.575837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:28760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.994 [2024-04-24 05:23:49.575851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.994 [2024-04-24 05:23:49.575866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:28768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.994 [2024-04-24 05:23:49.575879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.994 [2024-04-24 05:23:49.575894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:28776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.994 [2024-04-24 05:23:49.575908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.994 [2024-04-24 05:23:49.575923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:28784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.994 [2024-04-24 05:23:49.575940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.994 [2024-04-24 05:23:49.575969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:28792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.994 [2024-04-24 05:23:49.575983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.994 [2024-04-24 05:23:49.575997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:28800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.994 [2024-04-24 05:23:49.576010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.994 [2024-04-24 05:23:49.576024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:28808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.994 [2024-04-24 05:23:49.576037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.994 [2024-04-24 05:23:49.576051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:29488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:18.994 [2024-04-24 05:23:49.576064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.994 [2024-04-24 05:23:49.576079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:28816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.995 [2024-04-24 05:23:49.576096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.995 [2024-04-24 05:23:49.576114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.995 [2024-04-24 05:23:49.576127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.995 [2024-04-24 05:23:49.576144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:28832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.995 [2024-04-24 05:23:49.576158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.995 [2024-04-24 05:23:49.576179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.995 [2024-04-24 05:23:49.576192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.995 [2024-04-24 05:23:49.576207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:28848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.995 [2024-04-24 05:23:49.576220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.995 [2024-04-24 05:23:49.576234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:28856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.995 [2024-04-24 05:23:49.576247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.995 [2024-04-24 05:23:49.576262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:28864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.995 [2024-04-24 05:23:49.576275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.995 [2024-04-24 05:23:49.576289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:28872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.995 [2024-04-24 05:23:49.576302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.995 [2024-04-24 05:23:49.576317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:28880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.995 [2024-04-24 05:23:49.576329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.995 [2024-04-24 05:23:49.576344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:28888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.995 [2024-04-24 05:23:49.576357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.995 [2024-04-24 05:23:49.576372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:28896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.995 [2024-04-24 05:23:49.576385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.995 [2024-04-24 05:23:49.576399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:28904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.995 [2024-04-24 05:23:49.576412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.995 [2024-04-24 05:23:49.576427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:28912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.995 [2024-04-24 05:23:49.576440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.995 [2024-04-24 05:23:49.576454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:28920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.995 [2024-04-24 05:23:49.576467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.995 [2024-04-24 05:23:49.576481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:28928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.995 [2024-04-24 05:23:49.576501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.995 [2024-04-24 05:23:49.576516] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e37f0 is same with the state(5) to be set
00:28:18.995 [2024-04-24 05:23:49.576533] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:28:18.995 [2024-04-24 05:23:49.576544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:28:18.995
[2024-04-24 05:23:49.576562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:28936 len:8 PRP1 0x0 PRP2 0x0 00:28:18.995 [2024-04-24 05:23:49.576575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.995 [2024-04-24 05:23:49.576666] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8e37f0 was disconnected and freed. reset controller. 00:28:18.995 [2024-04-24 05:23:49.576686] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:28:18.995 [2024-04-24 05:23:49.576720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:18.995 [2024-04-24 05:23:49.576738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.995 [2024-04-24 05:23:49.576752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:18.995 [2024-04-24 05:23:49.576765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.995 [2024-04-24 05:23:49.576779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:18.995 [2024-04-24 05:23:49.576792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.995 [2024-04-24 05:23:49.576805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:18.995 [2024-04-24 05:23:49.576818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:18.995 [2024-04-24 05:23:49.576831] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:18.995 [2024-04-24 05:23:49.576883] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d7290 (9): Bad file descriptor 00:28:18.995 [2024-04-24 05:23:49.580216] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:18.995 [2024-04-24 05:23:49.737853] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:18.995 00:28:18.995 Latency(us) 00:28:18.995 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:18.995 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:18.995 Verification LBA range: start 0x0 length 0x4000 00:28:18.995 NVMe0n1 : 15.01 8394.97 32.79 939.69 0.00 13685.67 794.93 13689.74 00:28:18.995 =================================================================================================================== 00:28:18.995 Total : 8394.97 32.79 939.69 0.00 13685.67 794.93 13689.74 00:28:18.995 Received shutdown signal, test time was about 15.000000 seconds 00:28:18.995 00:28:18.995 Latency(us) 00:28:18.995 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:18.995 =================================================================================================================== 00:28:18.995 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:18.995 05:23:55 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:28:18.995 05:23:55 -- host/failover.sh@65 -- # count=3 00:28:18.995 05:23:55 -- host/failover.sh@67 -- # (( count != 3 )) 00:28:18.995 05:23:55 -- host/failover.sh@73 -- # bdevperf_pid=1987803 00:28:18.995 05:23:55 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 
00:28:18.995 05:23:55 -- host/failover.sh@75 -- # waitforlisten 1987803 /var/tmp/bdevperf.sock 00:28:18.995 05:23:55 -- common/autotest_common.sh@817 -- # '[' -z 1987803 ']' 00:28:18.995 05:23:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:18.995 05:23:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:18.995 05:23:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:18.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:18.995 05:23:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:18.995 05:23:55 -- common/autotest_common.sh@10 -- # set +x 00:28:18.995 05:23:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:18.995 05:23:55 -- common/autotest_common.sh@850 -- # return 0 00:28:18.995 05:23:55 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:18.995 [2024-04-24 05:23:56.084047] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:18.995 05:23:56 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:19.254 [2024-04-24 05:23:56.320690] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:28:19.254 05:23:56 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:19.513 NVMe0n1 00:28:19.513 05:23:56 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller 
-b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:20.082 00:28:20.082 05:23:57 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:20.340 00:28:20.340 05:23:57 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:20.340 05:23:57 -- host/failover.sh@82 -- # grep -q NVMe0 00:28:20.598 05:23:57 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:20.856 05:23:57 -- host/failover.sh@87 -- # sleep 3 00:28:24.142 05:24:00 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:24.142 05:24:00 -- host/failover.sh@88 -- # grep -q NVMe0 00:28:24.142 05:24:01 -- host/failover.sh@90 -- # run_test_pid=1988546 00:28:24.142 05:24:01 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:24.142 05:24:01 -- host/failover.sh@92 -- # wait 1988546 00:28:25.518 0 00:28:25.518 05:24:02 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:25.518 [2024-04-24 05:23:55.625448] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 
00:28:25.518 [2024-04-24 05:23:55.625531] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1987803 ] 00:28:25.518 EAL: No free 2048 kB hugepages reported on node 1 00:28:25.518 [2024-04-24 05:23:55.657432] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:25.518 [2024-04-24 05:23:55.685898] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.518 [2024-04-24 05:23:55.767821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:25.518 [2024-04-24 05:23:57.964849] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:28:25.518 [2024-04-24 05:23:57.964929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:25.518 [2024-04-24 05:23:57.964952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.518 [2024-04-24 05:23:57.964969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:25.518 [2024-04-24 05:23:57.964982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.518 [2024-04-24 05:23:57.964996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:25.518 [2024-04-24 05:23:57.965009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.518 [2024-04-24 05:23:57.965022] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:25.518 [2024-04-24 05:23:57.965035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.518 [2024-04-24 05:23:57.965049] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.518 [2024-04-24 05:23:57.965103] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.518 [2024-04-24 05:23:57.965134] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b15290 (9): Bad file descriptor 00:28:25.518 [2024-04-24 05:23:57.975411] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:25.518 Running I/O for 1 seconds... 00:28:25.518 00:28:25.518 Latency(us) 00:28:25.518 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:25.518 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:25.518 Verification LBA range: start 0x0 length 0x4000 00:28:25.518 NVMe0n1 : 1.01 8637.03 33.74 0.00 0.00 14762.68 2936.98 12718.84 00:28:25.518 =================================================================================================================== 00:28:25.518 Total : 8637.03 33.74 0.00 0.00 14762.68 2936.98 12718.84 00:28:25.518 05:24:02 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:25.518 05:24:02 -- host/failover.sh@95 -- # grep -q NVMe0 00:28:25.518 05:24:02 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:25.776 05:24:02 -- host/failover.sh@99 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:25.776 05:24:02 -- host/failover.sh@99 -- # grep -q NVMe0 00:28:26.034 05:24:03 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:26.291 05:24:03 -- host/failover.sh@101 -- # sleep 3 00:28:29.581 05:24:06 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:29.581 05:24:06 -- host/failover.sh@103 -- # grep -q NVMe0 00:28:29.581 05:24:06 -- host/failover.sh@108 -- # killprocess 1987803 00:28:29.581 05:24:06 -- common/autotest_common.sh@936 -- # '[' -z 1987803 ']' 00:28:29.581 05:24:06 -- common/autotest_common.sh@940 -- # kill -0 1987803 00:28:29.581 05:24:06 -- common/autotest_common.sh@941 -- # uname 00:28:29.581 05:24:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:29.581 05:24:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1987803 00:28:29.581 05:24:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:29.581 05:24:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:29.581 05:24:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1987803' 00:28:29.581 killing process with pid 1987803 00:28:29.581 05:24:06 -- common/autotest_common.sh@955 -- # kill 1987803 00:28:29.581 05:24:06 -- common/autotest_common.sh@960 -- # wait 1987803 00:28:29.839 05:24:06 -- host/failover.sh@110 -- # sync 00:28:29.839 05:24:06 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:30.097 05:24:07 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:28:30.097 05:24:07 -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:30.097 05:24:07 -- host/failover.sh@116 -- # nvmftestfini 00:28:30.097 05:24:07 -- nvmf/common.sh@477 -- # nvmfcleanup 00:28:30.097 05:24:07 -- nvmf/common.sh@117 -- # sync 00:28:30.097 05:24:07 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:30.097 05:24:07 -- nvmf/common.sh@120 -- # set +e 00:28:30.097 05:24:07 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:30.097 05:24:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:30.097 rmmod nvme_tcp 00:28:30.097 rmmod nvme_fabrics 00:28:30.097 rmmod nvme_keyring 00:28:30.097 05:24:07 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:30.097 05:24:07 -- nvmf/common.sh@124 -- # set -e 00:28:30.097 05:24:07 -- nvmf/common.sh@125 -- # return 0 00:28:30.097 05:24:07 -- nvmf/common.sh@478 -- # '[' -n 1985540 ']' 00:28:30.097 05:24:07 -- nvmf/common.sh@479 -- # killprocess 1985540 00:28:30.097 05:24:07 -- common/autotest_common.sh@936 -- # '[' -z 1985540 ']' 00:28:30.097 05:24:07 -- common/autotest_common.sh@940 -- # kill -0 1985540 00:28:30.097 05:24:07 -- common/autotest_common.sh@941 -- # uname 00:28:30.097 05:24:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:30.097 05:24:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1985540 00:28:30.097 05:24:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:28:30.097 05:24:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:28:30.097 05:24:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1985540' 00:28:30.097 killing process with pid 1985540 00:28:30.097 05:24:07 -- common/autotest_common.sh@955 -- # kill 1985540 00:28:30.097 05:24:07 -- common/autotest_common.sh@960 -- # wait 1985540 00:28:30.355 05:24:07 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:28:30.355 05:24:07 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:28:30.355 05:24:07 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 
00:28:30.355 05:24:07 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:30.355 05:24:07 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:30.355 05:24:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:30.355 05:24:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:30.355 05:24:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:32.293 05:24:09 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:32.293 00:28:32.293 real 0m34.827s 00:28:32.293 user 2m3.242s 00:28:32.293 sys 0m5.639s 00:28:32.293 05:24:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:32.293 05:24:09 -- common/autotest_common.sh@10 -- # set +x 00:28:32.293 ************************************ 00:28:32.293 END TEST nvmf_failover 00:28:32.293 ************************************ 00:28:32.552 05:24:09 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:32.552 05:24:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:32.552 05:24:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:32.552 05:24:09 -- common/autotest_common.sh@10 -- # set +x 00:28:32.552 ************************************ 00:28:32.552 START TEST nvmf_discovery 00:28:32.552 ************************************ 00:28:32.552 05:24:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:32.552 * Looking for test storage... 
00:28:32.552 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:32.552 05:24:09 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:32.552 05:24:09 -- nvmf/common.sh@7 -- # uname -s 00:28:32.552 05:24:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:32.552 05:24:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:32.552 05:24:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:32.552 05:24:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:32.552 05:24:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:32.552 05:24:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:32.552 05:24:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:32.552 05:24:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:32.552 05:24:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:32.552 05:24:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:32.552 05:24:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:32.552 05:24:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:32.552 05:24:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:32.552 05:24:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:32.552 05:24:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:32.552 05:24:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:32.552 05:24:09 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:32.552 05:24:09 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:32.552 05:24:09 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:32.552 05:24:09 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:32.552 05:24:09 -- paths/export.sh@2 -- 
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.552 05:24:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.552 05:24:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.552 05:24:09 -- paths/export.sh@5 -- # export PATH 00:28:32.552 05:24:09 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.552 05:24:09 -- nvmf/common.sh@47 -- # : 0 00:28:32.552 05:24:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:32.552 05:24:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:32.552 05:24:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:32.552 05:24:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:32.552 05:24:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:32.552 05:24:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:32.552 05:24:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:32.552 05:24:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:32.552 05:24:09 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:28:32.552 05:24:09 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:28:32.552 05:24:09 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:28:32.552 05:24:09 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:28:32.552 05:24:09 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:28:32.552 05:24:09 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:28:32.552 05:24:09 -- host/discovery.sh@25 -- # nvmftestinit 00:28:32.552 05:24:09 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:28:32.552 05:24:09 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:32.552 05:24:09 -- nvmf/common.sh@437 -- # prepare_net_devs 00:28:32.552 05:24:09 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:28:32.552 
05:24:09 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:28:32.552 05:24:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:32.552 05:24:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:32.552 05:24:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:32.552 05:24:09 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:28:32.552 05:24:09 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:28:32.552 05:24:09 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:32.552 05:24:09 -- common/autotest_common.sh@10 -- # set +x 00:28:34.454 05:24:11 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:34.454 05:24:11 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:34.454 05:24:11 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:34.454 05:24:11 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:34.454 05:24:11 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:34.454 05:24:11 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:34.454 05:24:11 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:34.454 05:24:11 -- nvmf/common.sh@295 -- # net_devs=() 00:28:34.454 05:24:11 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:34.454 05:24:11 -- nvmf/common.sh@296 -- # e810=() 00:28:34.454 05:24:11 -- nvmf/common.sh@296 -- # local -ga e810 00:28:34.454 05:24:11 -- nvmf/common.sh@297 -- # x722=() 00:28:34.454 05:24:11 -- nvmf/common.sh@297 -- # local -ga x722 00:28:34.454 05:24:11 -- nvmf/common.sh@298 -- # mlx=() 00:28:34.454 05:24:11 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:34.454 05:24:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:34.454 05:24:11 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:34.454 05:24:11 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:34.454 05:24:11 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:34.454 05:24:11 -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:28:34.454 05:24:11 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:28:34.454 05:24:11 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:28:34.454 05:24:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:28:34.454 05:24:11 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:28:34.454 05:24:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:28:34.455 05:24:11 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:28:34.455 05:24:11 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:28:34.455 05:24:11 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:28:34.455 05:24:11 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:28:34.455 05:24:11 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:28:34.455 05:24:11 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:28:34.455 05:24:11 -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:28:34.455 05:24:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:28:34.455 05:24:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:28:34.455 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:28:34.455 05:24:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:28:34.455 05:24:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:28:34.455 05:24:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:28:34.455 05:24:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:28:34.455 05:24:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:28:34.455 05:24:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:28:34.455 05:24:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:28:34.455 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:28:34.455 05:24:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:28:34.455 05:24:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:28:34.455 05:24:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:28:34.455 05:24:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:28:34.455 05:24:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:28:34.455 05:24:11 -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:28:34.455 05:24:11 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:28:34.455 05:24:11 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:28:34.455 05:24:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:28:34.455 05:24:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:28:34.455 05:24:11 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:28:34.455 05:24:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:28:34.455 05:24:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:28:34.455 Found net devices under 0000:0a:00.0: cvl_0_0
00:28:34.455 05:24:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:28:34.455 05:24:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:28:34.455 05:24:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:28:34.455 05:24:11 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:28:34.455 05:24:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:28:34.455 05:24:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:28:34.455 Found net devices under 0000:0a:00.1: cvl_0_1
00:28:34.455 05:24:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:28:34.455 05:24:11 -- nvmf/common.sh@393 -- # (( 2 == 0 ))
00:28:34.455 05:24:11 -- nvmf/common.sh@403 -- # is_hw=yes
00:28:34.455 05:24:11 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
00:28:34.455 05:24:11 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]]
00:28:34.455 05:24:11 -- nvmf/common.sh@407 -- # nvmf_tcp_init
00:28:34.455 05:24:11 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:28:34.455 05:24:11 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:28:34.455 05:24:11 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:28:34.455 05:24:11 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:28:34.455 05:24:11 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:28:34.455 05:24:11 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:28:34.455 05:24:11 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:28:34.455 05:24:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:28:34.455 05:24:11 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:28:34.455 05:24:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:28:34.455 05:24:11 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:28:34.455 05:24:11 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:28:34.455 05:24:11 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:28:34.455 05:24:11 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:28:34.455 05:24:11 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:28:34.455 05:24:11 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:28:34.455 05:24:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:28:34.455 05:24:11 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:28:34.455 05:24:11 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:28:34.714 05:24:11 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:28:34.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
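The `nvmf_tcp_init` entries above wire the two ports of one NIC into a loopback pair: the target port is moved into a fresh network namespace, both sides get addresses on 10.0.0.0/24, and port 4420 is opened in iptables. The sketch below reproduces that wiring with a veth pair instead of the physical cvl_0_0/cvl_0_1 ports; the `demo_ns_spdk`/`veth_*` names are ours, not from the SPDK tree. By default it only records and prints the commands (the real thing needs root); set DRY_RUN=0 to apply them.

```shell
#!/usr/bin/env bash
# Namespace loopback wiring, modeled on the nvmf_tcp_init trace above.
# DRY_RUN=1 (default): collect and echo the commands instead of running them.
DRY_RUN=${DRY_RUN:-1}
NS=demo_ns_spdk
cmds=()
run() { cmds+=("$*"); if (( DRY_RUN )); then echo "$*"; else "$@"; fi; }

run ip netns add "$NS"
run ip link add veth_tgt type veth peer name veth_ini   # stand-in for the two NIC ports
run ip link set veth_tgt netns "$NS"                    # target side lives in the namespace
run ip addr add 10.0.0.1/24 dev veth_ini                # initiator IP (host side)
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev veth_tgt  # target IP (namespace side)
run ip link set veth_ini up
run ip netns exec "$NS" ip link set veth_tgt up
run ip netns exec "$NS" ip link set lo up
```

With both ends up, the trace's two `ping -c 1` checks (host to 10.0.0.2, namespace to 10.0.0.1) verify the loopback path before the target is started under `ip netns exec`.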
00:28:34.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms
00:28:34.714 
00:28:34.714 --- 10.0.0.2 ping statistics ---
00:28:34.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:34.714 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms
00:28:34.714 05:24:11 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:28:34.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:34.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms
00:28:34.714 
00:28:34.714 --- 10.0.0.1 ping statistics ---
00:28:34.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:34.714 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms
00:28:34.714 05:24:11 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:34.714 05:24:11 -- nvmf/common.sh@411 -- # return 0
00:28:34.714 05:24:11 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:28:34.714 05:24:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:34.714 05:24:11 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:28:34.714 05:24:11 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:28:34.714 05:24:11 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:34.714 05:24:11 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:28:34.714 05:24:11 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:28:34.714 05:24:11 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2
00:28:34.714 05:24:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:28:34.714 05:24:11 -- common/autotest_common.sh@710 -- # xtrace_disable
00:28:34.714 05:24:11 -- common/autotest_common.sh@10 -- # set +x
00:28:34.714 05:24:11 -- nvmf/common.sh@470 -- # nvmfpid=1991696
00:28:34.714 05:24:11 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:28:34.714 05:24:11 -- nvmf/common.sh@471 -- # waitforlisten 1991696
00:28:34.714 05:24:11 -- common/autotest_common.sh@817 -- # '[' -z 1991696 ']'
00:28:34.714 05:24:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:34.714 05:24:11 -- common/autotest_common.sh@822 -- # local max_retries=100
00:28:34.714 05:24:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:34.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:34.714 05:24:11 -- common/autotest_common.sh@826 -- # xtrace_disable
00:28:34.714 05:24:11 -- common/autotest_common.sh@10 -- # set +x
00:28:34.714 [2024-04-24 05:24:11.804382] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization...
00:28:34.714 [2024-04-24 05:24:11.804455] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:34.714 EAL: No free 2048 kB hugepages reported on node 1
00:28:34.714 [2024-04-24 05:24:11.840453] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:28:34.714 [2024-04-24 05:24:11.867355] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:34.714 [2024-04-24 05:24:11.949874] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:34.714 [2024-04-24 05:24:11.949963] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:34.714 [2024-04-24 05:24:11.949976] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:34.715 [2024-04-24 05:24:11.949988] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
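`waitforlisten` above, and the `waitforcondition` calls later in the trace, are both retry loops of the same shape: `local max=N`, `(( max-- ))`, `eval` the condition, `sleep 1`, give up when the budget runs out. A minimal re-implementation of that pattern (the function name and the `bump` demo condition are ours, not copied from the SPDK tree):

```shell
#!/usr/bin/env bash
# Poll an arbitrary shell condition until it succeeds or retries run out,
# mirroring the waitforcondition pattern visible in the xtrace above.
waitforcondition() {
    local cond=$1 max=${2:-10}     # default budget of 10 tries, as in the trace
    while (( max-- )); do
        if eval "$cond"; then return 0; fi
        sleep 1                    # the trace shows the same 'sleep 1' between tries
    done
    return 1                       # condition never became true
}

# Demo condition: succeeds on the third call.
count=0
bump() { (( ++count >= 3 )); }
waitforcondition bump 5 && echo "condition met after $count tries"
```

In the test itself the condition strings are things like `[[ "$(get_subsystem_names)" == "nvme0" ]]`, which is why discovery events arriving a second or two late (visible as the `sleep 1` entries) do not fail the run.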
00:28:34.715 [2024-04-24 05:24:11.949998] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:34.715 [2024-04-24 05:24:11.950031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:28:34.973 05:24:12 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:28:34.973 05:24:12 -- common/autotest_common.sh@850 -- # return 0
00:28:34.973 05:24:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:28:34.973 05:24:12 -- common/autotest_common.sh@716 -- # xtrace_disable
00:28:34.973 05:24:12 -- common/autotest_common.sh@10 -- # set +x
00:28:34.973 05:24:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:34.973 05:24:12 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:28:34.973 05:24:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:34.973 05:24:12 -- common/autotest_common.sh@10 -- # set +x
00:28:34.973 [2024-04-24 05:24:12.088493] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:34.973 05:24:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:34.973 05:24:12 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
00:28:34.973 05:24:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:34.973 05:24:12 -- common/autotest_common.sh@10 -- # set +x
00:28:34.973 [2024-04-24 05:24:12.096707] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:28:34.973 05:24:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:34.973 05:24:12 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512
00:28:34.973 05:24:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:34.973 05:24:12 -- common/autotest_common.sh@10 -- # set +x
00:28:34.973 null0
00:28:34.973 05:24:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:34.973 05:24:12 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512
00:28:34.973 05:24:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:34.973 05:24:12 -- common/autotest_common.sh@10 -- # set +x
00:28:34.973 null1
00:28:34.973 05:24:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:34.973 05:24:12 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine
00:28:34.973 05:24:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:34.973 05:24:12 -- common/autotest_common.sh@10 -- # set +x
00:28:34.973 05:24:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:34.973 05:24:12 -- host/discovery.sh@45 -- # hostpid=1991721
00:28:34.973 05:24:12 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock
00:28:34.973 05:24:12 -- host/discovery.sh@46 -- # waitforlisten 1991721 /tmp/host.sock
00:28:34.973 05:24:12 -- common/autotest_common.sh@817 -- # '[' -z 1991721 ']'
00:28:34.973 05:24:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock
00:28:34.973 05:24:12 -- common/autotest_common.sh@822 -- # local max_retries=100
00:28:34.973 05:24:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:28:34.973 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:28:34.973 05:24:12 -- common/autotest_common.sh@826 -- # xtrace_disable
00:28:34.973 05:24:12 -- common/autotest_common.sh@10 -- # set +x
00:28:34.973 [2024-04-24 05:24:12.168367] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization...
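From this point on the trace repeatedly calls helpers like `get_subsystem_names` and `get_bdev_list`, which pipe an RPC dump through `jq -r '.[].name' | sort | xargs`. The `sort | xargs` tail collapses the names into one ordered, space-separated string, which is why the later comparisons read `[[ "..." == "nvme0n1 nvme0n2" ]]`. A small illustration of just that collapsing step, with canned names standing in for live `rpc_cmd` output:

```shell
#!/usr/bin/env bash
# The sort|xargs idiom from the get_bdev_list/get_subsystem_names helpers:
# normalize an unordered, one-per-line listing into a single sorted string.
names=$(printf 'nvme0n2\nnvme0n1\n' | sort | xargs)
echo "$names"    # -> nvme0n1 nvme0n2
```

The normalization makes the polled comparisons order-independent: no matter which order `bdev_get_bdevs` reports the namespaces in, the condition string only has to match one canonical form.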
00:28:34.973 [2024-04-24 05:24:12.168445] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1991721 ]
00:28:34.973 EAL: No free 2048 kB hugepages reported on node 1
00:28:34.973 [2024-04-24 05:24:12.200987] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:28:34.973 [2024-04-24 05:24:12.231272] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:35.231 [2024-04-24 05:24:12.322395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:35.231 05:24:12 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:28:35.231 05:24:12 -- common/autotest_common.sh@850 -- # return 0
00:28:35.231 05:24:12 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:28:35.231 05:24:12 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
00:28:35.231 05:24:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:35.231 05:24:12 -- common/autotest_common.sh@10 -- # set +x
00:28:35.231 05:24:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:35.231 05:24:12 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
00:28:35.231 05:24:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:35.231 05:24:12 -- common/autotest_common.sh@10 -- # set +x
00:28:35.231 05:24:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:35.231 05:24:12 -- host/discovery.sh@72 -- # notify_id=0
00:28:35.231 05:24:12 -- host/discovery.sh@83 -- # get_subsystem_names
00:28:35.231 05:24:12 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:28:35.231 05:24:12 -- host/discovery.sh@59 -- # jq -r '.[].name'
00:28:35.231 05:24:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:35.231 05:24:12 -- common/autotest_common.sh@10 -- # set +x
00:28:35.231 05:24:12 -- host/discovery.sh@59 -- # sort
00:28:35.231 05:24:12 -- host/discovery.sh@59 -- # xargs
00:28:35.231 05:24:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:35.231 05:24:12 -- host/discovery.sh@83 -- # [[ '' == '' ]]
00:28:35.231 05:24:12 -- host/discovery.sh@84 -- # get_bdev_list
00:28:35.231 05:24:12 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:28:35.231 05:24:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:35.231 05:24:12 -- host/discovery.sh@55 -- # jq -r '.[].name'
00:28:35.231 05:24:12 -- common/autotest_common.sh@10 -- # set +x
00:28:35.231 05:24:12 -- host/discovery.sh@55 -- # sort
00:28:35.231 05:24:12 -- host/discovery.sh@55 -- # xargs
00:28:35.489 05:24:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:35.489 05:24:12 -- host/discovery.sh@84 -- # [[ '' == '' ]]
00:28:35.489 05:24:12 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
00:28:35.489 05:24:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:35.489 05:24:12 -- common/autotest_common.sh@10 -- # set +x
00:28:35.489 05:24:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:35.489 05:24:12 -- host/discovery.sh@87 -- # get_subsystem_names
00:28:35.489 05:24:12 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:28:35.489 05:24:12 -- host/discovery.sh@59 -- # jq -r '.[].name'
00:28:35.489 05:24:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:35.489 05:24:12 -- common/autotest_common.sh@10 -- # set +x
00:28:35.489 05:24:12 -- host/discovery.sh@59 -- # sort
00:28:35.489 05:24:12 -- host/discovery.sh@59 -- # xargs
00:28:35.489 05:24:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:35.489 05:24:12 -- host/discovery.sh@87 -- # [[ '' == '' ]]
00:28:35.489 05:24:12 -- host/discovery.sh@88 -- # get_bdev_list
00:28:35.489 05:24:12 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:28:35.489 05:24:12 -- host/discovery.sh@55 -- # jq -r '.[].name'
00:28:35.489 05:24:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:35.489 05:24:12 -- common/autotest_common.sh@10 -- # set +x
00:28:35.489 05:24:12 -- host/discovery.sh@55 -- # sort
00:28:35.489 05:24:12 -- host/discovery.sh@55 -- # xargs
00:28:35.489 05:24:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:35.489 05:24:12 -- host/discovery.sh@88 -- # [[ '' == '' ]]
00:28:35.489 05:24:12 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
00:28:35.489 05:24:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:35.489 05:24:12 -- common/autotest_common.sh@10 -- # set +x
00:28:35.489 05:24:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:35.489 05:24:12 -- host/discovery.sh@91 -- # get_subsystem_names
00:28:35.489 05:24:12 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:28:35.489 05:24:12 -- host/discovery.sh@59 -- # jq -r '.[].name'
00:28:35.489 05:24:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:35.489 05:24:12 -- host/discovery.sh@59 -- # sort
00:28:35.489 05:24:12 -- common/autotest_common.sh@10 -- # set +x
00:28:35.489 05:24:12 -- host/discovery.sh@59 -- # xargs
00:28:35.489 05:24:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:35.489 05:24:12 -- host/discovery.sh@91 -- # [[ '' == '' ]]
00:28:35.489 05:24:12 -- host/discovery.sh@92 -- # get_bdev_list
00:28:35.489 05:24:12 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:28:35.489 05:24:12 -- host/discovery.sh@55 -- # jq -r '.[].name'
00:28:35.489 05:24:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:35.489 05:24:12 -- common/autotest_common.sh@10 -- # set +x
00:28:35.489 05:24:12 -- host/discovery.sh@55 -- # sort
00:28:35.489 05:24:12 -- host/discovery.sh@55 -- # xargs
00:28:35.489 05:24:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:35.489 05:24:12 -- host/discovery.sh@92 -- # [[ '' == '' ]]
00:28:35.489 05:24:12 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:28:35.489 05:24:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:35.490 05:24:12 -- common/autotest_common.sh@10 -- # set +x
00:28:35.490 [2024-04-24 05:24:12.738446] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:35.490 05:24:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:35.490 05:24:12 -- host/discovery.sh@97 -- # get_subsystem_names
00:28:35.490 05:24:12 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:28:35.490 05:24:12 -- host/discovery.sh@59 -- # jq -r '.[].name'
00:28:35.490 05:24:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:35.490 05:24:12 -- common/autotest_common.sh@10 -- # set +x
00:28:35.490 05:24:12 -- host/discovery.sh@59 -- # sort
00:28:35.490 05:24:12 -- host/discovery.sh@59 -- # xargs
00:28:35.490 05:24:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:35.748 05:24:12 -- host/discovery.sh@97 -- # [[ '' == '' ]]
00:28:35.748 05:24:12 -- host/discovery.sh@98 -- # get_bdev_list
00:28:35.748 05:24:12 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:28:35.748 05:24:12 -- host/discovery.sh@55 -- # jq -r '.[].name'
00:28:35.748 05:24:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:35.748 05:24:12 -- common/autotest_common.sh@10 -- # set +x
00:28:35.748 05:24:12 -- host/discovery.sh@55 -- # sort
00:28:35.748 05:24:12 -- host/discovery.sh@55 -- # xargs
00:28:35.748 05:24:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:35.748 05:24:12 -- host/discovery.sh@98 -- # [[ '' == '' ]]
00:28:35.748 05:24:12 -- host/discovery.sh@99 -- # is_notification_count_eq 0
00:28:35.748 05:24:12 -- host/discovery.sh@79 -- # expected_count=0
00:28:35.748 05:24:12 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:28:35.748 05:24:12 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:28:35.748 05:24:12 -- common/autotest_common.sh@901 -- # local max=10
00:28:35.748 05:24:12 -- common/autotest_common.sh@902 -- # (( max-- ))
00:28:35.748 05:24:12 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:28:35.748 05:24:12 -- common/autotest_common.sh@903 -- # get_notification_count
00:28:35.748 05:24:12 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:28:35.748 05:24:12 -- host/discovery.sh@74 -- # jq '. | length'
00:28:35.748 05:24:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:35.748 05:24:12 -- common/autotest_common.sh@10 -- # set +x
00:28:35.748 05:24:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:35.748 05:24:12 -- host/discovery.sh@74 -- # notification_count=0
00:28:35.748 05:24:12 -- host/discovery.sh@75 -- # notify_id=0
00:28:35.748 05:24:12 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count ))
00:28:35.748 05:24:12 -- common/autotest_common.sh@904 -- # return 0
00:28:35.748 05:24:12 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
00:28:35.748 05:24:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:35.748 05:24:12 -- common/autotest_common.sh@10 -- # set +x
00:28:35.748 05:24:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:35.748 05:24:12 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:28:35.748 05:24:12 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:28:35.748 05:24:12 -- common/autotest_common.sh@901 -- # local max=10
00:28:35.748 05:24:12 -- common/autotest_common.sh@902 -- # (( max-- ))
00:28:35.748 05:24:12 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:28:35.748 05:24:12 -- common/autotest_common.sh@903 -- # get_subsystem_names
00:28:35.748 05:24:12 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:28:35.748 05:24:12 -- host/discovery.sh@59 -- # jq -r '.[].name'
00:28:35.748 05:24:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:35.748 05:24:12 -- common/autotest_common.sh@10 -- # set +x
00:28:35.748 05:24:12 -- host/discovery.sh@59 -- # sort
00:28:35.748 05:24:12 -- host/discovery.sh@59 -- # xargs
00:28:35.748 05:24:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:35.748 05:24:12 -- common/autotest_common.sh@903 -- # [[ '' == \n\v\m\e\0 ]]
00:28:35.748 05:24:12 -- common/autotest_common.sh@906 -- # sleep 1
00:28:36.313 [2024-04-24 05:24:13.496810] bdev_nvme.c:6919:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:28:36.313 [2024-04-24 05:24:13.496834] bdev_nvme.c:6999:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:28:36.313 [2024-04-24 05:24:13.496863] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:28:36.313 [2024-04-24 05:24:13.583150] bdev_nvme.c:6848:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:28:36.572 [2024-04-24 05:24:13.646791] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:28:36.572 [2024-04-24 05:24:13.646812] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:28:36.831 05:24:13 -- common/autotest_common.sh@902 -- # (( max-- ))
00:28:36.831 05:24:13 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:28:36.831 05:24:13 -- common/autotest_common.sh@903 -- # get_subsystem_names
00:28:36.831 05:24:13 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:28:36.831 05:24:13 -- host/discovery.sh@59 -- # jq -r '.[].name'
00:28:36.831 05:24:13 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:36.831 05:24:13 -- host/discovery.sh@59 -- # sort
00:28:36.831 05:24:13 -- common/autotest_common.sh@10 -- # set +x
00:28:36.831 05:24:13 -- host/discovery.sh@59 -- # xargs
00:28:36.831 05:24:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:36.831 05:24:13 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:36.831 05:24:13 -- common/autotest_common.sh@904 -- # return 0
00:28:36.831 05:24:13 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:28:36.831 05:24:13 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:28:36.831 05:24:13 -- common/autotest_common.sh@901 -- # local max=10
00:28:36.831 05:24:13 -- common/autotest_common.sh@902 -- # (( max-- ))
00:28:36.831 05:24:13 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]'
00:28:36.831 05:24:13 -- common/autotest_common.sh@903 -- # get_bdev_list
00:28:36.831 05:24:13 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:28:36.831 05:24:13 -- host/discovery.sh@55 -- # jq -r '.[].name'
00:28:36.831 05:24:13 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:36.831 05:24:13 -- common/autotest_common.sh@10 -- # set +x
00:28:36.831 05:24:13 -- host/discovery.sh@55 -- # sort
00:28:36.831 05:24:13 -- host/discovery.sh@55 -- # xargs
00:28:36.831 05:24:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:36.831 05:24:14 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]]
00:28:36.831 05:24:14 -- common/autotest_common.sh@904 -- # return 0
00:28:36.831 05:24:14 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:28:36.831 05:24:14 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:28:36.831 05:24:14 -- common/autotest_common.sh@901 -- # local max=10
00:28:36.831 05:24:14 -- common/autotest_common.sh@902 -- # (( max-- ))
00:28:36.831 05:24:14 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]'
00:28:36.831 05:24:14 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0
00:28:36.831 05:24:14 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:28:36.831 05:24:14 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:28:36.831 05:24:14 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:36.831 05:24:14 -- common/autotest_common.sh@10 -- # set +x
00:28:36.831 05:24:14 -- host/discovery.sh@63 -- # sort -n
00:28:36.831 05:24:14 -- host/discovery.sh@63 -- # xargs
00:28:36.831 05:24:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:36.831 05:24:14 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]]
00:28:36.831 05:24:14 -- common/autotest_common.sh@904 -- # return 0
00:28:36.831 05:24:14 -- host/discovery.sh@108 -- # is_notification_count_eq 1
00:28:36.831 05:24:14 -- host/discovery.sh@79 -- # expected_count=1
00:28:36.831 05:24:14 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:28:36.831 05:24:14 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:28:36.831 05:24:14 -- common/autotest_common.sh@901 -- # local max=10
00:28:36.831 05:24:14 -- common/autotest_common.sh@902 -- # (( max-- ))
00:28:36.831 05:24:14 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:28:36.831 05:24:14 -- common/autotest_common.sh@903 -- # get_notification_count
00:28:36.831 05:24:14 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:28:36.831 05:24:14 -- host/discovery.sh@74 -- # jq '. | length'
00:28:36.831 05:24:14 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:36.831 05:24:14 -- common/autotest_common.sh@10 -- # set +x
00:28:36.831 05:24:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:36.831 05:24:14 -- host/discovery.sh@74 -- # notification_count=1
00:28:36.831 05:24:14 -- host/discovery.sh@75 -- # notify_id=1
00:28:36.831 05:24:14 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count ))
00:28:36.831 05:24:14 -- common/autotest_common.sh@904 -- # return 0
00:28:36.831 05:24:14 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:28:36.831 05:24:14 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:36.831 05:24:14 -- common/autotest_common.sh@10 -- # set +x
00:28:37.089 05:24:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:37.089 05:24:14 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:28:37.089 05:24:14 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:28:37.089 05:24:14 -- common/autotest_common.sh@901 -- # local max=10
00:28:37.089 05:24:14 -- common/autotest_common.sh@902 -- # (( max-- ))
00:28:37.089 05:24:14 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:28:37.089 05:24:14 -- common/autotest_common.sh@903 -- # get_bdev_list
00:28:37.089 05:24:14 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:28:37.089 05:24:14 -- host/discovery.sh@55 -- # jq -r '.[].name'
00:28:37.089 05:24:14 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:37.090 05:24:14 -- common/autotest_common.sh@10 -- # set +x
00:28:37.090 05:24:14 -- host/discovery.sh@55 -- # sort
00:28:37.090 05:24:14 -- host/discovery.sh@55 -- # xargs
00:28:37.090 05:24:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:37.090 05:24:14 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:28:37.090 05:24:14 -- common/autotest_common.sh@904 -- # return 0
00:28:37.090 05:24:14 -- host/discovery.sh@114 -- # is_notification_count_eq 1
00:28:37.090 05:24:14 -- host/discovery.sh@79 -- # expected_count=1
00:28:37.090 05:24:14 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:28:37.090 05:24:14 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:28:37.090 05:24:14 -- common/autotest_common.sh@901 -- # local max=10
00:28:37.090 05:24:14 -- common/autotest_common.sh@902 -- # (( max-- ))
00:28:37.090 05:24:14 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:28:37.090 05:24:14 -- common/autotest_common.sh@903 -- # get_notification_count
00:28:37.090 05:24:14 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1
00:28:37.090 05:24:14 -- host/discovery.sh@74 -- # jq '. | length'
00:28:37.090 05:24:14 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:37.090 05:24:14 -- common/autotest_common.sh@10 -- # set +x
00:28:37.090 05:24:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:37.350 05:24:14 -- host/discovery.sh@74 -- # notification_count=0
00:28:37.350 05:24:14 -- host/discovery.sh@75 -- # notify_id=1
00:28:37.350 05:24:14 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count ))
00:28:37.350 05:24:14 -- common/autotest_common.sh@906 -- # sleep 1
00:28:38.287 05:24:15 -- common/autotest_common.sh@902 -- # (( max-- ))
00:28:38.287 05:24:15 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:28:38.287 05:24:15 -- common/autotest_common.sh@903 -- # get_notification_count
00:28:38.287 05:24:15 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1
00:28:38.287 05:24:15 -- host/discovery.sh@74 -- # jq '. | length'
00:28:38.287 05:24:15 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:38.287 05:24:15 -- common/autotest_common.sh@10 -- # set +x
00:28:38.287 05:24:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:38.287 05:24:15 -- host/discovery.sh@74 -- # notification_count=1
00:28:38.287 05:24:15 -- host/discovery.sh@75 -- # notify_id=2
00:28:38.287 05:24:15 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count ))
00:28:38.287 05:24:15 -- common/autotest_common.sh@904 -- # return 0
00:28:38.287 05:24:15 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
00:28:38.287 05:24:15 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:38.287 05:24:15 -- common/autotest_common.sh@10 -- # set +x
00:28:38.287 [2024-04-24 05:24:15.422672] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:28:38.287 [2024-04-24 05:24:15.423155] bdev_nvme.c:6901:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:28:38.287 [2024-04-24 05:24:15.423197] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:28:38.287 05:24:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:38.287 05:24:15 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:28:38.287 05:24:15 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:28:38.287 05:24:15 -- common/autotest_common.sh@901 -- # local max=10
00:28:38.287 05:24:15 -- common/autotest_common.sh@902 -- # (( max-- ))
00:28:38.287 05:24:15 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:28:38.287 05:24:15 -- common/autotest_common.sh@903 -- # get_subsystem_names
00:28:38.287 05:24:15 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:28:38.287 05:24:15 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:38.287 05:24:15 -- host/discovery.sh@59 -- # jq -r '.[].name'
00:28:38.287 05:24:15 -- common/autotest_common.sh@10 -- # set +x
00:28:38.287 05:24:15 -- host/discovery.sh@59 -- # sort
00:28:38.287 05:24:15 -- host/discovery.sh@59 -- # xargs
00:28:38.287 05:24:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:38.287 05:24:15 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:38.287 05:24:15 -- common/autotest_common.sh@904 -- # return 0
00:28:38.287 05:24:15 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:28:38.287 05:24:15 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:28:38.288 05:24:15 -- common/autotest_common.sh@901 -- # local max=10
00:28:38.288 05:24:15 -- common/autotest_common.sh@902 -- # (( max-- ))
00:28:38.288 05:24:15 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:28:38.288 05:24:15 -- common/autotest_common.sh@903 -- # get_bdev_list
00:28:38.288 05:24:15 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:28:38.288 05:24:15 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:38.288 05:24:15 -- host/discovery.sh@55 -- # jq -r '.[].name'
00:28:38.288 05:24:15 -- common/autotest_common.sh@10 -- # set +x
00:28:38.288 05:24:15 -- host/discovery.sh@55 -- # xargs
00:28:38.288 05:24:15 -- host/discovery.sh@55 -- # sort
00:28:38.288 05:24:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:38.288 [2024-04-24 05:24:15.509699] bdev_nvme.c:6843:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0
00:28:38.288 05:24:15 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:28:38.288 05:24:15 -- common/autotest_common.sh@904 -- # return 0
00:28:38.288 05:24:15 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:28:38.288 05:24:15 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:28:38.288 05:24:15 -- common/autotest_common.sh@901 -- # local max=10
00:28:38.288 05:24:15 -- common/autotest_common.sh@902 -- # (( max-- ))
00:28:38.288 05:24:15 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:28:38.288 05:24:15 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0
00:28:38.288 05:24:15 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:28:38.288 05:24:15 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:38.288 05:24:15 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:28:38.288 05:24:15 -- common/autotest_common.sh@10 -- # set +x
00:28:38.288 05:24:15 -- host/discovery.sh@63 -- # sort -n
00:28:38.288 05:24:15 -- host/discovery.sh@63 -- # xargs
00:28:38.288 05:24:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:38.288 05:24:15 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]]
00:28:38.288 05:24:15 -- common/autotest_common.sh@906 -- # sleep 1
00:28:38.547 [2024-04-24 05:24:15.612440] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:28:38.547 [2024-04-24 05:24:15.612463] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:28:38.547 [2024-04-24 05:24:15.612473] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:28:39.485 05:24:16 -- common/autotest_common.sh@902 -- # (( max-- ))
00:28:39.485 05:24:16 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:28:39.485 05:24:16 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0
00:28:39.485 05:24:16 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:28:39.485 05:24:16 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:28:39.485 05:24:16 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:39.485 05:24:16 -- common/autotest_common.sh@10 -- # set +x
00:28:39.485 05:24:16 -- host/discovery.sh@63 -- # sort -n
00:28:39.485 05:24:16 -- host/discovery.sh@63 -- # xargs
00:28:39.485 05:24:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:39.485 05:24:16 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]
00:28:39.485 05:24:16 -- common/autotest_common.sh@904 -- # return 0
00:28:39.485 05:24:16 -- host/discovery.sh@123 -- # is_notification_count_eq 0
00:28:39.485 05:24:16 -- host/discovery.sh@79 -- # expected_count=0
00:28:39.485 05:24:16 --
host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:39.485 05:24:16 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:39.485 05:24:16 -- common/autotest_common.sh@901 -- # local max=10 00:28:39.485 05:24:16 -- common/autotest_common.sh@902 -- # (( max-- )) 00:28:39.485 05:24:16 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:39.485 05:24:16 -- common/autotest_common.sh@903 -- # get_notification_count 00:28:39.485 05:24:16 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:39.485 05:24:16 -- host/discovery.sh@74 -- # jq '. | length' 00:28:39.485 05:24:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:39.485 05:24:16 -- common/autotest_common.sh@10 -- # set +x 00:28:39.485 05:24:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:39.485 05:24:16 -- host/discovery.sh@74 -- # notification_count=0 00:28:39.485 05:24:16 -- host/discovery.sh@75 -- # notify_id=2 00:28:39.485 05:24:16 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:28:39.485 05:24:16 -- common/autotest_common.sh@904 -- # return 0 00:28:39.485 05:24:16 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:39.485 05:24:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:39.486 05:24:16 -- common/autotest_common.sh@10 -- # set +x 00:28:39.486 [2024-04-24 05:24:16.654850] bdev_nvme.c:6901:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:28:39.486 [2024-04-24 05:24:16.654890] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:39.486 05:24:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:39.486 05:24:16 -- host/discovery.sh@129 -- # 
waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:39.486 05:24:16 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:39.486 05:24:16 -- common/autotest_common.sh@901 -- # local max=10 00:28:39.486 05:24:16 -- common/autotest_common.sh@902 -- # (( max-- )) 00:28:39.486 [2024-04-24 05:24:16.658995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:39.486 05:24:16 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:39.486 [2024-04-24 05:24:16.659031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.486 [2024-04-24 05:24:16.659049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:39.486 [2024-04-24 05:24:16.659064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.486 [2024-04-24 05:24:16.659078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:39.486 [2024-04-24 05:24:16.659093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.486 [2024-04-24 05:24:16.659108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:39.486 [2024-04-24 05:24:16.659122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.486 [2024-04-24 05:24:16.659136] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e3730 is same with the state(5) to be set 
00:28:39.486 05:24:16 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:28:39.486 05:24:16 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:39.486 05:24:16 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:39.486 05:24:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:39.486 05:24:16 -- host/discovery.sh@59 -- # sort 00:28:39.486 05:24:16 -- common/autotest_common.sh@10 -- # set +x 00:28:39.486 05:24:16 -- host/discovery.sh@59 -- # xargs 00:28:39.486 [2024-04-24 05:24:16.669001] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e3730 (9): Bad file descriptor 00:28:39.486 05:24:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:39.486 [2024-04-24 05:24:16.679031] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:39.486 [2024-04-24 05:24:16.679334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.486 [2024-04-24 05:24:16.679484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.486 [2024-04-24 05:24:16.679511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e3730 with addr=10.0.0.2, port=4420 00:28:39.486 [2024-04-24 05:24:16.679528] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e3730 is same with the state(5) to be set 00:28:39.486 [2024-04-24 05:24:16.679551] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e3730 (9): Bad file descriptor 00:28:39.486 [2024-04-24 05:24:16.679588] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:39.486 [2024-04-24 05:24:16.679606] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:39.486 [2024-04-24 05:24:16.679642] 
nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:39.486 [2024-04-24 05:24:16.679665] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.486 [2024-04-24 05:24:16.689117] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:39.486 [2024-04-24 05:24:16.689385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.486 [2024-04-24 05:24:16.689541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.486 [2024-04-24 05:24:16.689566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e3730 with addr=10.0.0.2, port=4420 00:28:39.486 [2024-04-24 05:24:16.689582] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e3730 is same with the state(5) to be set 00:28:39.486 [2024-04-24 05:24:16.689605] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e3730 (9): Bad file descriptor 00:28:39.486 [2024-04-24 05:24:16.689639] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:39.486 [2024-04-24 05:24:16.689656] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:39.486 [2024-04-24 05:24:16.689669] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:39.486 [2024-04-24 05:24:16.689689] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.486 05:24:16 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.486 05:24:16 -- common/autotest_common.sh@904 -- # return 0 00:28:39.486 05:24:16 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:39.486 [2024-04-24 05:24:16.699199] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:39.486 05:24:16 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:39.486 [2024-04-24 05:24:16.699427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.486 05:24:16 -- common/autotest_common.sh@901 -- # local max=10 00:28:39.486 [2024-04-24 05:24:16.699579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.486 05:24:16 -- common/autotest_common.sh@902 -- # (( max-- )) 00:28:39.486 [2024-04-24 05:24:16.699605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e3730 with addr=10.0.0.2, port=4420 00:28:39.486 [2024-04-24 05:24:16.699623] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e3730 is same with the state(5) to be set 00:28:39.486 [2024-04-24 05:24:16.699660] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e3730 (9): Bad file descriptor 00:28:39.486 [2024-04-24 05:24:16.699696] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:39.486 [2024-04-24 05:24:16.699714] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:39.486 [2024-04-24 05:24:16.699728] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:39.486 [2024-04-24 05:24:16.699747] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.486 05:24:16 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:28:39.486 05:24:16 -- common/autotest_common.sh@903 -- # get_bdev_list 00:28:39.486 05:24:16 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:39.486 05:24:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:39.486 05:24:16 -- common/autotest_common.sh@10 -- # set +x 00:28:39.486 05:24:16 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:39.486 05:24:16 -- host/discovery.sh@55 -- # sort 00:28:39.486 05:24:16 -- host/discovery.sh@55 -- # xargs 00:28:39.486 [2024-04-24 05:24:16.709274] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:39.486 [2024-04-24 05:24:16.709506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.486 [2024-04-24 05:24:16.709696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.486 [2024-04-24 05:24:16.709723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e3730 with addr=10.0.0.2, port=4420 00:28:39.486 [2024-04-24 05:24:16.709740] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e3730 is same with the state(5) to be set 00:28:39.486 [2024-04-24 05:24:16.709762] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e3730 (9): Bad file descriptor 00:28:39.486 [2024-04-24 05:24:16.709795] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:39.486 [2024-04-24 05:24:16.709813] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:39.486 [2024-04-24 05:24:16.709826] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:28:39.486 [2024-04-24 05:24:16.709847] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.486 [2024-04-24 05:24:16.719353] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:39.486 [2024-04-24 05:24:16.719558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.486 [2024-04-24 05:24:16.719743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.486 [2024-04-24 05:24:16.719771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e3730 with addr=10.0.0.2, port=4420 00:28:39.486 [2024-04-24 05:24:16.719787] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e3730 is same with the state(5) to be set 00:28:39.486 [2024-04-24 05:24:16.719809] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e3730 (9): Bad file descriptor 00:28:39.486 [2024-04-24 05:24:16.719855] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:39.486 [2024-04-24 05:24:16.719875] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:39.486 [2024-04-24 05:24:16.719889] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:39.486 [2024-04-24 05:24:16.719908] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.486 05:24:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:39.486 [2024-04-24 05:24:16.729429] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:39.486 [2024-04-24 05:24:16.729646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.486 [2024-04-24 05:24:16.729812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.486 [2024-04-24 05:24:16.729838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e3730 with addr=10.0.0.2, port=4420 00:28:39.486 [2024-04-24 05:24:16.729855] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e3730 is same with the state(5) to be set 00:28:39.486 [2024-04-24 05:24:16.729877] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e3730 (9): Bad file descriptor 00:28:39.486 [2024-04-24 05:24:16.729910] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:39.486 [2024-04-24 05:24:16.729928] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:39.486 [2024-04-24 05:24:16.729942] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:39.486 [2024-04-24 05:24:16.729962] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.487 [2024-04-24 05:24:16.739507] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:39.487 [2024-04-24 05:24:16.739745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.487 [2024-04-24 05:24:16.739903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.487 [2024-04-24 05:24:16.739929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e3730 with addr=10.0.0.2, port=4420 00:28:39.487 [2024-04-24 05:24:16.739945] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e3730 is same with the state(5) to be set 00:28:39.487 [2024-04-24 05:24:16.739968] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e3730 (9): Bad file descriptor 00:28:39.487 [2024-04-24 05:24:16.740013] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:39.487 [2024-04-24 05:24:16.740033] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:39.487 [2024-04-24 05:24:16.740047] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:39.487 [2024-04-24 05:24:16.740066] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
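An aside on the burst of `posix_sock_create: *ERROR*: connect() failed, errno = 111` records above: errno 111 on Linux is ECONNREFUSED, which is expected at this point in the test, since the `nvmf_subsystem_remove_listener ... -s 4420` RPC was issued a few records earlier and the host-side bdev_nvme reset path keeps retrying the now-removed 10.0.0.2:4420 listener until discovery drops the stale path. A quick way to confirm the errno mapping (not part of the original test script):

```shell
# errno 111 on Linux is ECONNREFUSED ("Connection refused") -- the connect()
# failures in the trace are the host retrying the removed 4420 listener.
python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
```

The retries stop once `discovery_remove_controllers` logs `10.0.0.2:4420 not found`, as seen shortly after this point in the trace.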
00:28:39.487 [2024-04-24 05:24:16.741009] bdev_nvme.c:6706:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:28:39.487 [2024-04-24 05:24:16.741038] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:39.487 05:24:16 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:39.487 05:24:16 -- common/autotest_common.sh@904 -- # return 0 00:28:39.487 05:24:16 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:28:39.487 05:24:16 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:28:39.487 05:24:16 -- common/autotest_common.sh@901 -- # local max=10 00:28:39.487 05:24:16 -- common/autotest_common.sh@902 -- # (( max-- )) 00:28:39.487 05:24:16 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:28:39.487 05:24:16 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:28:39.487 05:24:16 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:39.487 05:24:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:39.487 05:24:16 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:39.487 05:24:16 -- common/autotest_common.sh@10 -- # set +x 00:28:39.487 05:24:16 -- host/discovery.sh@63 -- # sort -n 00:28:39.487 05:24:16 -- host/discovery.sh@63 -- # xargs 00:28:39.745 05:24:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:39.745 05:24:16 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:28:39.745 05:24:16 -- common/autotest_common.sh@904 -- # return 0 00:28:39.745 05:24:16 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:28:39.745 05:24:16 -- host/discovery.sh@79 -- # expected_count=0 
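The `waitforcondition` helper exercised throughout this trace (here and at discovery.sh@120/121/122/130/131, etc.) can be reconstructed from its own xtrace output: `local cond`, `local max=10`, a `(( max-- ))` loop, an `eval` of the condition, `sleep 1` between attempts, and `return 0` on success. A sketch inferred purely from those trace lines — the actual autotest_common.sh source may differ in details such as the failure path:

```shell
# Retry helper reconstructed from the xtrace above: evaluate a condition
# string up to 10 times, one second apart, until it holds. The max=10,
# (( max-- )), eval, and sleep 1 steps all appear verbatim in the trace;
# the return-1-on-timeout branch is an assumption.
waitforcondition() {
	local cond=$1
	local max=10
	while (( max-- )); do
		if eval "$cond"; then
			return 0
		fi
		sleep 1
	done
	return 1
}

# Usage matching the trace, e.g. discovery.sh@120:
#   waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
```

Because the condition is a string passed through `eval`, callers can embed command substitutions like `$(get_bdev_list)` that are re-run on every attempt, which is why the trace shows the same `rpc_cmd`/`jq`/`sort`/`xargs` pipeline repeating each second until the expected state appears.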
00:28:39.745 05:24:16 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:39.745 05:24:16 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:39.745 05:24:16 -- common/autotest_common.sh@901 -- # local max=10 00:28:39.745 05:24:16 -- common/autotest_common.sh@902 -- # (( max-- )) 00:28:39.745 05:24:16 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:39.745 05:24:16 -- common/autotest_common.sh@903 -- # get_notification_count 00:28:39.745 05:24:16 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:39.745 05:24:16 -- host/discovery.sh@74 -- # jq '. | length' 00:28:39.745 05:24:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:39.745 05:24:16 -- common/autotest_common.sh@10 -- # set +x 00:28:39.745 05:24:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:39.745 05:24:16 -- host/discovery.sh@74 -- # notification_count=0 00:28:39.745 05:24:16 -- host/discovery.sh@75 -- # notify_id=2 00:28:39.745 05:24:16 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:28:39.745 05:24:16 -- common/autotest_common.sh@904 -- # return 0 00:28:39.745 05:24:16 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:28:39.745 05:24:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:39.745 05:24:16 -- common/autotest_common.sh@10 -- # set +x 00:28:39.745 05:24:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:39.745 05:24:16 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:28:39.745 05:24:16 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:28:39.745 05:24:16 -- common/autotest_common.sh@901 -- # local max=10 00:28:39.745 05:24:16 -- 
common/autotest_common.sh@902 -- # (( max-- )) 00:28:39.745 05:24:16 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:28:39.745 05:24:16 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:28:39.745 05:24:16 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:39.745 05:24:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:39.745 05:24:16 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:39.745 05:24:16 -- common/autotest_common.sh@10 -- # set +x 00:28:39.745 05:24:16 -- host/discovery.sh@59 -- # sort 00:28:39.745 05:24:16 -- host/discovery.sh@59 -- # xargs 00:28:39.745 05:24:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:39.745 05:24:16 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:28:39.745 05:24:16 -- common/autotest_common.sh@904 -- # return 0 00:28:39.745 05:24:16 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:28:39.745 05:24:16 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:28:39.745 05:24:16 -- common/autotest_common.sh@901 -- # local max=10 00:28:39.745 05:24:16 -- common/autotest_common.sh@902 -- # (( max-- )) 00:28:39.745 05:24:16 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:28:39.745 05:24:16 -- common/autotest_common.sh@903 -- # get_bdev_list 00:28:39.745 05:24:16 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:39.745 05:24:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:39.745 05:24:16 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:39.745 05:24:16 -- common/autotest_common.sh@10 -- # set +x 00:28:39.745 05:24:16 -- host/discovery.sh@55 -- # sort 00:28:39.745 05:24:16 -- host/discovery.sh@55 -- # xargs 00:28:39.745 05:24:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:39.745 05:24:16 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:28:39.745 05:24:16 -- 
common/autotest_common.sh@904 -- # return 0 00:28:39.745 05:24:16 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:28:39.745 05:24:16 -- host/discovery.sh@79 -- # expected_count=2 00:28:39.745 05:24:16 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:39.745 05:24:16 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:39.745 05:24:16 -- common/autotest_common.sh@901 -- # local max=10 00:28:39.745 05:24:16 -- common/autotest_common.sh@902 -- # (( max-- )) 00:28:39.745 05:24:16 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:39.745 05:24:16 -- common/autotest_common.sh@903 -- # get_notification_count 00:28:39.745 05:24:16 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:39.745 05:24:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:39.745 05:24:16 -- common/autotest_common.sh@10 -- # set +x 00:28:39.745 05:24:16 -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:39.745 05:24:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:39.745 05:24:16 -- host/discovery.sh@74 -- # notification_count=2 00:28:39.745 05:24:16 -- host/discovery.sh@75 -- # notify_id=4 00:28:39.745 05:24:16 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:28:39.745 05:24:16 -- common/autotest_common.sh@904 -- # return 0 00:28:39.745 05:24:16 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:39.745 05:24:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:39.745 05:24:16 -- common/autotest_common.sh@10 -- # set +x 00:28:41.118 [2024-04-24 05:24:17.977915] bdev_nvme.c:6919:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:41.118 [2024-04-24 05:24:17.977953] bdev_nvme.c:6999:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:41.118 [2024-04-24 05:24:17.977978] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:41.118 [2024-04-24 05:24:18.064265] bdev_nvme.c:6848:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:28:41.118 [2024-04-24 05:24:18.333370] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:41.118 [2024-04-24 05:24:18.333409] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:41.118 05:24:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:41.118 05:24:18 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:41.118 05:24:18 -- common/autotest_common.sh@638 -- # local es=0 00:28:41.118 05:24:18 -- 
common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:41.118 05:24:18 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:28:41.118 05:24:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:41.118 05:24:18 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:28:41.118 05:24:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:41.118 05:24:18 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:41.118 05:24:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:41.118 05:24:18 -- common/autotest_common.sh@10 -- # set +x 00:28:41.118 request: 00:28:41.118 { 00:28:41.118 "name": "nvme", 00:28:41.118 "trtype": "tcp", 00:28:41.118 "traddr": "10.0.0.2", 00:28:41.118 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:41.118 "adrfam": "ipv4", 00:28:41.118 "trsvcid": "8009", 00:28:41.118 "wait_for_attach": true, 00:28:41.118 "method": "bdev_nvme_start_discovery", 00:28:41.118 "req_id": 1 00:28:41.118 } 00:28:41.118 Got JSON-RPC error response 00:28:41.118 response: 00:28:41.118 { 00:28:41.118 "code": -17, 00:28:41.118 "message": "File exists" 00:28:41.118 } 00:28:41.118 05:24:18 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:28:41.118 05:24:18 -- common/autotest_common.sh@641 -- # es=1 00:28:41.118 05:24:18 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:28:41.118 05:24:18 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:28:41.118 05:24:18 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:28:41.118 05:24:18 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:28:41.118 05:24:18 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:41.118 05:24:18 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:41.118 05:24:18 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:28:41.118 05:24:18 -- common/autotest_common.sh@10 -- # set +x 00:28:41.118 05:24:18 -- host/discovery.sh@67 -- # sort 00:28:41.118 05:24:18 -- host/discovery.sh@67 -- # xargs 00:28:41.118 05:24:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:41.377 05:24:18 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:28:41.377 05:24:18 -- host/discovery.sh@146 -- # get_bdev_list 00:28:41.377 05:24:18 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:41.377 05:24:18 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:41.377 05:24:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:41.377 05:24:18 -- common/autotest_common.sh@10 -- # set +x 00:28:41.377 05:24:18 -- host/discovery.sh@55 -- # sort 00:28:41.377 05:24:18 -- host/discovery.sh@55 -- # xargs 00:28:41.377 05:24:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:41.377 05:24:18 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:41.377 05:24:18 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:41.377 05:24:18 -- common/autotest_common.sh@638 -- # local es=0 00:28:41.377 05:24:18 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:41.377 05:24:18 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:28:41.377 05:24:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:41.377 05:24:18 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:28:41.377 05:24:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:41.377 05:24:18 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 
-q nqn.2021-12.io.spdk:test -w 00:28:41.377 05:24:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:41.377 05:24:18 -- common/autotest_common.sh@10 -- # set +x 00:28:41.377 request: 00:28:41.377 { 00:28:41.377 "name": "nvme_second", 00:28:41.377 "trtype": "tcp", 00:28:41.377 "traddr": "10.0.0.2", 00:28:41.377 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:41.377 "adrfam": "ipv4", 00:28:41.377 "trsvcid": "8009", 00:28:41.377 "wait_for_attach": true, 00:28:41.377 "method": "bdev_nvme_start_discovery", 00:28:41.377 "req_id": 1 00:28:41.377 } 00:28:41.377 Got JSON-RPC error response 00:28:41.377 response: 00:28:41.377 { 00:28:41.377 "code": -17, 00:28:41.377 "message": "File exists" 00:28:41.377 } 00:28:41.377 05:24:18 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:28:41.377 05:24:18 -- common/autotest_common.sh@641 -- # es=1 00:28:41.377 05:24:18 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:28:41.377 05:24:18 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:28:41.377 05:24:18 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:28:41.378 05:24:18 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:28:41.378 05:24:18 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:41.378 05:24:18 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:41.378 05:24:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:41.378 05:24:18 -- common/autotest_common.sh@10 -- # set +x 00:28:41.378 05:24:18 -- host/discovery.sh@67 -- # sort 00:28:41.378 05:24:18 -- host/discovery.sh@67 -- # xargs 00:28:41.378 05:24:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:41.378 05:24:18 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:28:41.378 05:24:18 -- host/discovery.sh@152 -- # get_bdev_list 00:28:41.378 05:24:18 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:41.378 05:24:18 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:41.378 05:24:18 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:28:41.378 05:24:18 -- host/discovery.sh@55 -- # sort 00:28:41.378 05:24:18 -- common/autotest_common.sh@10 -- # set +x 00:28:41.378 05:24:18 -- host/discovery.sh@55 -- # xargs 00:28:41.378 05:24:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:41.378 05:24:18 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:41.378 05:24:18 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:41.378 05:24:18 -- common/autotest_common.sh@638 -- # local es=0 00:28:41.378 05:24:18 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:41.378 05:24:18 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:28:41.378 05:24:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:41.378 05:24:18 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:28:41.378 05:24:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:41.378 05:24:18 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:41.378 05:24:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:41.378 05:24:18 -- common/autotest_common.sh@10 -- # set +x 00:28:42.316 [2024-04-24 05:24:19.545366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.316 [2024-04-24 05:24:19.545601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.316 [2024-04-24 05:24:19.545636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fa870 with addr=10.0.0.2, port=8010 00:28:42.316 [2024-04-24 05:24:19.545670] 
nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:42.316 [2024-04-24 05:24:19.545687] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:42.316 [2024-04-24 05:24:19.545702] bdev_nvme.c:6981:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:28:43.694 [2024-04-24 05:24:20.547764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.694 [2024-04-24 05:24:20.547978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.694 [2024-04-24 05:24:20.548005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1210dd0 with addr=10.0.0.2, port=8010 00:28:43.694 [2024-04-24 05:24:20.548051] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:43.694 [2024-04-24 05:24:20.548068] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:43.694 [2024-04-24 05:24:20.548082] bdev_nvme.c:6981:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:28:44.632 [2024-04-24 05:24:21.549925] bdev_nvme.c:6962:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:28:44.632 request: 00:28:44.632 { 00:28:44.632 "name": "nvme_second", 00:28:44.632 "trtype": "tcp", 00:28:44.632 "traddr": "10.0.0.2", 00:28:44.632 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:44.632 "adrfam": "ipv4", 00:28:44.632 "trsvcid": "8010", 00:28:44.632 "attach_timeout_ms": 3000, 00:28:44.632 "method": "bdev_nvme_start_discovery", 00:28:44.632 "req_id": 1 00:28:44.632 } 00:28:44.632 Got JSON-RPC error response 00:28:44.632 response: 00:28:44.632 { 00:28:44.632 "code": -110, 00:28:44.632 "message": "Connection timed out" 00:28:44.632 } 00:28:44.632 05:24:21 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:28:44.632 05:24:21 -- common/autotest_common.sh@641 -- # es=1 00:28:44.632 05:24:21 -- common/autotest_common.sh@649 
-- # (( es > 128 )) 00:28:44.632 05:24:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:28:44.632 05:24:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:28:44.632 05:24:21 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:28:44.632 05:24:21 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:44.632 05:24:21 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:44.632 05:24:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:44.632 05:24:21 -- host/discovery.sh@67 -- # sort 00:28:44.632 05:24:21 -- common/autotest_common.sh@10 -- # set +x 00:28:44.632 05:24:21 -- host/discovery.sh@67 -- # xargs 00:28:44.632 05:24:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:44.632 05:24:21 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:28:44.632 05:24:21 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:28:44.632 05:24:21 -- host/discovery.sh@161 -- # kill 1991721 00:28:44.632 05:24:21 -- host/discovery.sh@162 -- # nvmftestfini 00:28:44.632 05:24:21 -- nvmf/common.sh@477 -- # nvmfcleanup 00:28:44.632 05:24:21 -- nvmf/common.sh@117 -- # sync 00:28:44.632 05:24:21 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:44.632 05:24:21 -- nvmf/common.sh@120 -- # set +e 00:28:44.632 05:24:21 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:44.632 05:24:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:44.632 rmmod nvme_tcp 00:28:44.632 rmmod nvme_fabrics 00:28:44.632 rmmod nvme_keyring 00:28:44.632 05:24:21 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:44.632 05:24:21 -- nvmf/common.sh@124 -- # set -e 00:28:44.632 05:24:21 -- nvmf/common.sh@125 -- # return 0 00:28:44.632 05:24:21 -- nvmf/common.sh@478 -- # '[' -n 1991696 ']' 00:28:44.632 05:24:21 -- nvmf/common.sh@479 -- # killprocess 1991696 00:28:44.632 05:24:21 -- common/autotest_common.sh@936 -- # '[' -z 1991696 ']' 00:28:44.632 05:24:21 -- common/autotest_common.sh@940 -- # kill -0 1991696 00:28:44.632 05:24:21 
-- common/autotest_common.sh@941 -- # uname 00:28:44.632 05:24:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:44.632 05:24:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1991696 00:28:44.632 05:24:21 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:28:44.632 05:24:21 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:28:44.632 05:24:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1991696' 00:28:44.632 killing process with pid 1991696 00:28:44.632 05:24:21 -- common/autotest_common.sh@955 -- # kill 1991696 00:28:44.632 05:24:21 -- common/autotest_common.sh@960 -- # wait 1991696 00:28:44.893 05:24:21 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:28:44.893 05:24:21 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:28:44.893 05:24:21 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:28:44.893 05:24:21 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:44.893 05:24:21 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:44.893 05:24:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:44.893 05:24:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:44.893 05:24:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:46.830 05:24:23 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:46.830 00:28:46.830 real 0m14.325s 00:28:46.830 user 0m21.428s 00:28:46.830 sys 0m2.901s 00:28:46.830 05:24:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:46.830 05:24:23 -- common/autotest_common.sh@10 -- # set +x 00:28:46.830 ************************************ 00:28:46.830 END TEST nvmf_discovery 00:28:46.830 ************************************ 00:28:46.830 05:24:24 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:46.830 05:24:24 -- common/autotest_common.sh@1087 -- # '[' 3 
-le 1 ']' 00:28:46.830 05:24:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:46.830 05:24:24 -- common/autotest_common.sh@10 -- # set +x 00:28:47.088 ************************************ 00:28:47.088 START TEST nvmf_discovery_remove_ifc 00:28:47.088 ************************************ 00:28:47.088 05:24:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:47.088 * Looking for test storage... 00:28:47.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:47.089 05:24:24 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:47.089 05:24:24 -- nvmf/common.sh@7 -- # uname -s 00:28:47.089 05:24:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:47.089 05:24:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:47.089 05:24:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:47.089 05:24:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:47.089 05:24:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:47.089 05:24:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:47.089 05:24:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:47.089 05:24:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:47.089 05:24:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:47.089 05:24:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:47.089 05:24:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:47.089 05:24:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:47.089 05:24:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:47.089 05:24:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:47.089 05:24:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:28:47.089 05:24:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:47.089 05:24:24 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:47.089 05:24:24 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:47.089 05:24:24 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:47.089 05:24:24 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:47.089 05:24:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.089 05:24:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.089 05:24:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.089 05:24:24 -- paths/export.sh@5 -- # export PATH 00:28:47.089 05:24:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.089 05:24:24 -- nvmf/common.sh@47 -- # : 0 00:28:47.089 05:24:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:47.089 05:24:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:47.089 05:24:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:47.089 05:24:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:47.089 05:24:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:47.089 05:24:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:47.089 05:24:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:47.089 05:24:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:47.089 05:24:24 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:28:47.089 05:24:24 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:28:47.089 05:24:24 -- 
host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:28:47.089 05:24:24 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:28:47.089 05:24:24 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:28:47.089 05:24:24 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:28:47.089 05:24:24 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:28:47.089 05:24:24 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:28:47.089 05:24:24 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:47.089 05:24:24 -- nvmf/common.sh@437 -- # prepare_net_devs 00:28:47.089 05:24:24 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:28:47.089 05:24:24 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:28:47.089 05:24:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:47.089 05:24:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:47.089 05:24:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:47.089 05:24:24 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:28:47.089 05:24:24 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:28:47.089 05:24:24 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:47.089 05:24:24 -- common/autotest_common.sh@10 -- # set +x 00:28:48.993 05:24:26 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:48.993 05:24:26 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:48.993 05:24:26 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:48.993 05:24:26 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:48.993 05:24:26 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:48.993 05:24:26 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:48.993 05:24:26 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:48.993 05:24:26 -- nvmf/common.sh@295 -- # net_devs=() 00:28:48.993 05:24:26 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:48.993 05:24:26 -- nvmf/common.sh@296 -- # e810=() 
00:28:48.993 05:24:26 -- nvmf/common.sh@296 -- # local -ga e810 00:28:48.993 05:24:26 -- nvmf/common.sh@297 -- # x722=() 00:28:48.993 05:24:26 -- nvmf/common.sh@297 -- # local -ga x722 00:28:48.993 05:24:26 -- nvmf/common.sh@298 -- # mlx=() 00:28:48.993 05:24:26 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:48.993 05:24:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:48.993 05:24:26 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:48.993 05:24:26 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:48.993 05:24:26 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:48.993 05:24:26 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:48.993 05:24:26 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:48.993 05:24:26 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:48.993 05:24:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:48.993 05:24:26 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:48.993 05:24:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:48.993 05:24:26 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:48.993 05:24:26 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:48.993 05:24:26 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:48.993 05:24:26 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:48.993 05:24:26 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:48.993 05:24:26 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:48.993 05:24:26 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:48.993 05:24:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:48.993 05:24:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:48.993 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:48.993 05:24:26 -- nvmf/common.sh@342 -- 
# [[ ice == unknown ]] 00:28:48.993 05:24:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:48.993 05:24:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:48.993 05:24:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:48.993 05:24:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:48.993 05:24:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:48.993 05:24:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:48.993 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:48.993 05:24:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:48.993 05:24:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:48.993 05:24:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:48.993 05:24:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:48.993 05:24:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:48.993 05:24:26 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:48.993 05:24:26 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:48.993 05:24:26 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:48.993 05:24:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:48.993 05:24:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:48.993 05:24:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:48.993 05:24:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:48.993 05:24:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:48.993 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:48.993 05:24:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:48.993 05:24:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:48.993 05:24:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:48.993 05:24:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:48.993 05:24:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:28:48.993 05:24:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:48.993 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:48.993 05:24:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:48.993 05:24:26 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:28:48.993 05:24:26 -- nvmf/common.sh@403 -- # is_hw=yes 00:28:48.994 05:24:26 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:28:48.994 05:24:26 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:28:48.994 05:24:26 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:28:48.994 05:24:26 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:48.994 05:24:26 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:48.994 05:24:26 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:48.994 05:24:26 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:48.994 05:24:26 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:48.994 05:24:26 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:48.994 05:24:26 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:48.994 05:24:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:48.994 05:24:26 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:48.994 05:24:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:48.994 05:24:26 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:48.994 05:24:26 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:48.994 05:24:26 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:48.994 05:24:26 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:48.994 05:24:26 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:48.994 05:24:26 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:48.994 05:24:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:48.994 
05:24:26 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:48.994 05:24:26 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:48.994 05:24:26 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:48.994 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:48.994 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:28:48.994 00:28:48.994 --- 10.0.0.2 ping statistics --- 00:28:48.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.994 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:28:48.994 05:24:26 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:48.994 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:48.994 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:28:48.994 00:28:48.994 --- 10.0.0.1 ping statistics --- 00:28:48.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.994 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:28:48.994 05:24:26 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:48.994 05:24:26 -- nvmf/common.sh@411 -- # return 0 00:28:48.994 05:24:26 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:28:48.994 05:24:26 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:48.994 05:24:26 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:28:48.994 05:24:26 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:28:48.994 05:24:26 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:48.994 05:24:26 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:28:48.994 05:24:26 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:28:49.252 05:24:26 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:28:49.252 05:24:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:28:49.252 05:24:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:49.252 05:24:26 -- common/autotest_common.sh@10 -- # set +x 00:28:49.252 05:24:26 -- 
nvmf/common.sh@470 -- # nvmfpid=1995010 00:28:49.252 05:24:26 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:49.252 05:24:26 -- nvmf/common.sh@471 -- # waitforlisten 1995010 00:28:49.252 05:24:26 -- common/autotest_common.sh@817 -- # '[' -z 1995010 ']' 00:28:49.252 05:24:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:49.252 05:24:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:49.252 05:24:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:49.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:49.252 05:24:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:49.252 05:24:26 -- common/autotest_common.sh@10 -- # set +x 00:28:49.252 [2024-04-24 05:24:26.315741] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:28:49.252 [2024-04-24 05:24:26.315826] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:49.252 EAL: No free 2048 kB hugepages reported on node 1 00:28:49.252 [2024-04-24 05:24:26.351378] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:49.252 [2024-04-24 05:24:26.381267] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:49.252 [2024-04-24 05:24:26.471743] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:49.252 [2024-04-24 05:24:26.471791] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:49.252 [2024-04-24 05:24:26.471806] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:49.252 [2024-04-24 05:24:26.471818] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:49.252 [2024-04-24 05:24:26.471829] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:49.252 [2024-04-24 05:24:26.471859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:49.511 05:24:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:49.511 05:24:26 -- common/autotest_common.sh@850 -- # return 0 00:28:49.511 05:24:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:28:49.511 05:24:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:49.511 05:24:26 -- common/autotest_common.sh@10 -- # set +x 00:28:49.511 05:24:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:49.511 05:24:26 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:28:49.511 05:24:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:49.511 05:24:26 -- common/autotest_common.sh@10 -- # set +x 00:28:49.511 [2024-04-24 05:24:26.624788] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:49.511 [2024-04-24 05:24:26.632967] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:49.511 null0 00:28:49.511 [2024-04-24 05:24:26.664914] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:49.511 05:24:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:49.511 05:24:26 -- host/discovery_remove_ifc.sh@59 -- # hostpid=1995035 00:28:49.511 05:24:26 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:28:49.511 05:24:26 -- host/discovery_remove_ifc.sh@60 -- 
# waitforlisten 1995035 /tmp/host.sock 00:28:49.511 05:24:26 -- common/autotest_common.sh@817 -- # '[' -z 1995035 ']' 00:28:49.511 05:24:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:28:49.511 05:24:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:49.511 05:24:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:49.511 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:49.511 05:24:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:49.511 05:24:26 -- common/autotest_common.sh@10 -- # set +x 00:28:49.511 [2024-04-24 05:24:26.729367] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:28:49.511 [2024-04-24 05:24:26.729447] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1995035 ] 00:28:49.511 EAL: No free 2048 kB hugepages reported on node 1 00:28:49.511 [2024-04-24 05:24:26.762319] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:28:49.769 [2024-04-24 05:24:26.793214] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:49.769 [2024-04-24 05:24:26.882137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:49.769 05:24:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:49.769 05:24:26 -- common/autotest_common.sh@850 -- # return 0 00:28:49.769 05:24:26 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:49.769 05:24:26 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:28:49.769 05:24:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:49.769 05:24:26 -- common/autotest_common.sh@10 -- # set +x 00:28:49.769 05:24:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:49.769 05:24:26 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:28:49.769 05:24:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:49.769 05:24:26 -- common/autotest_common.sh@10 -- # set +x 00:28:50.028 05:24:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:50.028 05:24:27 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:28:50.028 05:24:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:50.028 05:24:27 -- common/autotest_common.sh@10 -- # set +x 00:28:50.967 [2024-04-24 05:24:28.143405] bdev_nvme.c:6919:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:50.967 [2024-04-24 05:24:28.143442] bdev_nvme.c:6999:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:50.967 [2024-04-24 05:24:28.143464] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery 
log page command 00:28:51.226 [2024-04-24 05:24:28.271895] bdev_nvme.c:6848:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:51.226 [2024-04-24 05:24:28.495097] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:51.226 [2024-04-24 05:24:28.495164] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:51.226 [2024-04-24 05:24:28.495206] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:51.226 [2024-04-24 05:24:28.495231] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:51.226 [2024-04-24 05:24:28.495268] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:51.226 05:24:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:51.226 05:24:28 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:28:51.485 05:24:28 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:51.486 05:24:28 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:51.486 05:24:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:51.486 05:24:28 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:51.486 05:24:28 -- common/autotest_common.sh@10 -- # set +x 00:28:51.486 05:24:28 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:51.486 05:24:28 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:51.486 [2024-04-24 05:24:28.501363] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x22520c0 was disconnected and freed. delete nvme_qpair. 
00:28:51.486 05:24:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:51.486 05:24:28 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:28:51.486 05:24:28 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:28:51.486 05:24:28 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:28:51.486 05:24:28 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:28:51.486 05:24:28 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:51.486 05:24:28 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:51.486 05:24:28 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:51.486 05:24:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:51.486 05:24:28 -- common/autotest_common.sh@10 -- # set +x 00:28:51.486 05:24:28 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:51.486 05:24:28 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:51.486 05:24:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:51.486 05:24:28 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:51.486 05:24:28 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:52.422 05:24:29 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:52.422 05:24:29 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:52.422 05:24:29 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:52.422 05:24:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:52.422 05:24:29 -- common/autotest_common.sh@10 -- # set +x 00:28:52.422 05:24:29 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:52.422 05:24:29 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:52.422 05:24:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:52.422 05:24:29 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:52.422 05:24:29 -- host/discovery_remove_ifc.sh@34 -- 
# sleep 1 00:28:53.800 05:24:30 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:53.800 05:24:30 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:53.800 05:24:30 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:53.800 05:24:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:53.800 05:24:30 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:53.800 05:24:30 -- common/autotest_common.sh@10 -- # set +x 00:28:53.800 05:24:30 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:53.800 05:24:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:53.800 05:24:30 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:53.800 05:24:30 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:54.741 05:24:31 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:54.741 05:24:31 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:54.741 05:24:31 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:54.741 05:24:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:54.741 05:24:31 -- common/autotest_common.sh@10 -- # set +x 00:28:54.741 05:24:31 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:54.741 05:24:31 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:54.741 05:24:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:54.741 05:24:31 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:54.741 05:24:31 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:55.681 05:24:32 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:55.681 05:24:32 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:55.681 05:24:32 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:55.681 05:24:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:55.681 05:24:32 -- common/autotest_common.sh@10 -- # set +x 00:28:55.681 05:24:32 -- host/discovery_remove_ifc.sh@29 -- # sort 
00:28:55.681 05:24:32 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:55.681 05:24:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:55.681 05:24:32 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:55.681 05:24:32 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:56.619 05:24:33 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:56.619 05:24:33 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:56.619 05:24:33 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:56.619 05:24:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:56.619 05:24:33 -- common/autotest_common.sh@10 -- # set +x 00:28:56.619 05:24:33 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:56.619 05:24:33 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:56.619 05:24:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:56.619 05:24:33 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:56.619 05:24:33 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:56.879 [2024-04-24 05:24:33.936473] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:28:56.879 [2024-04-24 05:24:33.936560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.879 [2024-04-24 05:24:33.936583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.879 [2024-04-24 05:24:33.936603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.879 [2024-04-24 05:24:33.936617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.879 [2024-04-24 
05:24:33.936653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.879 [2024-04-24 05:24:33.936670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.879 [2024-04-24 05:24:33.936684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.879 [2024-04-24 05:24:33.936698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.879 [2024-04-24 05:24:33.936721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.879 [2024-04-24 05:24:33.936735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.879 [2024-04-24 05:24:33.936749] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218400 is same with the state(5) to be set 00:28:56.879 [2024-04-24 05:24:33.946489] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2218400 (9): Bad file descriptor 00:28:56.879 [2024-04-24 05:24:33.956534] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:57.816 05:24:34 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:57.816 05:24:34 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:57.816 05:24:34 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:57.816 05:24:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:57.816 05:24:34 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:57.816 05:24:34 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:57.816 05:24:34 -- 
common/autotest_common.sh@10 -- # set +x 00:28:57.816 [2024-04-24 05:24:35.021671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:28:59.190 [2024-04-24 05:24:36.045708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:28:59.190 [2024-04-24 05:24:36.045765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2218400 with addr=10.0.0.2, port=4420 00:28:59.190 [2024-04-24 05:24:36.045791] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218400 is same with the state(5) to be set 00:28:59.190 [2024-04-24 05:24:36.046244] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2218400 (9): Bad file descriptor 00:28:59.190 [2024-04-24 05:24:36.046285] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.190 [2024-04-24 05:24:36.046319] bdev_nvme.c:6670:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:28:59.190 [2024-04-24 05:24:36.046373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:59.190 [2024-04-24 05:24:36.046411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.190 [2024-04-24 05:24:36.046433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:59.190 [2024-04-24 05:24:36.046446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.190 [2024-04-24 05:24:36.046460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:59.190 [2024-04-24 05:24:36.046473] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.190 [2024-04-24 05:24:36.046487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:59.190 [2024-04-24 05:24:36.046500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.190 [2024-04-24 05:24:36.046514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:59.190 [2024-04-24 05:24:36.046528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.190 [2024-04-24 05:24:36.046542] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:28:59.190 [2024-04-24 05:24:36.046826] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2218810 (9): Bad file descriptor 00:28:59.190 [2024-04-24 05:24:36.047851] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:28:59.190 [2024-04-24 05:24:36.047882] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:28:59.190 05:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:59.190 05:24:36 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:59.190 05:24:36 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:00.129 05:24:37 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:00.129 05:24:37 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:00.129 05:24:37 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:00.129 05:24:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:00.129 
05:24:37 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:00.129 05:24:37 -- common/autotest_common.sh@10 -- # set +x 00:29:00.129 05:24:37 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:00.129 05:24:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:00.129 05:24:37 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:29:00.129 05:24:37 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:00.129 05:24:37 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:00.129 05:24:37 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:29:00.129 05:24:37 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:00.129 05:24:37 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:00.129 05:24:37 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:00.129 05:24:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:00.129 05:24:37 -- common/autotest_common.sh@10 -- # set +x 00:29:00.129 05:24:37 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:00.129 05:24:37 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:00.129 05:24:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:00.129 05:24:37 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:29:00.129 05:24:37 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:01.062 [2024-04-24 05:24:38.058887] bdev_nvme.c:6919:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:01.062 [2024-04-24 05:24:38.058910] bdev_nvme.c:6999:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:01.062 [2024-04-24 05:24:38.058947] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:01.062 [2024-04-24 05:24:38.185381] bdev_nvme.c:6848:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:29:01.062 05:24:38 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:01.062 05:24:38 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:01.062 05:24:38 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:01.062 05:24:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:01.062 05:24:38 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:01.062 05:24:38 -- common/autotest_common.sh@10 -- # set +x 00:29:01.062 05:24:38 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:01.062 05:24:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:01.062 05:24:38 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:29:01.062 05:24:38 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:01.323 [2024-04-24 05:24:38.408829] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:29:01.323 [2024-04-24 05:24:38.408873] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:29:01.323 [2024-04-24 05:24:38.408903] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:29:01.323 [2024-04-24 05:24:38.408937] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:29:01.323 [2024-04-24 05:24:38.408952] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:01.323 [2024-04-24 05:24:38.417908] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x225c6e0 was disconnected and freed. delete nvme_qpair. 
00:29:02.280 05:24:39 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:02.280 05:24:39 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:02.280 05:24:39 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:02.280 05:24:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:02.280 05:24:39 -- common/autotest_common.sh@10 -- # set +x 00:29:02.280 05:24:39 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:02.280 05:24:39 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:02.280 05:24:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:02.280 05:24:39 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:29:02.280 05:24:39 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:29:02.280 05:24:39 -- host/discovery_remove_ifc.sh@90 -- # killprocess 1995035 00:29:02.280 05:24:39 -- common/autotest_common.sh@936 -- # '[' -z 1995035 ']' 00:29:02.280 05:24:39 -- common/autotest_common.sh@940 -- # kill -0 1995035 00:29:02.280 05:24:39 -- common/autotest_common.sh@941 -- # uname 00:29:02.280 05:24:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:02.280 05:24:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1995035 00:29:02.280 05:24:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:02.280 05:24:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:02.280 05:24:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1995035' 00:29:02.280 killing process with pid 1995035 00:29:02.280 05:24:39 -- common/autotest_common.sh@955 -- # kill 1995035 00:29:02.280 05:24:39 -- common/autotest_common.sh@960 -- # wait 1995035 00:29:02.280 05:24:39 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:29:02.280 05:24:39 -- nvmf/common.sh@477 -- # nvmfcleanup 00:29:02.280 05:24:39 -- nvmf/common.sh@117 -- # sync 00:29:02.280 05:24:39 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:02.280 
05:24:39 -- nvmf/common.sh@120 -- # set +e 00:29:02.280 05:24:39 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:02.280 05:24:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:02.280 rmmod nvme_tcp 00:29:02.280 rmmod nvme_fabrics 00:29:02.280 rmmod nvme_keyring 00:29:02.280 05:24:39 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:02.538 05:24:39 -- nvmf/common.sh@124 -- # set -e 00:29:02.538 05:24:39 -- nvmf/common.sh@125 -- # return 0 00:29:02.538 05:24:39 -- nvmf/common.sh@478 -- # '[' -n 1995010 ']' 00:29:02.538 05:24:39 -- nvmf/common.sh@479 -- # killprocess 1995010 00:29:02.538 05:24:39 -- common/autotest_common.sh@936 -- # '[' -z 1995010 ']' 00:29:02.538 05:24:39 -- common/autotest_common.sh@940 -- # kill -0 1995010 00:29:02.538 05:24:39 -- common/autotest_common.sh@941 -- # uname 00:29:02.538 05:24:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:02.538 05:24:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1995010 00:29:02.538 05:24:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:02.538 05:24:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:02.538 05:24:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1995010' 00:29:02.538 killing process with pid 1995010 00:29:02.538 05:24:39 -- common/autotest_common.sh@955 -- # kill 1995010 00:29:02.538 05:24:39 -- common/autotest_common.sh@960 -- # wait 1995010 00:29:02.538 05:24:39 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:29:02.538 05:24:39 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:02.538 05:24:39 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:02.538 05:24:39 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:02.538 05:24:39 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:02.538 05:24:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:02.538 05:24:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:29:02.538 05:24:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:05.079 05:24:41 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:05.079 00:29:05.079 real 0m17.722s 00:29:05.079 user 0m24.801s 00:29:05.079 sys 0m2.983s 00:29:05.079 05:24:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:05.079 05:24:41 -- common/autotest_common.sh@10 -- # set +x 00:29:05.079 ************************************ 00:29:05.079 END TEST nvmf_discovery_remove_ifc 00:29:05.079 ************************************ 00:29:05.079 05:24:41 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:29:05.079 05:24:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:05.079 05:24:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:05.079 05:24:41 -- common/autotest_common.sh@10 -- # set +x 00:29:05.079 ************************************ 00:29:05.079 START TEST nvmf_identify_kernel_target 00:29:05.079 ************************************ 00:29:05.079 05:24:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:29:05.079 * Looking for test storage... 
00:29:05.079 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:05.079 05:24:42 -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:05.079 05:24:42 -- nvmf/common.sh@7 -- # uname -s 00:29:05.079 05:24:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:05.079 05:24:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:05.079 05:24:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:05.079 05:24:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:05.079 05:24:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:05.079 05:24:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:05.079 05:24:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:05.079 05:24:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:05.079 05:24:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:05.079 05:24:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:05.079 05:24:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:05.079 05:24:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:05.079 05:24:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:05.079 05:24:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:05.079 05:24:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:05.079 05:24:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:05.079 05:24:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:05.079 05:24:42 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:05.079 05:24:42 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:05.079 05:24:42 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:05.079 05:24:42 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.079 05:24:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.079 05:24:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.079 05:24:42 -- paths/export.sh@5 -- # export PATH 00:29:05.079 05:24:42 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.079 05:24:42 -- nvmf/common.sh@47 -- # : 0 00:29:05.079 05:24:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:05.079 05:24:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:05.079 05:24:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:05.079 05:24:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:05.079 05:24:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:05.080 05:24:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:05.080 05:24:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:05.080 05:24:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:05.080 05:24:42 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:29:05.080 05:24:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:29:05.080 05:24:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:05.080 05:24:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:29:05.080 05:24:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:29:05.080 05:24:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:29:05.080 05:24:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:05.080 05:24:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:05.080 05:24:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:05.080 05:24:42 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:29:05.080 05:24:42 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:29:05.080 
05:24:42 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:05.080 05:24:42 -- common/autotest_common.sh@10 -- # set +x 00:29:06.978 05:24:43 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:06.978 05:24:43 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:06.978 05:24:43 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:06.978 05:24:43 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:06.978 05:24:43 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:06.978 05:24:43 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:06.978 05:24:43 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:06.978 05:24:43 -- nvmf/common.sh@295 -- # net_devs=() 00:29:06.978 05:24:43 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:06.978 05:24:43 -- nvmf/common.sh@296 -- # e810=() 00:29:06.978 05:24:43 -- nvmf/common.sh@296 -- # local -ga e810 00:29:06.978 05:24:43 -- nvmf/common.sh@297 -- # x722=() 00:29:06.978 05:24:43 -- nvmf/common.sh@297 -- # local -ga x722 00:29:06.978 05:24:43 -- nvmf/common.sh@298 -- # mlx=() 00:29:06.978 05:24:43 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:06.978 05:24:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:06.978 05:24:43 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:06.978 05:24:43 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:06.978 05:24:43 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:06.978 05:24:43 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:06.978 05:24:43 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:06.978 05:24:43 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:06.978 05:24:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:06.978 05:24:43 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:06.978 05:24:43 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:06.978 05:24:43 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:06.978 05:24:43 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:06.978 05:24:43 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:06.978 05:24:43 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:06.978 05:24:43 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:06.978 05:24:43 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:06.978 05:24:43 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:06.979 05:24:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:06.979 05:24:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:06.979 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:06.979 05:24:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:06.979 05:24:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:06.979 05:24:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:06.979 05:24:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:06.979 05:24:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:06.979 05:24:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:06.979 05:24:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:06.979 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:06.979 05:24:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:06.979 05:24:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:06.979 05:24:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:06.979 05:24:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:06.979 05:24:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:06.979 05:24:43 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:06.979 05:24:43 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:06.979 05:24:43 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:06.979 05:24:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:29:06.979 05:24:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:06.979 05:24:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:29:06.979 05:24:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:06.979 05:24:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:06.979 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:06.979 05:24:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:29:06.979 05:24:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:06.979 05:24:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:06.979 05:24:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:29:06.979 05:24:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:06.979 05:24:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:06.979 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:06.979 05:24:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:29:06.979 05:24:43 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:29:06.979 05:24:43 -- nvmf/common.sh@403 -- # is_hw=yes 00:29:06.979 05:24:43 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:29:06.979 05:24:43 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:29:06.979 05:24:43 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:29:06.979 05:24:43 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:06.979 05:24:43 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:06.979 05:24:43 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:06.979 05:24:43 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:06.979 05:24:43 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:06.979 05:24:43 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:06.979 05:24:43 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:06.979 05:24:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:29:06.979 05:24:43 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:06.979 05:24:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:06.979 05:24:43 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:06.979 05:24:43 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:06.979 05:24:43 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:06.979 05:24:43 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:06.979 05:24:43 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:06.979 05:24:43 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:06.979 05:24:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:06.979 05:24:43 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:06.979 05:24:43 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:06.979 05:24:43 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:06.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:06.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:29:06.979 00:29:06.979 --- 10.0.0.2 ping statistics --- 00:29:06.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:06.979 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:29:06.979 05:24:43 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:06.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:06.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:29:06.979 00:29:06.979 --- 10.0.0.1 ping statistics --- 00:29:06.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:06.979 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:29:06.979 05:24:43 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:06.979 05:24:43 -- nvmf/common.sh@411 -- # return 0 00:29:06.979 05:24:43 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:29:06.979 05:24:43 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:06.979 05:24:43 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:29:06.979 05:24:43 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:29:06.979 05:24:43 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:06.979 05:24:43 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:29:06.979 05:24:43 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:29:06.979 05:24:44 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:29:06.979 05:24:44 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:29:06.979 05:24:44 -- nvmf/common.sh@717 -- # local ip 00:29:06.979 05:24:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:06.979 05:24:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:06.979 05:24:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:06.979 05:24:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:06.979 05:24:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:06.979 05:24:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:06.979 05:24:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:06.979 05:24:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:06.979 05:24:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:06.979 05:24:44 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:29:06.979 05:24:44 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target 
nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:29:06.979 05:24:44 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:29:06.979 05:24:44 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:29:06.979 05:24:44 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:06.979 05:24:44 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:06.979 05:24:44 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:06.979 05:24:44 -- nvmf/common.sh@628 -- # local block nvme 00:29:06.979 05:24:44 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:29:06.979 05:24:44 -- nvmf/common.sh@631 -- # modprobe nvmet 00:29:06.979 05:24:44 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:06.979 05:24:44 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:07.917 Waiting for block devices as requested 00:29:07.917 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:29:08.175 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:08.175 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:08.175 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:08.432 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:08.432 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:08.432 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:08.432 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:08.692 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:08.692 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:08.692 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:08.692 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:08.951 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:08.951 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:08.951 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:08.951 0000:80:04.1 (8086 0e21): vfio-pci 
-> ioatdma 00:29:09.209 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:09.209 05:24:46 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:29:09.209 05:24:46 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:09.209 05:24:46 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:29:09.209 05:24:46 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:29:09.209 05:24:46 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:09.209 05:24:46 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:09.210 05:24:46 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:29:09.210 05:24:46 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:29:09.210 05:24:46 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:09.210 No valid GPT data, bailing 00:29:09.210 05:24:46 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:09.210 05:24:46 -- scripts/common.sh@391 -- # pt= 00:29:09.210 05:24:46 -- scripts/common.sh@392 -- # return 1 00:29:09.210 05:24:46 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:29:09.210 05:24:46 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:29:09.210 05:24:46 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:09.210 05:24:46 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:09.210 05:24:46 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:09.210 05:24:46 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:29:09.210 05:24:46 -- nvmf/common.sh@656 -- # echo 1 00:29:09.210 05:24:46 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:29:09.210 05:24:46 -- nvmf/common.sh@658 -- # echo 1 00:29:09.210 05:24:46 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:29:09.210 05:24:46 -- nvmf/common.sh@661 -- # echo tcp 00:29:09.210 05:24:46 -- nvmf/common.sh@662 -- # 
echo 4420 00:29:09.210 05:24:46 -- nvmf/common.sh@663 -- # echo ipv4 00:29:09.210 05:24:46 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:09.210 05:24:46 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:29:09.470 00:29:09.470 Discovery Log Number of Records 2, Generation counter 2 00:29:09.470 =====Discovery Log Entry 0====== 00:29:09.470 trtype: tcp 00:29:09.470 adrfam: ipv4 00:29:09.470 subtype: current discovery subsystem 00:29:09.470 treq: not specified, sq flow control disable supported 00:29:09.470 portid: 1 00:29:09.470 trsvcid: 4420 00:29:09.470 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:09.470 traddr: 10.0.0.1 00:29:09.470 eflags: none 00:29:09.470 sectype: none 00:29:09.470 =====Discovery Log Entry 1====== 00:29:09.470 trtype: tcp 00:29:09.470 adrfam: ipv4 00:29:09.470 subtype: nvme subsystem 00:29:09.470 treq: not specified, sq flow control disable supported 00:29:09.470 portid: 1 00:29:09.470 trsvcid: 4420 00:29:09.470 subnqn: nqn.2016-06.io.spdk:testnqn 00:29:09.470 traddr: 10.0.0.1 00:29:09.470 eflags: none 00:29:09.470 sectype: none 00:29:09.470 05:24:46 -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:29:09.470 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:29:09.470 EAL: No free 2048 kB hugepages reported on node 1 00:29:09.470 ===================================================== 00:29:09.470 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:09.470 ===================================================== 00:29:09.470 Controller Capabilities/Features 00:29:09.470 ================================ 00:29:09.470 Vendor ID: 0000 
00:29:09.470 Subsystem Vendor ID: 0000 00:29:09.470 Serial Number: 18029e31fecd965ab447 00:29:09.470 Model Number: Linux 00:29:09.470 Firmware Version: 6.7.0-68 00:29:09.470 Recommended Arb Burst: 0 00:29:09.470 IEEE OUI Identifier: 00 00 00 00:29:09.470 Multi-path I/O 00:29:09.470 May have multiple subsystem ports: No 00:29:09.470 May have multiple controllers: No 00:29:09.470 Associated with SR-IOV VF: No 00:29:09.470 Max Data Transfer Size: Unlimited 00:29:09.470 Max Number of Namespaces: 0 00:29:09.470 Max Number of I/O Queues: 1024 00:29:09.470 NVMe Specification Version (VS): 1.3 00:29:09.470 NVMe Specification Version (Identify): 1.3 00:29:09.470 Maximum Queue Entries: 1024 00:29:09.470 Contiguous Queues Required: No 00:29:09.470 Arbitration Mechanisms Supported 00:29:09.470 Weighted Round Robin: Not Supported 00:29:09.470 Vendor Specific: Not Supported 00:29:09.470 Reset Timeout: 7500 ms 00:29:09.470 Doorbell Stride: 4 bytes 00:29:09.470 NVM Subsystem Reset: Not Supported 00:29:09.470 Command Sets Supported 00:29:09.470 NVM Command Set: Supported 00:29:09.470 Boot Partition: Not Supported 00:29:09.470 Memory Page Size Minimum: 4096 bytes 00:29:09.470 Memory Page Size Maximum: 4096 bytes 00:29:09.470 Persistent Memory Region: Not Supported 00:29:09.470 Optional Asynchronous Events Supported 00:29:09.470 Namespace Attribute Notices: Not Supported 00:29:09.470 Firmware Activation Notices: Not Supported 00:29:09.470 ANA Change Notices: Not Supported 00:29:09.470 PLE Aggregate Log Change Notices: Not Supported 00:29:09.470 LBA Status Info Alert Notices: Not Supported 00:29:09.470 EGE Aggregate Log Change Notices: Not Supported 00:29:09.470 Normal NVM Subsystem Shutdown event: Not Supported 00:29:09.470 Zone Descriptor Change Notices: Not Supported 00:29:09.470 Discovery Log Change Notices: Supported 00:29:09.470 Controller Attributes 00:29:09.470 128-bit Host Identifier: Not Supported 00:29:09.470 Non-Operational Permissive Mode: Not Supported 00:29:09.470 NVM 
Sets: Not Supported 00:29:09.470 Read Recovery Levels: Not Supported 00:29:09.470 Endurance Groups: Not Supported 00:29:09.470 Predictable Latency Mode: Not Supported 00:29:09.470 Traffic Based Keep ALive: Not Supported 00:29:09.470 Namespace Granularity: Not Supported 00:29:09.470 SQ Associations: Not Supported 00:29:09.470 UUID List: Not Supported 00:29:09.470 Multi-Domain Subsystem: Not Supported 00:29:09.470 Fixed Capacity Management: Not Supported 00:29:09.470 Variable Capacity Management: Not Supported 00:29:09.470 Delete Endurance Group: Not Supported 00:29:09.470 Delete NVM Set: Not Supported 00:29:09.470 Extended LBA Formats Supported: Not Supported 00:29:09.470 Flexible Data Placement Supported: Not Supported 00:29:09.470 00:29:09.470 Controller Memory Buffer Support 00:29:09.470 ================================ 00:29:09.470 Supported: No 00:29:09.470 00:29:09.470 Persistent Memory Region Support 00:29:09.470 ================================ 00:29:09.470 Supported: No 00:29:09.470 00:29:09.470 Admin Command Set Attributes 00:29:09.470 ============================ 00:29:09.470 Security Send/Receive: Not Supported 00:29:09.470 Format NVM: Not Supported 00:29:09.470 Firmware Activate/Download: Not Supported 00:29:09.470 Namespace Management: Not Supported 00:29:09.470 Device Self-Test: Not Supported 00:29:09.470 Directives: Not Supported 00:29:09.470 NVMe-MI: Not Supported 00:29:09.470 Virtualization Management: Not Supported 00:29:09.470 Doorbell Buffer Config: Not Supported 00:29:09.470 Get LBA Status Capability: Not Supported 00:29:09.470 Command & Feature Lockdown Capability: Not Supported 00:29:09.470 Abort Command Limit: 1 00:29:09.470 Async Event Request Limit: 1 00:29:09.470 Number of Firmware Slots: N/A 00:29:09.470 Firmware Slot 1 Read-Only: N/A 00:29:09.470 Firmware Activation Without Reset: N/A 00:29:09.470 Multiple Update Detection Support: N/A 00:29:09.470 Firmware Update Granularity: No Information Provided 00:29:09.470 Per-Namespace SMART 
Log: No 00:29:09.470 Asymmetric Namespace Access Log Page: Not Supported 00:29:09.470 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:29:09.470 Command Effects Log Page: Not Supported 00:29:09.470 Get Log Page Extended Data: Supported 00:29:09.470 Telemetry Log Pages: Not Supported 00:29:09.470 Persistent Event Log Pages: Not Supported 00:29:09.470 Supported Log Pages Log Page: May Support 00:29:09.470 Commands Supported & Effects Log Page: Not Supported 00:29:09.470 Feature Identifiers & Effects Log Page:May Support 00:29:09.470 NVMe-MI Commands & Effects Log Page: May Support 00:29:09.470 Data Area 4 for Telemetry Log: Not Supported 00:29:09.470 Error Log Page Entries Supported: 1 00:29:09.470 Keep Alive: Not Supported 00:29:09.470 00:29:09.470 NVM Command Set Attributes 00:29:09.470 ========================== 00:29:09.470 Submission Queue Entry Size 00:29:09.470 Max: 1 00:29:09.470 Min: 1 00:29:09.470 Completion Queue Entry Size 00:29:09.470 Max: 1 00:29:09.470 Min: 1 00:29:09.470 Number of Namespaces: 0 00:29:09.470 Compare Command: Not Supported 00:29:09.470 Write Uncorrectable Command: Not Supported 00:29:09.470 Dataset Management Command: Not Supported 00:29:09.470 Write Zeroes Command: Not Supported 00:29:09.470 Set Features Save Field: Not Supported 00:29:09.470 Reservations: Not Supported 00:29:09.470 Timestamp: Not Supported 00:29:09.470 Copy: Not Supported 00:29:09.470 Volatile Write Cache: Not Present 00:29:09.470 Atomic Write Unit (Normal): 1 00:29:09.470 Atomic Write Unit (PFail): 1 00:29:09.470 Atomic Compare & Write Unit: 1 00:29:09.470 Fused Compare & Write: Not Supported 00:29:09.470 Scatter-Gather List 00:29:09.470 SGL Command Set: Supported 00:29:09.470 SGL Keyed: Not Supported 00:29:09.470 SGL Bit Bucket Descriptor: Not Supported 00:29:09.470 SGL Metadata Pointer: Not Supported 00:29:09.470 Oversized SGL: Not Supported 00:29:09.470 SGL Metadata Address: Not Supported 00:29:09.470 SGL Offset: Supported 00:29:09.471 Transport SGL Data 
Block: Not Supported 00:29:09.471 Replay Protected Memory Block: Not Supported 00:29:09.471 00:29:09.471 Firmware Slot Information 00:29:09.471 ========================= 00:29:09.471 Active slot: 0 00:29:09.471 00:29:09.471 00:29:09.471 Error Log 00:29:09.471 ========= 00:29:09.471 00:29:09.471 Active Namespaces 00:29:09.471 ================= 00:29:09.471 Discovery Log Page 00:29:09.471 ================== 00:29:09.471 Generation Counter: 2 00:29:09.471 Number of Records: 2 00:29:09.471 Record Format: 0 00:29:09.471 00:29:09.471 Discovery Log Entry 0 00:29:09.471 ---------------------- 00:29:09.471 Transport Type: 3 (TCP) 00:29:09.471 Address Family: 1 (IPv4) 00:29:09.471 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:09.471 Entry Flags: 00:29:09.471 Duplicate Returned Information: 0 00:29:09.471 Explicit Persistent Connection Support for Discovery: 0 00:29:09.471 Transport Requirements: 00:29:09.471 Secure Channel: Not Specified 00:29:09.471 Port ID: 1 (0x0001) 00:29:09.471 Controller ID: 65535 (0xffff) 00:29:09.471 Admin Max SQ Size: 32 00:29:09.471 Transport Service Identifier: 4420 00:29:09.471 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:09.471 Transport Address: 10.0.0.1 00:29:09.471 Discovery Log Entry 1 00:29:09.471 ---------------------- 00:29:09.471 Transport Type: 3 (TCP) 00:29:09.471 Address Family: 1 (IPv4) 00:29:09.471 Subsystem Type: 2 (NVM Subsystem) 00:29:09.471 Entry Flags: 00:29:09.471 Duplicate Returned Information: 0 00:29:09.471 Explicit Persistent Connection Support for Discovery: 0 00:29:09.471 Transport Requirements: 00:29:09.471 Secure Channel: Not Specified 00:29:09.471 Port ID: 1 (0x0001) 00:29:09.471 Controller ID: 65535 (0xffff) 00:29:09.471 Admin Max SQ Size: 32 00:29:09.471 Transport Service Identifier: 4420 00:29:09.471 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:29:09.471 Transport Address: 10.0.0.1 00:29:09.471 05:24:46 -- host/identify_kernel_nvmf.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:09.471 EAL: No free 2048 kB hugepages reported on node 1 00:29:09.471 get_feature(0x01) failed 00:29:09.471 get_feature(0x02) failed 00:29:09.471 get_feature(0x04) failed 00:29:09.471 ===================================================== 00:29:09.471 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:09.471 ===================================================== 00:29:09.471 Controller Capabilities/Features 00:29:09.471 ================================ 00:29:09.471 Vendor ID: 0000 00:29:09.471 Subsystem Vendor ID: 0000 00:29:09.471 Serial Number: 47214407986405daa51d 00:29:09.471 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:29:09.471 Firmware Version: 6.7.0-68 00:29:09.471 Recommended Arb Burst: 6 00:29:09.471 IEEE OUI Identifier: 00 00 00 00:29:09.471 Multi-path I/O 00:29:09.471 May have multiple subsystem ports: Yes 00:29:09.471 May have multiple controllers: Yes 00:29:09.471 Associated with SR-IOV VF: No 00:29:09.471 Max Data Transfer Size: Unlimited 00:29:09.471 Max Number of Namespaces: 1024 00:29:09.471 Max Number of I/O Queues: 128 00:29:09.471 NVMe Specification Version (VS): 1.3 00:29:09.471 NVMe Specification Version (Identify): 1.3 00:29:09.471 Maximum Queue Entries: 1024 00:29:09.471 Contiguous Queues Required: No 00:29:09.471 Arbitration Mechanisms Supported 00:29:09.471 Weighted Round Robin: Not Supported 00:29:09.471 Vendor Specific: Not Supported 00:29:09.471 Reset Timeout: 7500 ms 00:29:09.471 Doorbell Stride: 4 bytes 00:29:09.471 NVM Subsystem Reset: Not Supported 00:29:09.471 Command Sets Supported 00:29:09.471 NVM Command Set: Supported 00:29:09.471 Boot Partition: Not Supported 00:29:09.471 Memory Page Size Minimum: 4096 bytes 00:29:09.471 Memory Page Size Maximum: 4096 bytes 00:29:09.471 Persistent Memory Region: Not Supported 
00:29:09.471 Optional Asynchronous Events Supported 00:29:09.471 Namespace Attribute Notices: Supported 00:29:09.471 Firmware Activation Notices: Not Supported 00:29:09.471 ANA Change Notices: Supported 00:29:09.471 PLE Aggregate Log Change Notices: Not Supported 00:29:09.471 LBA Status Info Alert Notices: Not Supported 00:29:09.471 EGE Aggregate Log Change Notices: Not Supported 00:29:09.471 Normal NVM Subsystem Shutdown event: Not Supported 00:29:09.471 Zone Descriptor Change Notices: Not Supported 00:29:09.471 Discovery Log Change Notices: Not Supported 00:29:09.471 Controller Attributes 00:29:09.471 128-bit Host Identifier: Supported 00:29:09.471 Non-Operational Permissive Mode: Not Supported 00:29:09.471 NVM Sets: Not Supported 00:29:09.471 Read Recovery Levels: Not Supported 00:29:09.471 Endurance Groups: Not Supported 00:29:09.471 Predictable Latency Mode: Not Supported 00:29:09.471 Traffic Based Keep ALive: Supported 00:29:09.471 Namespace Granularity: Not Supported 00:29:09.471 SQ Associations: Not Supported 00:29:09.471 UUID List: Not Supported 00:29:09.471 Multi-Domain Subsystem: Not Supported 00:29:09.471 Fixed Capacity Management: Not Supported 00:29:09.471 Variable Capacity Management: Not Supported 00:29:09.471 Delete Endurance Group: Not Supported 00:29:09.471 Delete NVM Set: Not Supported 00:29:09.471 Extended LBA Formats Supported: Not Supported 00:29:09.471 Flexible Data Placement Supported: Not Supported 00:29:09.471 00:29:09.471 Controller Memory Buffer Support 00:29:09.471 ================================ 00:29:09.471 Supported: No 00:29:09.471 00:29:09.471 Persistent Memory Region Support 00:29:09.471 ================================ 00:29:09.471 Supported: No 00:29:09.471 00:29:09.471 Admin Command Set Attributes 00:29:09.471 ============================ 00:29:09.471 Security Send/Receive: Not Supported 00:29:09.471 Format NVM: Not Supported 00:29:09.471 Firmware Activate/Download: Not Supported 00:29:09.471 Namespace Management: Not 
Supported 00:29:09.471 Device Self-Test: Not Supported 00:29:09.471 Directives: Not Supported 00:29:09.471 NVMe-MI: Not Supported 00:29:09.471 Virtualization Management: Not Supported 00:29:09.471 Doorbell Buffer Config: Not Supported 00:29:09.471 Get LBA Status Capability: Not Supported 00:29:09.471 Command & Feature Lockdown Capability: Not Supported 00:29:09.471 Abort Command Limit: 4 00:29:09.471 Async Event Request Limit: 4 00:29:09.471 Number of Firmware Slots: N/A 00:29:09.471 Firmware Slot 1 Read-Only: N/A 00:29:09.471 Firmware Activation Without Reset: N/A 00:29:09.471 Multiple Update Detection Support: N/A 00:29:09.471 Firmware Update Granularity: No Information Provided 00:29:09.471 Per-Namespace SMART Log: Yes 00:29:09.471 Asymmetric Namespace Access Log Page: Supported 00:29:09.471 ANA Transition Time : 10 sec 00:29:09.471 00:29:09.471 Asymmetric Namespace Access Capabilities 00:29:09.471 ANA Optimized State : Supported 00:29:09.471 ANA Non-Optimized State : Supported 00:29:09.471 ANA Inaccessible State : Supported 00:29:09.471 ANA Persistent Loss State : Supported 00:29:09.471 ANA Change State : Supported 00:29:09.471 ANAGRPID is not changed : No 00:29:09.471 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:29:09.471 00:29:09.471 ANA Group Identifier Maximum : 128 00:29:09.471 Number of ANA Group Identifiers : 128 00:29:09.471 Max Number of Allowed Namespaces : 1024 00:29:09.471 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:29:09.471 Command Effects Log Page: Supported 00:29:09.471 Get Log Page Extended Data: Supported 00:29:09.471 Telemetry Log Pages: Not Supported 00:29:09.471 Persistent Event Log Pages: Not Supported 00:29:09.471 Supported Log Pages Log Page: May Support 00:29:09.471 Commands Supported & Effects Log Page: Not Supported 00:29:09.471 Feature Identifiers & Effects Log Page:May Support 00:29:09.471 NVMe-MI Commands & Effects Log Page: May Support 00:29:09.471 Data Area 4 for Telemetry Log: Not Supported 00:29:09.471 Error Log Page 
Entries Supported: 128 00:29:09.471 Keep Alive: Supported 00:29:09.471 Keep Alive Granularity: 1000 ms 00:29:09.471 00:29:09.471 NVM Command Set Attributes 00:29:09.471 ========================== 00:29:09.471 Submission Queue Entry Size 00:29:09.471 Max: 64 00:29:09.471 Min: 64 00:29:09.471 Completion Queue Entry Size 00:29:09.471 Max: 16 00:29:09.471 Min: 16 00:29:09.471 Number of Namespaces: 1024 00:29:09.471 Compare Command: Not Supported 00:29:09.471 Write Uncorrectable Command: Not Supported 00:29:09.471 Dataset Management Command: Supported 00:29:09.471 Write Zeroes Command: Supported 00:29:09.471 Set Features Save Field: Not Supported 00:29:09.471 Reservations: Not Supported 00:29:09.471 Timestamp: Not Supported 00:29:09.471 Copy: Not Supported 00:29:09.471 Volatile Write Cache: Present 00:29:09.471 Atomic Write Unit (Normal): 1 00:29:09.472 Atomic Write Unit (PFail): 1 00:29:09.472 Atomic Compare & Write Unit: 1 00:29:09.472 Fused Compare & Write: Not Supported 00:29:09.472 Scatter-Gather List 00:29:09.472 SGL Command Set: Supported 00:29:09.472 SGL Keyed: Not Supported 00:29:09.472 SGL Bit Bucket Descriptor: Not Supported 00:29:09.472 SGL Metadata Pointer: Not Supported 00:29:09.472 Oversized SGL: Not Supported 00:29:09.472 SGL Metadata Address: Not Supported 00:29:09.472 SGL Offset: Supported 00:29:09.472 Transport SGL Data Block: Not Supported 00:29:09.472 Replay Protected Memory Block: Not Supported 00:29:09.472 00:29:09.472 Firmware Slot Information 00:29:09.472 ========================= 00:29:09.472 Active slot: 0 00:29:09.472 00:29:09.472 Asymmetric Namespace Access 00:29:09.472 =========================== 00:29:09.472 Change Count : 0 00:29:09.472 Number of ANA Group Descriptors : 1 00:29:09.472 ANA Group Descriptor : 0 00:29:09.472 ANA Group ID : 1 00:29:09.472 Number of NSID Values : 1 00:29:09.472 Change Count : 0 00:29:09.472 ANA State : 1 00:29:09.472 Namespace Identifier : 1 00:29:09.472 00:29:09.472 Commands Supported and Effects 00:29:09.472 
============================== 00:29:09.472 Admin Commands 00:29:09.472 -------------- 00:29:09.472 Get Log Page (02h): Supported 00:29:09.472 Identify (06h): Supported 00:29:09.472 Abort (08h): Supported 00:29:09.472 Set Features (09h): Supported 00:29:09.472 Get Features (0Ah): Supported 00:29:09.472 Asynchronous Event Request (0Ch): Supported 00:29:09.472 Keep Alive (18h): Supported 00:29:09.472 I/O Commands 00:29:09.472 ------------ 00:29:09.472 Flush (00h): Supported 00:29:09.472 Write (01h): Supported LBA-Change 00:29:09.472 Read (02h): Supported 00:29:09.472 Write Zeroes (08h): Supported LBA-Change 00:29:09.472 Dataset Management (09h): Supported 00:29:09.472 00:29:09.472 Error Log 00:29:09.472 ========= 00:29:09.472 Entry: 0 00:29:09.472 Error Count: 0x3 00:29:09.472 Submission Queue Id: 0x0 00:29:09.472 Command Id: 0x5 00:29:09.472 Phase Bit: 0 00:29:09.472 Status Code: 0x2 00:29:09.472 Status Code Type: 0x0 00:29:09.472 Do Not Retry: 1 00:29:09.472 Error Location: 0x28 00:29:09.472 LBA: 0x0 00:29:09.472 Namespace: 0x0 00:29:09.472 Vendor Log Page: 0x0 00:29:09.472 ----------- 00:29:09.472 Entry: 1 00:29:09.472 Error Count: 0x2 00:29:09.472 Submission Queue Id: 0x0 00:29:09.472 Command Id: 0x5 00:29:09.472 Phase Bit: 0 00:29:09.472 Status Code: 0x2 00:29:09.472 Status Code Type: 0x0 00:29:09.472 Do Not Retry: 1 00:29:09.472 Error Location: 0x28 00:29:09.472 LBA: 0x0 00:29:09.472 Namespace: 0x0 00:29:09.472 Vendor Log Page: 0x0 00:29:09.472 ----------- 00:29:09.472 Entry: 2 00:29:09.472 Error Count: 0x1 00:29:09.472 Submission Queue Id: 0x0 00:29:09.472 Command Id: 0x4 00:29:09.472 Phase Bit: 0 00:29:09.472 Status Code: 0x2 00:29:09.472 Status Code Type: 0x0 00:29:09.472 Do Not Retry: 1 00:29:09.472 Error Location: 0x28 00:29:09.472 LBA: 0x0 00:29:09.472 Namespace: 0x0 00:29:09.472 Vendor Log Page: 0x0 00:29:09.472 00:29:09.472 Number of Queues 00:29:09.472 ================ 00:29:09.472 Number of I/O Submission Queues: 128 00:29:09.472 Number of I/O 
Completion Queues: 128 00:29:09.472 00:29:09.472 ZNS Specific Controller Data 00:29:09.472 ============================ 00:29:09.472 Zone Append Size Limit: 0 00:29:09.472 00:29:09.472 00:29:09.472 Active Namespaces 00:29:09.472 ================= 00:29:09.472 get_feature(0x05) failed 00:29:09.472 Namespace ID:1 00:29:09.472 Command Set Identifier: NVM (00h) 00:29:09.472 Deallocate: Supported 00:29:09.472 Deallocated/Unwritten Error: Not Supported 00:29:09.472 Deallocated Read Value: Unknown 00:29:09.472 Deallocate in Write Zeroes: Not Supported 00:29:09.472 Deallocated Guard Field: 0xFFFF 00:29:09.472 Flush: Supported 00:29:09.472 Reservation: Not Supported 00:29:09.472 Namespace Sharing Capabilities: Multiple Controllers 00:29:09.472 Size (in LBAs): 1953525168 (931GiB) 00:29:09.472 Capacity (in LBAs): 1953525168 (931GiB) 00:29:09.472 Utilization (in LBAs): 1953525168 (931GiB) 00:29:09.472 UUID: fff07c29-08b4-4e52-9559-3eab7f894eab 00:29:09.472 Thin Provisioning: Not Supported 00:29:09.472 Per-NS Atomic Units: Yes 00:29:09.472 Atomic Boundary Size (Normal): 0 00:29:09.472 Atomic Boundary Size (PFail): 0 00:29:09.472 Atomic Boundary Offset: 0 00:29:09.472 NGUID/EUI64 Never Reused: No 00:29:09.472 ANA group ID: 1 00:29:09.472 Namespace Write Protected: No 00:29:09.472 Number of LBA Formats: 1 00:29:09.472 Current LBA Format: LBA Format #00 00:29:09.472 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:09.472 00:29:09.472 05:24:46 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:29:09.472 05:24:46 -- nvmf/common.sh@477 -- # nvmfcleanup 00:29:09.472 05:24:46 -- nvmf/common.sh@117 -- # sync 00:29:09.472 05:24:46 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:09.472 05:24:46 -- nvmf/common.sh@120 -- # set +e 00:29:09.472 05:24:46 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:09.472 05:24:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:09.472 rmmod nvme_tcp 00:29:09.472 rmmod nvme_fabrics 00:29:09.472 05:24:46 -- nvmf/common.sh@123 -- # 
modprobe -v -r nvme-fabrics 00:29:09.472 05:24:46 -- nvmf/common.sh@124 -- # set -e 00:29:09.472 05:24:46 -- nvmf/common.sh@125 -- # return 0 00:29:09.472 05:24:46 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:29:09.472 05:24:46 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:29:09.472 05:24:46 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:09.472 05:24:46 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:09.472 05:24:46 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:09.472 05:24:46 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:09.472 05:24:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:09.472 05:24:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:09.472 05:24:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:12.012 05:24:48 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:12.012 05:24:48 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:29:12.012 05:24:48 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:29:12.012 05:24:48 -- nvmf/common.sh@675 -- # echo 0 00:29:12.012 05:24:48 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:12.012 05:24:48 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:12.012 05:24:48 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:12.012 05:24:48 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:12.012 05:24:48 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:29:12.012 05:24:48 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:29:12.012 05:24:48 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:12.950 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:29:12.950 0000:00:04.6 (8086 
0e26): ioatdma -> vfio-pci 00:29:12.950 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:29:12.950 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:29:12.950 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:29:12.950 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:29:12.950 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:29:12.950 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:29:12.950 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:29:12.950 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:29:12.950 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:29:12.950 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:29:12.950 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:29:12.950 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:29:12.950 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:29:12.950 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:29:13.889 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:29:14.148 00:29:14.148 real 0m9.213s 00:29:14.148 user 0m1.871s 00:29:14.148 sys 0m3.310s 00:29:14.148 05:24:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:14.148 05:24:51 -- common/autotest_common.sh@10 -- # set +x 00:29:14.148 ************************************ 00:29:14.148 END TEST nvmf_identify_kernel_target 00:29:14.148 ************************************ 00:29:14.148 05:24:51 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:29:14.148 05:24:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:14.148 05:24:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:14.148 05:24:51 -- common/autotest_common.sh@10 -- # set +x 00:29:14.148 ************************************ 00:29:14.148 START TEST nvmf_auth 00:29:14.148 ************************************ 00:29:14.148 05:24:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:29:14.148 * Looking for test storage... 
00:29:14.148 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:14.148 05:24:51 -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:14.148 05:24:51 -- nvmf/common.sh@7 -- # uname -s 00:29:14.148 05:24:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:14.148 05:24:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:14.148 05:24:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:14.148 05:24:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:14.148 05:24:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:14.148 05:24:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:14.148 05:24:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:14.148 05:24:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:14.148 05:24:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:14.148 05:24:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:14.148 05:24:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:14.148 05:24:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:14.148 05:24:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:14.148 05:24:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:14.148 05:24:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:14.148 05:24:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:14.148 05:24:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:14.149 05:24:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:14.149 05:24:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:14.149 05:24:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:14.149 05:24:51 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.149 05:24:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.149 05:24:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.149 05:24:51 -- paths/export.sh@5 -- # export PATH 00:29:14.149 05:24:51 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.149 05:24:51 -- nvmf/common.sh@47 -- # : 0 00:29:14.149 05:24:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:14.149 05:24:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:14.149 05:24:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:14.149 05:24:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:14.149 05:24:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:14.149 05:24:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:14.149 05:24:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:14.149 05:24:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:14.149 05:24:51 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:29:14.149 05:24:51 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:29:14.149 05:24:51 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:29:14.149 05:24:51 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:29:14.149 05:24:51 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:14.149 05:24:51 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:14.149 05:24:51 -- host/auth.sh@21 -- # keys=() 00:29:14.149 05:24:51 -- host/auth.sh@77 -- # nvmftestinit 00:29:14.149 05:24:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:29:14.149 05:24:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:29:14.149 05:24:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:29:14.149 05:24:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:29:14.149 05:24:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:29:14.149 05:24:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:14.149 05:24:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:14.149 05:24:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.149 05:24:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:29:14.149 05:24:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:29:14.149 05:24:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:14.149 05:24:51 -- common/autotest_common.sh@10 -- # set +x 00:29:16.051 05:24:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:16.051 05:24:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:16.051 05:24:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:16.051 05:24:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:16.051 05:24:53 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:16.051 05:24:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:16.051 05:24:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:16.051 05:24:53 -- nvmf/common.sh@295 -- # net_devs=() 00:29:16.051 05:24:53 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:16.051 05:24:53 -- nvmf/common.sh@296 -- # e810=() 00:29:16.051 05:24:53 -- nvmf/common.sh@296 -- # local -ga e810 00:29:16.051 05:24:53 -- nvmf/common.sh@297 -- # x722=() 00:29:16.051 05:24:53 -- nvmf/common.sh@297 -- # local -ga x722 00:29:16.051 05:24:53 -- nvmf/common.sh@298 -- # mlx=() 00:29:16.051 05:24:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:16.051 05:24:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:16.051 05:24:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:16.052 05:24:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:29:16.052 05:24:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:16.052 05:24:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:16.052 05:24:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:16.052 05:24:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:16.052 05:24:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:16.052 05:24:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:16.052 05:24:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:16.052 05:24:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:16.052 05:24:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:16.052 05:24:53 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:16.052 05:24:53 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:16.052 05:24:53 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:16.052 05:24:53 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:16.052 05:24:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:16.052 05:24:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:16.052 05:24:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:16.052 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:16.052 05:24:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:16.052 05:24:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:16.052 05:24:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:16.052 05:24:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:16.052 05:24:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:16.052 05:24:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:16.052 05:24:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:16.052 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:16.052 05:24:53 -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:16.052 05:24:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:16.052 05:24:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:16.052 05:24:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:16.052 05:24:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:16.052 05:24:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:16.052 05:24:53 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:16.052 05:24:53 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:16.052 05:24:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:16.052 05:24:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:16.052 05:24:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:29:16.052 05:24:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:16.052 05:24:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:16.052 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:16.052 05:24:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:29:16.052 05:24:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:16.052 05:24:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:16.052 05:24:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:29:16.052 05:24:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:16.052 05:24:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:16.052 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:16.052 05:24:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:29:16.052 05:24:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:29:16.052 05:24:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:29:16.052 05:24:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:29:16.052 05:24:53 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:29:16.052 05:24:53 -- nvmf/common.sh@407 -- # nvmf_tcp_init 
00:29:16.052 05:24:53 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:16.052 05:24:53 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:16.052 05:24:53 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:16.052 05:24:53 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:16.052 05:24:53 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:16.052 05:24:53 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:16.052 05:24:53 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:16.052 05:24:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:16.052 05:24:53 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:16.052 05:24:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:16.052 05:24:53 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:16.052 05:24:53 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:16.052 05:24:53 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:16.310 05:24:53 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:16.310 05:24:53 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:16.310 05:24:53 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:16.310 05:24:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:16.310 05:24:53 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:16.310 05:24:53 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:16.310 05:24:53 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:16.310 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:16.310 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:29:16.310 00:29:16.310 --- 10.0.0.2 ping statistics --- 00:29:16.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:16.310 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:29:16.310 05:24:53 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:16.310 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:16.310 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:29:16.310 00:29:16.310 --- 10.0.0.1 ping statistics --- 00:29:16.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:16.310 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:29:16.310 05:24:53 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:16.310 05:24:53 -- nvmf/common.sh@411 -- # return 0 00:29:16.310 05:24:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:29:16.310 05:24:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:16.310 05:24:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:29:16.310 05:24:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:29:16.310 05:24:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:16.310 05:24:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:29:16.310 05:24:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:29:16.310 05:24:53 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:29:16.310 05:24:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:29:16.310 05:24:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:16.310 05:24:53 -- common/autotest_common.sh@10 -- # set +x 00:29:16.310 05:24:53 -- nvmf/common.sh@470 -- # nvmfpid=2002231 00:29:16.310 05:24:53 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:29:16.310 05:24:53 -- nvmf/common.sh@471 -- # waitforlisten 2002231 00:29:16.310 05:24:53 -- 
common/autotest_common.sh@817 -- # '[' -z 2002231 ']' 00:29:16.310 05:24:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:16.310 05:24:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:16.310 05:24:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:16.310 05:24:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:16.310 05:24:53 -- common/autotest_common.sh@10 -- # set +x 00:29:16.568 05:24:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:16.568 05:24:53 -- common/autotest_common.sh@850 -- # return 0 00:29:16.568 05:24:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:29:16.568 05:24:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:16.568 05:24:53 -- common/autotest_common.sh@10 -- # set +x 00:29:16.568 05:24:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:16.568 05:24:53 -- host/auth.sh@79 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:29:16.568 05:24:53 -- host/auth.sh@81 -- # gen_key null 32 00:29:16.568 05:24:53 -- host/auth.sh@53 -- # local digest len file key 00:29:16.568 05:24:53 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:16.568 05:24:53 -- host/auth.sh@54 -- # local -A digests 00:29:16.568 05:24:53 -- host/auth.sh@56 -- # digest=null 00:29:16.568 05:24:53 -- host/auth.sh@56 -- # len=32 00:29:16.568 05:24:53 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:16.568 05:24:53 -- host/auth.sh@57 -- # key=70a2751e1dc453b9e49ee01d283fe63b 00:29:16.568 05:24:53 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:29:16.568 05:24:53 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.qT0 00:29:16.568 05:24:53 -- host/auth.sh@59 -- # format_dhchap_key 70a2751e1dc453b9e49ee01d283fe63b 
0 00:29:16.568 05:24:53 -- nvmf/common.sh@708 -- # format_key DHHC-1 70a2751e1dc453b9e49ee01d283fe63b 0 00:29:16.568 05:24:53 -- nvmf/common.sh@691 -- # local prefix key digest 00:29:16.568 05:24:53 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:29:16.568 05:24:53 -- nvmf/common.sh@693 -- # key=70a2751e1dc453b9e49ee01d283fe63b 00:29:16.568 05:24:53 -- nvmf/common.sh@693 -- # digest=0 00:29:16.568 05:24:53 -- nvmf/common.sh@694 -- # python - 00:29:16.568 05:24:53 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.qT0 00:29:16.568 05:24:53 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.qT0 00:29:16.568 05:24:53 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.qT0 00:29:16.568 05:24:53 -- host/auth.sh@82 -- # gen_key null 48 00:29:16.568 05:24:53 -- host/auth.sh@53 -- # local digest len file key 00:29:16.568 05:24:53 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:16.568 05:24:53 -- host/auth.sh@54 -- # local -A digests 00:29:16.568 05:24:53 -- host/auth.sh@56 -- # digest=null 00:29:16.568 05:24:53 -- host/auth.sh@56 -- # len=48 00:29:16.568 05:24:53 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:16.568 05:24:53 -- host/auth.sh@57 -- # key=3c72344ea018d2a5a052fe084f02a7aa29ba285ee4a24ec3 00:29:16.568 05:24:53 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:29:16.568 05:24:53 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.FX4 00:29:16.568 05:24:53 -- host/auth.sh@59 -- # format_dhchap_key 3c72344ea018d2a5a052fe084f02a7aa29ba285ee4a24ec3 0 00:29:16.568 05:24:53 -- nvmf/common.sh@708 -- # format_key DHHC-1 3c72344ea018d2a5a052fe084f02a7aa29ba285ee4a24ec3 0 00:29:16.568 05:24:53 -- nvmf/common.sh@691 -- # local prefix key digest 00:29:16.568 05:24:53 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:29:16.568 05:24:53 -- nvmf/common.sh@693 -- # key=3c72344ea018d2a5a052fe084f02a7aa29ba285ee4a24ec3 00:29:16.568 05:24:53 -- nvmf/common.sh@693 -- # digest=0 00:29:16.568 05:24:53 -- nvmf/common.sh@694 -- # 
python - 00:29:16.568 05:24:53 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.FX4 00:29:16.827 05:24:53 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.FX4 00:29:16.827 05:24:53 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.FX4 00:29:16.827 05:24:53 -- host/auth.sh@83 -- # gen_key sha256 32 00:29:16.827 05:24:53 -- host/auth.sh@53 -- # local digest len file key 00:29:16.827 05:24:53 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:16.827 05:24:53 -- host/auth.sh@54 -- # local -A digests 00:29:16.827 05:24:53 -- host/auth.sh@56 -- # digest=sha256 00:29:16.827 05:24:53 -- host/auth.sh@56 -- # len=32 00:29:16.827 05:24:53 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:16.827 05:24:53 -- host/auth.sh@57 -- # key=4b96136b827ed4f8d13f4ace5e321001 00:29:16.827 05:24:53 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:29:16.827 05:24:53 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.G31 00:29:16.827 05:24:53 -- host/auth.sh@59 -- # format_dhchap_key 4b96136b827ed4f8d13f4ace5e321001 1 00:29:16.827 05:24:53 -- nvmf/common.sh@708 -- # format_key DHHC-1 4b96136b827ed4f8d13f4ace5e321001 1 00:29:16.827 05:24:53 -- nvmf/common.sh@691 -- # local prefix key digest 00:29:16.827 05:24:53 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:29:16.827 05:24:53 -- nvmf/common.sh@693 -- # key=4b96136b827ed4f8d13f4ace5e321001 00:29:16.827 05:24:53 -- nvmf/common.sh@693 -- # digest=1 00:29:16.827 05:24:53 -- nvmf/common.sh@694 -- # python - 00:29:16.827 05:24:53 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.G31 00:29:16.827 05:24:53 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.G31 00:29:16.827 05:24:53 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.G31 00:29:16.827 05:24:53 -- host/auth.sh@84 -- # gen_key sha384 48 00:29:16.827 05:24:53 -- host/auth.sh@53 -- # local digest len file key 00:29:16.827 05:24:53 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' 
['sha512']='3') 00:29:16.827 05:24:53 -- host/auth.sh@54 -- # local -A digests 00:29:16.827 05:24:53 -- host/auth.sh@56 -- # digest=sha384 00:29:16.827 05:24:53 -- host/auth.sh@56 -- # len=48 00:29:16.827 05:24:53 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:16.827 05:24:53 -- host/auth.sh@57 -- # key=434db081574f46bb0d680bc33671c0e963f3c03937ccf95f 00:29:16.827 05:24:53 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:29:16.827 05:24:53 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.vK3 00:29:16.827 05:24:53 -- host/auth.sh@59 -- # format_dhchap_key 434db081574f46bb0d680bc33671c0e963f3c03937ccf95f 2 00:29:16.827 05:24:53 -- nvmf/common.sh@708 -- # format_key DHHC-1 434db081574f46bb0d680bc33671c0e963f3c03937ccf95f 2 00:29:16.827 05:24:53 -- nvmf/common.sh@691 -- # local prefix key digest 00:29:16.827 05:24:53 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:29:16.827 05:24:53 -- nvmf/common.sh@693 -- # key=434db081574f46bb0d680bc33671c0e963f3c03937ccf95f 00:29:16.827 05:24:53 -- nvmf/common.sh@693 -- # digest=2 00:29:16.827 05:24:53 -- nvmf/common.sh@694 -- # python - 00:29:16.827 05:24:53 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.vK3 00:29:16.827 05:24:53 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.vK3 00:29:16.827 05:24:53 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.vK3 00:29:16.827 05:24:53 -- host/auth.sh@85 -- # gen_key sha512 64 00:29:16.827 05:24:53 -- host/auth.sh@53 -- # local digest len file key 00:29:16.827 05:24:53 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:16.827 05:24:53 -- host/auth.sh@54 -- # local -A digests 00:29:16.827 05:24:53 -- host/auth.sh@56 -- # digest=sha512 00:29:16.827 05:24:53 -- host/auth.sh@56 -- # len=64 00:29:16.827 05:24:53 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:29:16.827 05:24:53 -- host/auth.sh@57 -- # key=041d9b4174025419d4053c77087cb5316615d26257fd37833ef40ec4b6d6e20e 00:29:16.827 05:24:53 -- 
host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:29:16.827 05:24:53 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.72M 00:29:16.827 05:24:53 -- host/auth.sh@59 -- # format_dhchap_key 041d9b4174025419d4053c77087cb5316615d26257fd37833ef40ec4b6d6e20e 3 00:29:16.827 05:24:53 -- nvmf/common.sh@708 -- # format_key DHHC-1 041d9b4174025419d4053c77087cb5316615d26257fd37833ef40ec4b6d6e20e 3 00:29:16.827 05:24:53 -- nvmf/common.sh@691 -- # local prefix key digest 00:29:16.827 05:24:53 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:29:16.827 05:24:53 -- nvmf/common.sh@693 -- # key=041d9b4174025419d4053c77087cb5316615d26257fd37833ef40ec4b6d6e20e 00:29:16.827 05:24:53 -- nvmf/common.sh@693 -- # digest=3 00:29:16.827 05:24:53 -- nvmf/common.sh@694 -- # python - 00:29:16.827 05:24:53 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.72M 00:29:16.827 05:24:53 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.72M 00:29:16.827 05:24:53 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.72M 00:29:16.827 05:24:53 -- host/auth.sh@87 -- # waitforlisten 2002231 00:29:16.827 05:24:53 -- common/autotest_common.sh@817 -- # '[' -z 2002231 ']' 00:29:16.827 05:24:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:16.827 05:24:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:16.827 05:24:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:16.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:16.827 05:24:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:16.827 05:24:53 -- common/autotest_common.sh@10 -- # set +x 00:29:17.087 05:24:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:17.087 05:24:54 -- common/autotest_common.sh@850 -- # return 0 00:29:17.087 05:24:54 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:29:17.087 05:24:54 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.qT0 00:29:17.087 05:24:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:17.087 05:24:54 -- common/autotest_common.sh@10 -- # set +x 00:29:17.087 05:24:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:17.087 05:24:54 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:29:17.087 05:24:54 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.FX4 00:29:17.087 05:24:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:17.087 05:24:54 -- common/autotest_common.sh@10 -- # set +x 00:29:17.087 05:24:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:17.087 05:24:54 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:29:17.087 05:24:54 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.G31 00:29:17.087 05:24:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:17.087 05:24:54 -- common/autotest_common.sh@10 -- # set +x 00:29:17.087 05:24:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:17.087 05:24:54 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:29:17.087 05:24:54 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.vK3 00:29:17.087 05:24:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:17.087 05:24:54 -- common/autotest_common.sh@10 -- # set +x 00:29:17.087 05:24:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:17.087 05:24:54 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:29:17.087 05:24:54 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 
/tmp/spdk.key-sha512.72M 00:29:17.087 05:24:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:17.087 05:24:54 -- common/autotest_common.sh@10 -- # set +x 00:29:17.087 05:24:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:17.087 05:24:54 -- host/auth.sh@92 -- # nvmet_auth_init 00:29:17.087 05:24:54 -- host/auth.sh@35 -- # get_main_ns_ip 00:29:17.087 05:24:54 -- nvmf/common.sh@717 -- # local ip 00:29:17.087 05:24:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:17.087 05:24:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:17.087 05:24:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:17.087 05:24:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:17.087 05:24:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:17.087 05:24:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:17.087 05:24:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:17.087 05:24:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:17.087 05:24:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:17.087 05:24:54 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:29:17.087 05:24:54 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:29:17.087 05:24:54 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:29:17.087 05:24:54 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:17.087 05:24:54 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:17.087 05:24:54 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:17.087 05:24:54 -- nvmf/common.sh@628 -- # local block nvme 00:29:17.087 05:24:54 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:29:17.087 05:24:54 -- nvmf/common.sh@631 -- # modprobe nvmet 00:29:17.087 05:24:54 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:17.087 05:24:54 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:18.497 Waiting for block devices as requested 00:29:18.497 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:29:18.497 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:18.497 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:18.757 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:18.757 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:18.757 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:19.018 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:19.018 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:19.018 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:19.018 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:19.278 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:19.278 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:19.278 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:19.278 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:19.538 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:19.538 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:19.538 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:20.105 05:24:57 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:29:20.105 05:24:57 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:20.105 05:24:57 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:29:20.106 05:24:57 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:29:20.106 05:24:57 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:20.106 05:24:57 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:20.106 05:24:57 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:29:20.106 05:24:57 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:29:20.106 
05:24:57 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:20.106 No valid GPT data, bailing 00:29:20.106 05:24:57 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:20.106 05:24:57 -- scripts/common.sh@391 -- # pt= 00:29:20.106 05:24:57 -- scripts/common.sh@392 -- # return 1 00:29:20.106 05:24:57 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:29:20.106 05:24:57 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:29:20.106 05:24:57 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:20.106 05:24:57 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:20.106 05:24:57 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:20.106 05:24:57 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:29:20.106 05:24:57 -- nvmf/common.sh@656 -- # echo 1 00:29:20.106 05:24:57 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:29:20.106 05:24:57 -- nvmf/common.sh@658 -- # echo 1 00:29:20.106 05:24:57 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:29:20.106 05:24:57 -- nvmf/common.sh@661 -- # echo tcp 00:29:20.106 05:24:57 -- nvmf/common.sh@662 -- # echo 4420 00:29:20.106 05:24:57 -- nvmf/common.sh@663 -- # echo ipv4 00:29:20.106 05:24:57 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:20.106 05:24:57 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:29:20.106 00:29:20.106 Discovery Log Number of Records 2, Generation counter 2 00:29:20.106 =====Discovery Log Entry 0====== 00:29:20.106 trtype: tcp 00:29:20.106 adrfam: ipv4 00:29:20.106 subtype: current discovery subsystem 00:29:20.106 treq: not specified, sq flow control 
disable supported 00:29:20.106 portid: 1 00:29:20.106 trsvcid: 4420 00:29:20.106 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:20.106 traddr: 10.0.0.1 00:29:20.106 eflags: none 00:29:20.106 sectype: none 00:29:20.106 =====Discovery Log Entry 1====== 00:29:20.106 trtype: tcp 00:29:20.106 adrfam: ipv4 00:29:20.106 subtype: nvme subsystem 00:29:20.106 treq: not specified, sq flow control disable supported 00:29:20.106 portid: 1 00:29:20.106 trsvcid: 4420 00:29:20.106 subnqn: nqn.2024-02.io.spdk:cnode0 00:29:20.106 traddr: 10.0.0.1 00:29:20.106 eflags: none 00:29:20.106 sectype: none 00:29:20.106 05:24:57 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:20.106 05:24:57 -- host/auth.sh@37 -- # echo 0 00:29:20.106 05:24:57 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:20.106 05:24:57 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:20.106 05:24:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:20.106 05:24:57 -- host/auth.sh@44 -- # digest=sha256 00:29:20.106 05:24:57 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:20.106 05:24:57 -- host/auth.sh@44 -- # keyid=1 00:29:20.106 05:24:57 -- host/auth.sh@45 -- # key=DHHC-1:00:M2M3MjM0NGVhMDE4ZDJhNWEwNTJmZTA4NGYwMmE3YWEyOWJhMjg1ZWU0YTI0ZWMzw/BoVA==: 00:29:20.106 05:24:57 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:29:20.106 05:24:57 -- host/auth.sh@48 -- # echo ffdhe2048 00:29:20.106 05:24:57 -- host/auth.sh@49 -- # echo DHHC-1:00:M2M3MjM0NGVhMDE4ZDJhNWEwNTJmZTA4NGYwMmE3YWEyOWJhMjg1ZWU0YTI0ZWMzw/BoVA==: 00:29:20.106 05:24:57 -- host/auth.sh@100 -- # IFS=, 00:29:20.106 05:24:57 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:29:20.106 05:24:57 -- host/auth.sh@100 -- # IFS=, 00:29:20.106 05:24:57 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 
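
The `configure_kernel_target` trace above (plus the `mkdir`/`ln -s` from `host/auth.sh` lines 36-38) amounts to a standard configfs setup of the Linux kernel nvmet target. A minimal sketch of the same sequence — the NQNs, port, address, and backing device are taken from this log, but which attribute file each bare `echo` lands in is inferred from the kernel's nvmet configfs layout, and the real `setup.sh`/`common.sh` does more than this:

```shell
# Load the kernel NVMe-oF target and create subsystem/namespace/port nodes
modprobe nvmet
cd /sys/kernel/config/nvmet
mkdir -p subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
mkdir -p ports/1

# Back namespace 1 with the local NVMe block device and enable it
echo SPDK-nqn.2024-02.io.spdk:cnode0 > subsystems/nqn.2024-02.io.spdk:cnode0/attr_model
echo /dev/nvme0n1 > subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/device_path
echo 1            > subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable

# TCP listener on 10.0.0.1:4420, then expose the subsystem on that port
echo 10.0.0.1 > ports/1/addr_traddr
echo tcp      > ports/1/addr_trtype
echo 4420     > ports/1/addr_trsvcid
echo ipv4     > ports/1/addr_adrfam
ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 \
      ports/1/subsystems/

# auth.sh 36-38: register the host NQN, turn off allow_any_host,
# and whitelist the host on the subsystem (its DHCHAP key is then
# written to hosts/.../dhchap_key by nvmet_auth_set_key)
mkdir hosts/nqn.2024-02.io.spdk:host0
echo 0 > subsystems/nqn.2024-02.io.spdk:cnode0/attr_allow_any_host
ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 \
      subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
```

After this, `nvme discover -a 10.0.0.1 -t tcp -s 4420` returns the two discovery log entries seen above (the discovery subsystem itself plus `nqn.2024-02.io.spdk:cnode0`).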
00:29:20.106 05:24:57 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:29:20.106 05:24:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:20.106 05:24:57 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:29:20.106 05:24:57 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:20.106 05:24:57 -- host/auth.sh@68 -- # keyid=1 00:29:20.106 05:24:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:20.106 05:24:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:20.106 05:24:57 -- common/autotest_common.sh@10 -- # set +x 00:29:20.106 05:24:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:20.106 05:24:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:20.106 05:24:57 -- nvmf/common.sh@717 -- # local ip 00:29:20.106 05:24:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:20.106 05:24:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:20.106 05:24:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:20.106 05:24:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:20.106 05:24:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:20.106 05:24:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:20.106 05:24:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:20.106 05:24:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:20.106 05:24:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:20.106 05:24:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:29:20.106 05:24:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:20.106 05:24:57 -- common/autotest_common.sh@10 -- # set +x 00:29:20.367 nvme0n1 00:29:20.367 
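
The `DHHC-1:...` strings passed around in this trace follow the NVMe-oF in-band-auth secret representation: the literal prefix `DHHC-1`, a two-digit hash indicator (`00` = the secret is used as-is; `01`/`02`/`03` = it is transformed with SHA-256/384/512), and the base64 of the secret concatenated with its 4-byte CRC-32, all colon-delimited. A quick sanity check on key 1 from this log — the field semantics come from the NVMe DH-HMAC-CHAP specification, not from the test itself:

```shell
# Key 1 as it appears in the trace above
key='DHHC-1:00:M2M3MjM0NGVhMDE4ZDJhNWEwNTJmZTA4NGYwMmE3YWEyOWJhMjg1ZWU0YTI0ZWMzw/BoVA==:'

# Field 2 is the hash indicator, field 3 the base64 payload
hash_id=$(printf '%s' "$key" | cut -d: -f2)
secret_b64=$(printf '%s' "$key" | cut -d: -f3)

echo "hash indicator: $hash_id"           # prints "hash indicator: 00"
# 72 base64 chars decode to 52 bytes: a 48-byte secret plus 4-byte CRC-32
printf '%s' "$secret_b64" | base64 -d | wc -c
```

The same pattern explains why key ids 2, 3, and 4 later in the run carry `DHHC-1:01:`, `:02:`, and `:03:` prefixes: each one exercises a different secret-transform hash.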
05:24:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:20.367 05:24:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:20.367 05:24:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:20.367 05:24:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:20.367 05:24:57 -- common/autotest_common.sh@10 -- # set +x 00:29:20.367 05:24:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:20.367 05:24:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:20.367 05:24:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:20.367 05:24:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:20.367 05:24:57 -- common/autotest_common.sh@10 -- # set +x 00:29:20.367 05:24:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:20.367 05:24:57 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:29:20.367 05:24:57 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:29:20.367 05:24:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:20.367 05:24:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:29:20.367 05:24:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:20.367 05:24:57 -- host/auth.sh@44 -- # digest=sha256 00:29:20.367 05:24:57 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:20.367 05:24:57 -- host/auth.sh@44 -- # keyid=0 00:29:20.367 05:24:57 -- host/auth.sh@45 -- # key=DHHC-1:00:NzBhMjc1MWUxZGM0NTNiOWU0OWVlMDFkMjgzZmU2M2JE6qme: 00:29:20.367 05:24:57 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:29:20.367 05:24:57 -- host/auth.sh@48 -- # echo ffdhe2048 00:29:20.367 05:24:57 -- host/auth.sh@49 -- # echo DHHC-1:00:NzBhMjc1MWUxZGM0NTNiOWU0OWVlMDFkMjgzZmU2M2JE6qme: 00:29:20.367 05:24:57 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:29:20.367 05:24:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:20.367 05:24:57 -- host/auth.sh@68 -- # digest=sha256 00:29:20.367 05:24:57 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 
00:29:20.367 05:24:57 -- host/auth.sh@68 -- # keyid=0 00:29:20.367 05:24:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:20.367 05:24:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:20.367 05:24:57 -- common/autotest_common.sh@10 -- # set +x 00:29:20.367 05:24:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:20.367 05:24:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:20.367 05:24:57 -- nvmf/common.sh@717 -- # local ip 00:29:20.367 05:24:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:20.367 05:24:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:20.367 05:24:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:20.367 05:24:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:20.367 05:24:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:20.367 05:24:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:20.367 05:24:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:20.367 05:24:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:20.367 05:24:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:20.367 05:24:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:29:20.367 05:24:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:20.367 05:24:57 -- common/autotest_common.sh@10 -- # set +x 00:29:20.626 nvme0n1 00:29:20.626 05:24:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:20.626 05:24:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:20.626 05:24:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:20.626 05:24:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:20.626 05:24:57 -- common/autotest_common.sh@10 -- # set +x 00:29:20.626 05:24:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:20.626 05:24:57 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:20.626 05:24:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:20.626 05:24:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:20.626 05:24:57 -- common/autotest_common.sh@10 -- # set +x 00:29:20.626 05:24:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:20.626 05:24:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:20.626 05:24:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:20.626 05:24:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:20.626 05:24:57 -- host/auth.sh@44 -- # digest=sha256 00:29:20.626 05:24:57 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:20.626 05:24:57 -- host/auth.sh@44 -- # keyid=1 00:29:20.626 05:24:57 -- host/auth.sh@45 -- # key=DHHC-1:00:M2M3MjM0NGVhMDE4ZDJhNWEwNTJmZTA4NGYwMmE3YWEyOWJhMjg1ZWU0YTI0ZWMzw/BoVA==: 00:29:20.626 05:24:57 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:29:20.626 05:24:57 -- host/auth.sh@48 -- # echo ffdhe2048 00:29:20.626 05:24:57 -- host/auth.sh@49 -- # echo DHHC-1:00:M2M3MjM0NGVhMDE4ZDJhNWEwNTJmZTA4NGYwMmE3YWEyOWJhMjg1ZWU0YTI0ZWMzw/BoVA==: 00:29:20.626 05:24:57 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:29:20.626 05:24:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:20.626 05:24:57 -- host/auth.sh@68 -- # digest=sha256 00:29:20.627 05:24:57 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:29:20.627 05:24:57 -- host/auth.sh@68 -- # keyid=1 00:29:20.627 05:24:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:20.627 05:24:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:20.627 05:24:57 -- common/autotest_common.sh@10 -- # set +x 00:29:20.627 05:24:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:20.627 05:24:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:20.627 05:24:57 -- nvmf/common.sh@717 -- # local ip 00:29:20.627 05:24:57 -- 
nvmf/common.sh@718 -- # ip_candidates=() 00:29:20.627 05:24:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:20.627 05:24:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:20.627 05:24:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:20.627 05:24:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:20.627 05:24:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:20.627 05:24:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:20.627 05:24:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:20.627 05:24:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:20.627 05:24:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:29:20.627 05:24:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:20.627 05:24:57 -- common/autotest_common.sh@10 -- # set +x 00:29:20.887 nvme0n1 00:29:20.887 05:24:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:20.887 05:24:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:20.887 05:24:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:20.887 05:24:57 -- common/autotest_common.sh@10 -- # set +x 00:29:20.887 05:24:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:20.887 05:24:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:20.887 05:24:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:20.887 05:24:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:20.887 05:24:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:20.887 05:24:58 -- common/autotest_common.sh@10 -- # set +x 00:29:20.887 05:24:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:20.887 05:24:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:20.887 05:24:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:20.887 05:24:58 -- host/auth.sh@42 
-- # local digest dhgroup keyid key 00:29:20.887 05:24:58 -- host/auth.sh@44 -- # digest=sha256 00:29:20.887 05:24:58 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:20.887 05:24:58 -- host/auth.sh@44 -- # keyid=2 00:29:20.887 05:24:58 -- host/auth.sh@45 -- # key=DHHC-1:01:NGI5NjEzNmI4MjdlZDRmOGQxM2Y0YWNlNWUzMjEwMDGnQTeK: 00:29:20.887 05:24:58 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:29:20.887 05:24:58 -- host/auth.sh@48 -- # echo ffdhe2048 00:29:20.887 05:24:58 -- host/auth.sh@49 -- # echo DHHC-1:01:NGI5NjEzNmI4MjdlZDRmOGQxM2Y0YWNlNWUzMjEwMDGnQTeK: 00:29:20.887 05:24:58 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:29:20.887 05:24:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:20.887 05:24:58 -- host/auth.sh@68 -- # digest=sha256 00:29:20.887 05:24:58 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:29:20.887 05:24:58 -- host/auth.sh@68 -- # keyid=2 00:29:20.887 05:24:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:20.887 05:24:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:20.887 05:24:58 -- common/autotest_common.sh@10 -- # set +x 00:29:20.887 05:24:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:20.887 05:24:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:20.887 05:24:58 -- nvmf/common.sh@717 -- # local ip 00:29:20.887 05:24:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:20.887 05:24:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:20.887 05:24:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:20.887 05:24:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:20.887 05:24:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:20.887 05:24:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:20.887 05:24:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:20.887 05:24:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:20.887 05:24:58 -- nvmf/common.sh@731 
-- # echo 10.0.0.1 00:29:20.887 05:24:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:20.887 05:24:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:20.887 05:24:58 -- common/autotest_common.sh@10 -- # set +x 00:29:21.146 nvme0n1 00:29:21.146 05:24:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:21.146 05:24:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:21.146 05:24:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:21.146 05:24:58 -- common/autotest_common.sh@10 -- # set +x 00:29:21.146 05:24:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:21.146 05:24:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:21.146 05:24:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:21.146 05:24:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:21.146 05:24:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:21.146 05:24:58 -- common/autotest_common.sh@10 -- # set +x 00:29:21.146 05:24:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:21.146 05:24:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:21.146 05:24:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:29:21.146 05:24:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:21.146 05:24:58 -- host/auth.sh@44 -- # digest=sha256 00:29:21.146 05:24:58 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:21.146 05:24:58 -- host/auth.sh@44 -- # keyid=3 00:29:21.146 05:24:58 -- host/auth.sh@45 -- # key=DHHC-1:02:NDM0ZGIwODE1NzRmNDZiYjBkNjgwYmMzMzY3MWMwZTk2M2YzYzAzOTM3Y2NmOTVmKgVEjg==: 00:29:21.146 05:24:58 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:29:21.146 05:24:58 -- host/auth.sh@48 -- # echo ffdhe2048 00:29:21.146 05:24:58 -- host/auth.sh@49 -- # echo DHHC-1:02:NDM0ZGIwODE1NzRmNDZiYjBkNjgwYmMzMzY3MWMwZTk2M2YzYzAzOTM3Y2NmOTVmKgVEjg==: 
00:29:21.146 05:24:58 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:29:21.146 05:24:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:21.146 05:24:58 -- host/auth.sh@68 -- # digest=sha256 00:29:21.146 05:24:58 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:29:21.146 05:24:58 -- host/auth.sh@68 -- # keyid=3 00:29:21.146 05:24:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:21.146 05:24:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:21.146 05:24:58 -- common/autotest_common.sh@10 -- # set +x 00:29:21.147 05:24:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:21.147 05:24:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:21.147 05:24:58 -- nvmf/common.sh@717 -- # local ip 00:29:21.147 05:24:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:21.147 05:24:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:21.147 05:24:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:21.147 05:24:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:21.147 05:24:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:21.147 05:24:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:21.147 05:24:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:21.147 05:24:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:21.147 05:24:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:21.147 05:24:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:29:21.147 05:24:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:21.147 05:24:58 -- common/autotest_common.sh@10 -- # set +x 00:29:21.147 nvme0n1 00:29:21.147 05:24:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:21.147 05:24:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:21.147 05:24:58 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:29:21.147 05:24:58 -- common/autotest_common.sh@10 -- # set +x 00:29:21.147 05:24:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:21.405 05:24:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:21.405 05:24:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:21.405 05:24:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:21.405 05:24:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:21.405 05:24:58 -- common/autotest_common.sh@10 -- # set +x 00:29:21.405 05:24:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:21.405 05:24:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:21.405 05:24:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:29:21.405 05:24:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:21.405 05:24:58 -- host/auth.sh@44 -- # digest=sha256 00:29:21.405 05:24:58 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:21.405 05:24:58 -- host/auth.sh@44 -- # keyid=4 00:29:21.405 05:24:58 -- host/auth.sh@45 -- # key=DHHC-1:03:MDQxZDliNDE3NDAyNTQxOWQ0MDUzYzc3MDg3Y2I1MzE2NjE1ZDI2MjU3ZmQzNzgzM2VmNDBlYzRiNmQ2ZTIwZfSZwh0=: 00:29:21.405 05:24:58 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:29:21.405 05:24:58 -- host/auth.sh@48 -- # echo ffdhe2048 00:29:21.405 05:24:58 -- host/auth.sh@49 -- # echo DHHC-1:03:MDQxZDliNDE3NDAyNTQxOWQ0MDUzYzc3MDg3Y2I1MzE2NjE1ZDI2MjU3ZmQzNzgzM2VmNDBlYzRiNmQ2ZTIwZfSZwh0=: 00:29:21.405 05:24:58 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:29:21.405 05:24:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:21.405 05:24:58 -- host/auth.sh@68 -- # digest=sha256 00:29:21.405 05:24:58 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:29:21.405 05:24:58 -- host/auth.sh@68 -- # keyid=4 00:29:21.405 05:24:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:21.405 05:24:58 -- common/autotest_common.sh@549 -- 
# xtrace_disable 00:29:21.405 05:24:58 -- common/autotest_common.sh@10 -- # set +x 00:29:21.405 05:24:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:21.405 05:24:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:21.405 05:24:58 -- nvmf/common.sh@717 -- # local ip 00:29:21.405 05:24:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:21.405 05:24:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:21.405 05:24:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:21.405 05:24:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:21.405 05:24:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:21.405 05:24:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:21.405 05:24:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:21.405 05:24:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:21.405 05:24:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:21.406 05:24:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:21.406 05:24:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:21.406 05:24:58 -- common/autotest_common.sh@10 -- # set +x 00:29:21.406 nvme0n1 00:29:21.406 05:24:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:21.406 05:24:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:21.406 05:24:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:21.406 05:24:58 -- common/autotest_common.sh@10 -- # set +x 00:29:21.406 05:24:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:21.406 05:24:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:21.663 05:24:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:21.663 05:24:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:21.663 05:24:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:21.663 05:24:58 -- 
common/autotest_common.sh@10 -- # set +x 00:29:21.663 05:24:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:21.663 05:24:58 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:29:21.663 05:24:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:21.663 05:24:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:29:21.663 05:24:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:21.663 05:24:58 -- host/auth.sh@44 -- # digest=sha256 00:29:21.663 05:24:58 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:21.663 05:24:58 -- host/auth.sh@44 -- # keyid=0 00:29:21.663 05:24:58 -- host/auth.sh@45 -- # key=DHHC-1:00:NzBhMjc1MWUxZGM0NTNiOWU0OWVlMDFkMjgzZmU2M2JE6qme: 00:29:21.663 05:24:58 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:29:21.663 05:24:58 -- host/auth.sh@48 -- # echo ffdhe3072 00:29:21.663 05:24:58 -- host/auth.sh@49 -- # echo DHHC-1:00:NzBhMjc1MWUxZGM0NTNiOWU0OWVlMDFkMjgzZmU2M2JE6qme: 00:29:21.663 05:24:58 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:29:21.663 05:24:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:21.663 05:24:58 -- host/auth.sh@68 -- # digest=sha256 00:29:21.663 05:24:58 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:29:21.663 05:24:58 -- host/auth.sh@68 -- # keyid=0 00:29:21.663 05:24:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:21.663 05:24:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:21.663 05:24:58 -- common/autotest_common.sh@10 -- # set +x 00:29:21.663 05:24:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:21.663 05:24:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:21.663 05:24:58 -- nvmf/common.sh@717 -- # local ip 00:29:21.663 05:24:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:21.663 05:24:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:21.663 05:24:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:21.664 
05:24:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:21.664 05:24:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:21.664 05:24:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:21.664 05:24:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:21.664 05:24:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:21.664 05:24:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:21.664 05:24:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:29:21.664 05:24:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:21.664 05:24:58 -- common/autotest_common.sh@10 -- # set +x 00:29:21.664 nvme0n1 00:29:21.664 05:24:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:21.664 05:24:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:21.664 05:24:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:21.664 05:24:58 -- common/autotest_common.sh@10 -- # set +x 00:29:21.664 05:24:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:21.664 05:24:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:21.921 05:24:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:21.921 05:24:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:21.921 05:24:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:21.921 05:24:58 -- common/autotest_common.sh@10 -- # set +x 00:29:21.921 05:24:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:21.921 05:24:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:21.921 05:24:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:29:21.921 05:24:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:21.921 05:24:58 -- host/auth.sh@44 -- # digest=sha256 00:29:21.921 05:24:58 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:21.921 05:24:58 -- host/auth.sh@44 -- # keyid=1 
00:29:21.921 05:24:58 -- host/auth.sh@45 -- # key=DHHC-1:00:M2M3MjM0NGVhMDE4ZDJhNWEwNTJmZTA4NGYwMmE3YWEyOWJhMjg1ZWU0YTI0ZWMzw/BoVA==: 00:29:21.921 05:24:58 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:29:21.921 05:24:58 -- host/auth.sh@48 -- # echo ffdhe3072 00:29:21.921 05:24:58 -- host/auth.sh@49 -- # echo DHHC-1:00:M2M3MjM0NGVhMDE4ZDJhNWEwNTJmZTA4NGYwMmE3YWEyOWJhMjg1ZWU0YTI0ZWMzw/BoVA==: 00:29:21.921 05:24:58 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:29:21.921 05:24:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:21.921 05:24:58 -- host/auth.sh@68 -- # digest=sha256 00:29:21.921 05:24:58 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:29:21.921 05:24:58 -- host/auth.sh@68 -- # keyid=1 00:29:21.921 05:24:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:21.921 05:24:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:21.921 05:24:58 -- common/autotest_common.sh@10 -- # set +x 00:29:21.921 05:24:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:21.921 05:24:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:21.921 05:24:58 -- nvmf/common.sh@717 -- # local ip 00:29:21.921 05:24:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:21.921 05:24:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:21.921 05:24:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:21.921 05:24:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:21.921 05:24:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:21.921 05:24:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:21.921 05:24:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:21.921 05:24:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:21.921 05:24:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:21.921 05:24:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
00:29:21.921 05:24:58 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:21.921 05:24:58 -- common/autotest_common.sh@10 -- # set +x
00:29:21.921 nvme0n1
00:29:21.921 05:24:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:21.921 05:24:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:29:21.921 05:24:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:21.921 05:24:59 -- host/auth.sh@73 -- # jq -r '.[].name'
00:29:21.921 05:24:59 -- common/autotest_common.sh@10 -- # set +x
00:29:21.921 05:24:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:22.179 05:24:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:22.179 05:24:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:22.179 05:24:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:22.179 05:24:59 -- common/autotest_common.sh@10 -- # set +x
00:29:22.179 05:24:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:22.179 05:24:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:29:22.179 05:24:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2
00:29:22.179 05:24:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:29:22.179 05:24:59 -- host/auth.sh@44 -- # digest=sha256
00:29:22.179 05:24:59 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:29:22.179 05:24:59 -- host/auth.sh@44 -- # keyid=2
00:29:22.179 05:24:59 -- host/auth.sh@45 -- # key=DHHC-1:01:NGI5NjEzNmI4MjdlZDRmOGQxM2Y0YWNlNWUzMjEwMDGnQTeK:
00:29:22.179 05:24:59 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:29:22.179 05:24:59 -- host/auth.sh@48 -- # echo ffdhe3072
00:29:22.179 05:24:59 -- host/auth.sh@49 -- # echo DHHC-1:01:NGI5NjEzNmI4MjdlZDRmOGQxM2Y0YWNlNWUzMjEwMDGnQTeK:
00:29:22.179 05:24:59 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2
00:29:22.179 05:24:59 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:29:22.179 05:24:59 -- host/auth.sh@68 -- # digest=sha256
00:29:22.179 05:24:59 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:29:22.179 05:24:59 -- host/auth.sh@68 -- # keyid=2
00:29:22.179 05:24:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:29:22.179 05:24:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:22.179 05:24:59 -- common/autotest_common.sh@10 -- # set +x
00:29:22.179 05:24:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:22.179 05:24:59 -- host/auth.sh@70 -- # get_main_ns_ip
00:29:22.179 05:24:59 -- nvmf/common.sh@717 -- # local ip
00:29:22.179 05:24:59 -- nvmf/common.sh@718 -- # ip_candidates=()
00:29:22.179 05:24:59 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:29:22.179 05:24:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:22.179 05:24:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:22.179 05:24:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:29:22.179 05:24:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:22.179 05:24:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:29:22.179 05:24:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:29:22.179 05:24:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:29:22.179 05:24:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:29:22.179 05:24:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:22.179 05:24:59 -- common/autotest_common.sh@10 -- # set +x
00:29:22.179 nvme0n1
00:29:22.179 05:24:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:22.179 05:24:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:29:22.179 05:24:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:22.179 05:24:59 -- host/auth.sh@73 -- # jq -r '.[].name'
00:29:22.179 05:24:59 -- common/autotest_common.sh@10 -- # set +x
00:29:22.179 05:24:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:22.179 05:24:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:22.179 05:24:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:22.179 05:24:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:22.179 05:24:59 -- common/autotest_common.sh@10 -- # set +x
00:29:22.438 05:24:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:22.438 05:24:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:29:22.438 05:24:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3
00:29:22.438 05:24:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:29:22.438 05:24:59 -- host/auth.sh@44 -- # digest=sha256
00:29:22.438 05:24:59 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:29:22.438 05:24:59 -- host/auth.sh@44 -- # keyid=3
00:29:22.438 05:24:59 -- host/auth.sh@45 -- # key=DHHC-1:02:NDM0ZGIwODE1NzRmNDZiYjBkNjgwYmMzMzY3MWMwZTk2M2YzYzAzOTM3Y2NmOTVmKgVEjg==:
00:29:22.438 05:24:59 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:29:22.438 05:24:59 -- host/auth.sh@48 -- # echo ffdhe3072
00:29:22.438 05:24:59 -- host/auth.sh@49 -- # echo DHHC-1:02:NDM0ZGIwODE1NzRmNDZiYjBkNjgwYmMzMzY3MWMwZTk2M2YzYzAzOTM3Y2NmOTVmKgVEjg==:
00:29:22.438 05:24:59 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3
00:29:22.438 05:24:59 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:29:22.438 05:24:59 -- host/auth.sh@68 -- # digest=sha256
00:29:22.438 05:24:59 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:29:22.438 05:24:59 -- host/auth.sh@68 -- # keyid=3
00:29:22.438 05:24:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:29:22.438 05:24:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:22.438 05:24:59 -- common/autotest_common.sh@10 -- # set +x
00:29:22.438 05:24:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:22.438 05:24:59 -- host/auth.sh@70 -- # get_main_ns_ip
00:29:22.438 05:24:59 -- nvmf/common.sh@717 -- # local ip
00:29:22.438 05:24:59 -- nvmf/common.sh@718 -- # ip_candidates=()
00:29:22.438 05:24:59 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:29:22.438 05:24:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:22.438 05:24:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:22.438 05:24:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:29:22.438 05:24:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:22.438 05:24:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:29:22.438 05:24:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:29:22.438 05:24:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:29:22.438 05:24:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
00:29:22.438 05:24:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:22.438 05:24:59 -- common/autotest_common.sh@10 -- # set +x
00:29:22.438 nvme0n1
00:29:22.438 05:24:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:22.438 05:24:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:29:22.438 05:24:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:22.438 05:24:59 -- host/auth.sh@73 -- # jq -r '.[].name'
00:29:22.438 05:24:59 -- common/autotest_common.sh@10 -- # set +x
00:29:22.438 05:24:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:22.438 05:24:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:22.438 05:24:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:22.438 05:24:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:22.438 05:24:59 -- common/autotest_common.sh@10 -- # set +x
00:29:22.438 05:24:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:22.438 05:24:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:29:22.438 05:24:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 4
00:29:22.438 05:24:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:29:22.438 05:24:59 -- host/auth.sh@44 -- # digest=sha256
00:29:22.438 05:24:59 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:29:22.438 05:24:59 -- host/auth.sh@44 -- # keyid=4
00:29:22.438 05:24:59 -- host/auth.sh@45 -- # key=DHHC-1:03:MDQxZDliNDE3NDAyNTQxOWQ0MDUzYzc3MDg3Y2I1MzE2NjE1ZDI2MjU3ZmQzNzgzM2VmNDBlYzRiNmQ2ZTIwZfSZwh0=:
00:29:22.438 05:24:59 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:29:22.438 05:24:59 -- host/auth.sh@48 -- # echo ffdhe3072
00:29:22.438 05:24:59 -- host/auth.sh@49 -- # echo DHHC-1:03:MDQxZDliNDE3NDAyNTQxOWQ0MDUzYzc3MDg3Y2I1MzE2NjE1ZDI2MjU3ZmQzNzgzM2VmNDBlYzRiNmQ2ZTIwZfSZwh0=:
00:29:22.438 05:24:59 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4
00:29:22.438 05:24:59 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:29:22.438 05:24:59 -- host/auth.sh@68 -- # digest=sha256
00:29:22.438 05:24:59 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:29:22.438 05:24:59 -- host/auth.sh@68 -- # keyid=4
00:29:22.438 05:24:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:29:22.438 05:24:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:22.697 05:24:59 -- common/autotest_common.sh@10 -- # set +x
00:29:22.698 05:24:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:22.698 05:24:59 -- host/auth.sh@70 -- # get_main_ns_ip
00:29:22.698 05:24:59 -- nvmf/common.sh@717 -- # local ip
00:29:22.698 05:24:59 -- nvmf/common.sh@718 -- # ip_candidates=()
00:29:22.698 05:24:59 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:29:22.698 05:24:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:22.698 05:24:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:22.698 05:24:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:29:22.698 05:24:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:22.698 05:24:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:29:22.698 05:24:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:29:22.698 05:24:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:29:22.698 05:24:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:29:22.698 05:24:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:22.698 05:24:59 -- common/autotest_common.sh@10 -- # set +x
00:29:22.698 nvme0n1
00:29:22.698 05:24:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:22.698 05:24:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:29:22.698 05:24:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:22.698 05:24:59 -- common/autotest_common.sh@10 -- # set +x
00:29:22.698 05:24:59 -- host/auth.sh@73 -- # jq -r '.[].name'
00:29:22.698 05:24:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:22.698 05:24:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:22.698 05:24:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:22.698 05:24:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:22.698 05:24:59 -- common/autotest_common.sh@10 -- # set +x
00:29:22.957 05:24:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:22.957 05:24:59 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}"
00:29:22.957 05:24:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:29:22.957 05:24:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 0
00:29:22.957 05:24:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:29:22.957 05:24:59 -- host/auth.sh@44 -- # digest=sha256
00:29:22.957 05:24:59 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:29:22.957 05:24:59 -- host/auth.sh@44 -- # keyid=0
00:29:22.957 05:24:59 -- host/auth.sh@45 -- # key=DHHC-1:00:NzBhMjc1MWUxZGM0NTNiOWU0OWVlMDFkMjgzZmU2M2JE6qme:
00:29:22.957 05:24:59 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:29:22.957 05:24:59 -- host/auth.sh@48 -- # echo ffdhe4096
00:29:22.957 05:24:59 -- host/auth.sh@49 -- # echo DHHC-1:00:NzBhMjc1MWUxZGM0NTNiOWU0OWVlMDFkMjgzZmU2M2JE6qme:
00:29:22.957 05:24:59 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0
00:29:22.957 05:24:59 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:29:22.957 05:24:59 -- host/auth.sh@68 -- # digest=sha256
00:29:22.957 05:24:59 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:29:22.957 05:24:59 -- host/auth.sh@68 -- # keyid=0
00:29:22.957 05:24:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:29:22.957 05:24:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:22.957 05:24:59 -- common/autotest_common.sh@10 -- # set +x
00:29:22.957 05:24:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:22.957 05:24:59 -- host/auth.sh@70 -- # get_main_ns_ip
00:29:22.957 05:24:59 -- nvmf/common.sh@717 -- # local ip
00:29:22.957 05:24:59 -- nvmf/common.sh@718 -- # ip_candidates=()
00:29:22.957 05:24:59 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:29:22.957 05:24:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:22.957 05:24:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:22.957 05:24:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:29:22.957 05:24:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:22.957 05:24:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:29:22.957 05:24:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:29:22.957 05:24:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:29:22.957 05:24:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
00:29:22.957 05:24:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:22.957 05:24:59 -- common/autotest_common.sh@10 -- # set +x
00:29:23.217 nvme0n1
00:29:23.217 05:25:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:23.217 05:25:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:29:23.217 05:25:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:23.217 05:25:00 -- host/auth.sh@73 -- # jq -r '.[].name'
00:29:23.217 05:25:00 -- common/autotest_common.sh@10 -- # set +x
00:29:23.217 05:25:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:23.217 05:25:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:23.217 05:25:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:23.217 05:25:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:23.217 05:25:00 -- common/autotest_common.sh@10 -- # set +x
00:29:23.217 05:25:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:23.217 05:25:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:29:23.217 05:25:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1
00:29:23.217 05:25:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:29:23.217 05:25:00 -- host/auth.sh@44 -- # digest=sha256
00:29:23.217 05:25:00 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:29:23.217 05:25:00 -- host/auth.sh@44 -- # keyid=1
00:29:23.217 05:25:00 -- host/auth.sh@45 -- # key=DHHC-1:00:M2M3MjM0NGVhMDE4ZDJhNWEwNTJmZTA4NGYwMmE3YWEyOWJhMjg1ZWU0YTI0ZWMzw/BoVA==:
00:29:23.217 05:25:00 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:29:23.217 05:25:00 -- host/auth.sh@48 -- # echo ffdhe4096
00:29:23.217 05:25:00 -- host/auth.sh@49 -- # echo DHHC-1:00:M2M3MjM0NGVhMDE4ZDJhNWEwNTJmZTA4NGYwMmE3YWEyOWJhMjg1ZWU0YTI0ZWMzw/BoVA==:
00:29:23.217 05:25:00 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1
00:29:23.217 05:25:00 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:29:23.217 05:25:00 -- host/auth.sh@68 -- # digest=sha256
00:29:23.217 05:25:00 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:29:23.217 05:25:00 -- host/auth.sh@68 -- # keyid=1
00:29:23.217 05:25:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:29:23.217 05:25:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:23.217 05:25:00 -- common/autotest_common.sh@10 -- # set +x
00:29:23.217 05:25:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:23.217 05:25:00 -- host/auth.sh@70 -- # get_main_ns_ip
00:29:23.217 05:25:00 -- nvmf/common.sh@717 -- # local ip
00:29:23.217 05:25:00 -- nvmf/common.sh@718 -- # ip_candidates=()
00:29:23.217 05:25:00 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:29:23.217 05:25:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:23.217 05:25:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:23.217 05:25:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:29:23.217 05:25:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:23.217 05:25:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:29:23.217 05:25:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:29:23.217 05:25:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:29:23.217 05:25:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
00:29:23.217 05:25:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:23.217 05:25:00 -- common/autotest_common.sh@10 -- # set +x
00:29:23.476 nvme0n1
00:29:23.476 05:25:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:23.476 05:25:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:29:23.476 05:25:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:23.476 05:25:00 -- common/autotest_common.sh@10 -- # set +x
00:29:23.476 05:25:00 -- host/auth.sh@73 -- # jq -r '.[].name'
00:29:23.476 05:25:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:23.476 05:25:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:23.476 05:25:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:23.476 05:25:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:23.476 05:25:00 -- common/autotest_common.sh@10 -- # set +x
00:29:23.476 05:25:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:23.476 05:25:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:29:23.476 05:25:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2
00:29:23.476 05:25:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:29:23.476 05:25:00 -- host/auth.sh@44 -- # digest=sha256
00:29:23.476 05:25:00 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:29:23.476 05:25:00 -- host/auth.sh@44 -- # keyid=2
00:29:23.476 05:25:00 -- host/auth.sh@45 -- # key=DHHC-1:01:NGI5NjEzNmI4MjdlZDRmOGQxM2Y0YWNlNWUzMjEwMDGnQTeK:
00:29:23.476 05:25:00 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:29:23.476 05:25:00 -- host/auth.sh@48 -- # echo ffdhe4096
00:29:23.476 05:25:00 -- host/auth.sh@49 -- # echo DHHC-1:01:NGI5NjEzNmI4MjdlZDRmOGQxM2Y0YWNlNWUzMjEwMDGnQTeK:
00:29:23.476 05:25:00 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2
00:29:23.476 05:25:00 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:29:23.476 05:25:00 -- host/auth.sh@68 -- # digest=sha256
00:29:23.476 05:25:00 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:29:23.476 05:25:00 -- host/auth.sh@68 -- # keyid=2
00:29:23.476 05:25:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:29:23.476 05:25:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:23.476 05:25:00 -- common/autotest_common.sh@10 -- # set +x
00:29:23.476 05:25:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:23.476 05:25:00 -- host/auth.sh@70 -- # get_main_ns_ip
00:29:23.476 05:25:00 -- nvmf/common.sh@717 -- # local ip
00:29:23.476 05:25:00 -- nvmf/common.sh@718 -- # ip_candidates=()
00:29:23.476 05:25:00 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:29:23.476 05:25:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:23.476 05:25:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:23.476 05:25:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:29:23.476 05:25:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:23.476 05:25:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:29:23.476 05:25:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:29:23.476 05:25:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:29:23.476 05:25:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:29:23.476 05:25:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:23.476 05:25:00 -- common/autotest_common.sh@10 -- # set +x
00:29:23.734 nvme0n1
00:29:23.734 05:25:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:23.734 05:25:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:29:23.734 05:25:00 -- host/auth.sh@73 -- # jq -r '.[].name'
00:29:23.734 05:25:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:23.734 05:25:00 -- common/autotest_common.sh@10 -- # set +x
00:29:23.734 05:25:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:23.993 05:25:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:23.993 05:25:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:23.993 05:25:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:23.993 05:25:01 -- common/autotest_common.sh@10 -- # set +x
00:29:23.993 05:25:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:23.993 05:25:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:29:23.993 05:25:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3
00:29:23.993 05:25:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:29:23.993 05:25:01 -- host/auth.sh@44 -- # digest=sha256
00:29:23.993 05:25:01 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:29:23.993 05:25:01 -- host/auth.sh@44 -- # keyid=3
00:29:23.993 05:25:01 -- host/auth.sh@45 -- # key=DHHC-1:02:NDM0ZGIwODE1NzRmNDZiYjBkNjgwYmMzMzY3MWMwZTk2M2YzYzAzOTM3Y2NmOTVmKgVEjg==:
00:29:23.993 05:25:01 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:29:23.993 05:25:01 -- host/auth.sh@48 -- # echo ffdhe4096
00:29:23.993 05:25:01 -- host/auth.sh@49 -- # echo DHHC-1:02:NDM0ZGIwODE1NzRmNDZiYjBkNjgwYmMzMzY3MWMwZTk2M2YzYzAzOTM3Y2NmOTVmKgVEjg==:
00:29:23.993 05:25:01 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3
00:29:23.993 05:25:01 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:29:23.993 05:25:01 -- host/auth.sh@68 -- # digest=sha256
00:29:23.993 05:25:01 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:29:23.993 05:25:01 -- host/auth.sh@68 -- # keyid=3
00:29:23.993 05:25:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:29:23.993 05:25:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:23.993 05:25:01 -- common/autotest_common.sh@10 -- # set +x
00:29:23.993 05:25:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:23.993 05:25:01 -- host/auth.sh@70 -- # get_main_ns_ip
00:29:23.993 05:25:01 -- nvmf/common.sh@717 -- # local ip
00:29:23.993 05:25:01 -- nvmf/common.sh@718 -- # ip_candidates=()
00:29:23.993 05:25:01 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:29:23.993 05:25:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:23.993 05:25:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:23.993 05:25:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:29:23.993 05:25:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:23.993 05:25:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:29:23.993 05:25:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:29:23.993 05:25:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:29:23.993 05:25:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
00:29:23.993 05:25:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:23.993 05:25:01 -- common/autotest_common.sh@10 -- # set +x
00:29:24.252 nvme0n1
00:29:24.252 05:25:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:24.252 05:25:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:29:24.252 05:25:01 -- host/auth.sh@73 -- # jq -r '.[].name'
00:29:24.252 05:25:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:24.252 05:25:01 -- common/autotest_common.sh@10 -- # set +x
00:29:24.252 05:25:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:24.252 05:25:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:24.252 05:25:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:24.252 05:25:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:24.252 05:25:01 -- common/autotest_common.sh@10 -- # set +x
00:29:24.252 05:25:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:24.252 05:25:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:29:24.253 05:25:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4
00:29:24.253 05:25:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:29:24.253 05:25:01 -- host/auth.sh@44 -- # digest=sha256
00:29:24.253 05:25:01 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:29:24.253 05:25:01 -- host/auth.sh@44 -- # keyid=4
00:29:24.253 05:25:01 -- host/auth.sh@45 -- # key=DHHC-1:03:MDQxZDliNDE3NDAyNTQxOWQ0MDUzYzc3MDg3Y2I1MzE2NjE1ZDI2MjU3ZmQzNzgzM2VmNDBlYzRiNmQ2ZTIwZfSZwh0=:
00:29:24.253 05:25:01 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:29:24.253 05:25:01 -- host/auth.sh@48 -- # echo ffdhe4096
00:29:24.253 05:25:01 -- host/auth.sh@49 -- # echo DHHC-1:03:MDQxZDliNDE3NDAyNTQxOWQ0MDUzYzc3MDg3Y2I1MzE2NjE1ZDI2MjU3ZmQzNzgzM2VmNDBlYzRiNmQ2ZTIwZfSZwh0=:
00:29:24.253 05:25:01 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4
00:29:24.253 05:25:01 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:29:24.253 05:25:01 -- host/auth.sh@68 -- # digest=sha256
00:29:24.253 05:25:01 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:29:24.253 05:25:01 -- host/auth.sh@68 -- # keyid=4
00:29:24.253 05:25:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:29:24.253 05:25:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:24.253 05:25:01 -- common/autotest_common.sh@10 -- # set +x
00:29:24.253 05:25:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:24.253 05:25:01 -- host/auth.sh@70 -- # get_main_ns_ip
00:29:24.253 05:25:01 -- nvmf/common.sh@717 -- # local ip
00:29:24.253 05:25:01 -- nvmf/common.sh@718 -- # ip_candidates=()
00:29:24.253 05:25:01 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:29:24.253 05:25:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:24.253 05:25:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:24.253 05:25:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:29:24.253 05:25:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:24.253 05:25:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:29:24.253 05:25:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:29:24.253 05:25:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:29:24.253 05:25:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:29:24.253 05:25:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:24.253 05:25:01 -- common/autotest_common.sh@10 -- # set +x
00:29:24.512 nvme0n1
00:29:24.512 05:25:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:24.512 05:25:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:29:24.512 05:25:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:24.512 05:25:01 -- host/auth.sh@73 -- # jq -r '.[].name'
00:29:24.512 05:25:01 -- common/autotest_common.sh@10 -- # set +x
00:29:24.512 05:25:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:24.512 05:25:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:24.512 05:25:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:24.512 05:25:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:24.512 05:25:01 -- common/autotest_common.sh@10 -- # set +x
00:29:24.512 05:25:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:24.512 05:25:01 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}"
00:29:24.512 05:25:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:29:24.512 05:25:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0
00:29:24.512 05:25:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:29:24.512 05:25:01 -- host/auth.sh@44 -- # digest=sha256
00:29:24.512 05:25:01 -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:29:24.512 05:25:01 -- host/auth.sh@44 -- # keyid=0
00:29:24.512 05:25:01 -- host/auth.sh@45 -- # key=DHHC-1:00:NzBhMjc1MWUxZGM0NTNiOWU0OWVlMDFkMjgzZmU2M2JE6qme:
00:29:24.512 05:25:01 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:29:24.512 05:25:01 -- host/auth.sh@48 -- # echo ffdhe6144
00:29:24.512 05:25:01 -- host/auth.sh@49 -- # echo DHHC-1:00:NzBhMjc1MWUxZGM0NTNiOWU0OWVlMDFkMjgzZmU2M2JE6qme:
00:29:24.512 05:25:01 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0
00:29:24.512 05:25:01 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:29:24.512 05:25:01 -- host/auth.sh@68 -- # digest=sha256
00:29:24.512 05:25:01 -- host/auth.sh@68 -- # dhgroup=ffdhe6144
00:29:24.512 05:25:01 -- host/auth.sh@68 -- # keyid=0
00:29:24.512 05:25:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:29:24.512 05:25:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:24.512 05:25:01 -- common/autotest_common.sh@10 -- # set +x
00:29:24.512 05:25:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:24.512 05:25:01 -- host/auth.sh@70 -- # get_main_ns_ip
00:29:24.512 05:25:01 -- nvmf/common.sh@717 -- # local ip
00:29:24.512 05:25:01 -- nvmf/common.sh@718 -- # ip_candidates=()
00:29:24.512 05:25:01 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:29:24.512 05:25:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:24.512 05:25:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:24.512 05:25:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:29:24.512 05:25:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:24.512 05:25:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:29:24.512 05:25:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:29:24.512 05:25:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:29:24.512 05:25:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
00:29:24.512 05:25:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:24.512 05:25:01 -- common/autotest_common.sh@10 -- # set +x
00:29:25.082 nvme0n1
00:29:25.082 05:25:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:25.082 05:25:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:29:25.082 05:25:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:25.082 05:25:02 -- common/autotest_common.sh@10 -- # set +x
00:29:25.082 05:25:02 -- host/auth.sh@73 -- # jq -r '.[].name'
00:29:25.082 05:25:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:25.082 05:25:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:25.082 05:25:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:25.082 05:25:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:25.082 05:25:02 -- common/autotest_common.sh@10 -- # set +x
00:29:25.341 05:25:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:25.341 05:25:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:29:25.341 05:25:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1
00:29:25.341 05:25:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:29:25.341 05:25:02 -- host/auth.sh@44 -- # digest=sha256
00:29:25.341 05:25:02 -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:29:25.341 05:25:02 -- host/auth.sh@44 -- # keyid=1
00:29:25.341 05:25:02 -- host/auth.sh@45 -- # key=DHHC-1:00:M2M3MjM0NGVhMDE4ZDJhNWEwNTJmZTA4NGYwMmE3YWEyOWJhMjg1ZWU0YTI0ZWMzw/BoVA==:
00:29:25.341 05:25:02 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:29:25.341 05:25:02 -- host/auth.sh@48 -- # echo ffdhe6144
00:29:25.341 05:25:02 -- host/auth.sh@49 -- # echo DHHC-1:00:M2M3MjM0NGVhMDE4ZDJhNWEwNTJmZTA4NGYwMmE3YWEyOWJhMjg1ZWU0YTI0ZWMzw/BoVA==:
00:29:25.341 05:25:02 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1
00:29:25.341 05:25:02 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:29:25.341 05:25:02 -- host/auth.sh@68 -- # digest=sha256
00:29:25.341 05:25:02 -- host/auth.sh@68 -- # dhgroup=ffdhe6144
00:29:25.341 05:25:02 -- host/auth.sh@68 -- # keyid=1
00:29:25.341 05:25:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:29:25.341 05:25:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:25.341 05:25:02 -- common/autotest_common.sh@10 -- # set +x
00:29:25.341 05:25:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:25.341 05:25:02 -- host/auth.sh@70 -- # get_main_ns_ip
00:29:25.341 05:25:02 -- nvmf/common.sh@717 -- # local ip
00:29:25.341 05:25:02 -- nvmf/common.sh@718 -- # ip_candidates=()
00:29:25.341 05:25:02 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:29:25.341 05:25:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:25.341 05:25:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:25.341 05:25:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:29:25.341 05:25:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:25.341 05:25:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:29:25.341 05:25:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:29:25.341 05:25:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:29:25.341 05:25:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
00:29:25.341 05:25:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:25.341 05:25:02 -- common/autotest_common.sh@10 -- # set +x
00:29:25.906 nvme0n1
00:29:25.906 05:25:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:25.906 05:25:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:29:25.906 05:25:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:25.906 05:25:02 -- common/autotest_common.sh@10 -- # set +x
00:29:25.906 05:25:02 -- host/auth.sh@73 -- # jq -r '.[].name'
00:29:25.906 05:25:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:25.906 05:25:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:25.906 05:25:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:25.906 05:25:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:25.906 05:25:02 -- common/autotest_common.sh@10 -- # set +x
00:29:25.906 05:25:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:25.906 05:25:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:29:25.906 05:25:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2
00:29:25.906 05:25:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:29:25.906 05:25:02 -- host/auth.sh@44 -- # digest=sha256
00:29:25.906 05:25:02 -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:29:25.906 05:25:02 -- host/auth.sh@44 -- # keyid=2
00:29:25.906 05:25:02 -- host/auth.sh@45 -- # key=DHHC-1:01:NGI5NjEzNmI4MjdlZDRmOGQxM2Y0YWNlNWUzMjEwMDGnQTeK:
00:29:25.906 05:25:02 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:29:25.906 05:25:02 -- host/auth.sh@48 -- # echo ffdhe6144
00:29:25.906 05:25:02 -- host/auth.sh@49 -- # echo DHHC-1:01:NGI5NjEzNmI4MjdlZDRmOGQxM2Y0YWNlNWUzMjEwMDGnQTeK:
00:29:25.906 05:25:02 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2
00:29:25.906 05:25:02 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:29:25.906 05:25:02 -- host/auth.sh@68 -- # digest=sha256
00:29:25.906 05:25:02 -- host/auth.sh@68 -- # dhgroup=ffdhe6144
00:29:25.906 05:25:02 -- host/auth.sh@68 -- # keyid=2
00:29:25.906 05:25:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:29:25.906 05:25:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:25.906 05:25:02 -- common/autotest_common.sh@10 -- # set +x
00:29:25.906 05:25:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:25.906 05:25:02 -- host/auth.sh@70 -- # get_main_ns_ip
00:29:25.906 05:25:02 -- nvmf/common.sh@717 -- # local ip
00:29:25.906 05:25:02 -- nvmf/common.sh@718 -- # ip_candidates=()
00:29:25.906 05:25:02 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:29:25.906 05:25:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:25.906 05:25:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:25.906 05:25:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:29:25.906 05:25:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:25.906 05:25:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:29:25.906 05:25:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:29:25.906 05:25:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:29:25.906 05:25:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:29:25.906 05:25:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:25.906 05:25:02 -- common/autotest_common.sh@10 -- # set +x
00:29:26.473 nvme0n1
00:29:26.473 05:25:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:26.473 05:25:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:29:26.473 05:25:03 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:26.473 05:25:03 -- host/auth.sh@73 -- # jq -r '.[].name'
00:29:26.473 05:25:03 -- common/autotest_common.sh@10 -- # set +x
00:29:26.473 05:25:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:26.473 05:25:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:26.473 05:25:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:26.473 05:25:03 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:26.473 05:25:03 -- common/autotest_common.sh@10 -- # set +x
00:29:26.473 05:25:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:26.473 05:25:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:29:26.473 05:25:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3
00:29:26.473 05:25:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:29:26.473 05:25:03 -- host/auth.sh@44 -- # digest=sha256
00:29:26.473 05:25:03 -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:29:26.473 05:25:03 -- host/auth.sh@44 -- # keyid=3
00:29:26.473 05:25:03 -- host/auth.sh@45 -- # key=DHHC-1:02:NDM0ZGIwODE1NzRmNDZiYjBkNjgwYmMzMzY3MWMwZTk2M2YzYzAzOTM3Y2NmOTVmKgVEjg==:
00:29:26.473 05:25:03 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:29:26.473 05:25:03 -- host/auth.sh@48 -- # echo ffdhe6144
00:29:26.473 05:25:03 -- host/auth.sh@49 -- # echo DHHC-1:02:NDM0ZGIwODE1NzRmNDZiYjBkNjgwYmMzMzY3MWMwZTk2M2YzYzAzOTM3Y2NmOTVmKgVEjg==:
00:29:26.473 05:25:03 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3
00:29:26.473 05:25:03 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:29:26.473 05:25:03 -- host/auth.sh@68 -- # digest=sha256
00:29:26.473 05:25:03 -- host/auth.sh@68 -- # dhgroup=ffdhe6144
00:29:26.473 05:25:03 -- host/auth.sh@68 -- # keyid=3
00:29:26.473 05:25:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:29:26.473 05:25:03 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:26.473 05:25:03 -- common/autotest_common.sh@10 -- # set +x
00:29:26.473 05:25:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:26.473 05:25:03 -- host/auth.sh@70 -- # get_main_ns_ip
00:29:26.473 05:25:03 -- nvmf/common.sh@717 -- # local ip
00:29:26.473 05:25:03 -- nvmf/common.sh@718 -- # ip_candidates=()
00:29:26.473 05:25:03 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:29:26.473 05:25:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:26.473 05:25:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:26.473 05:25:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:29:26.473 05:25:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:26.473 05:25:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:29:26.473 05:25:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:29:26.473 05:25:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:29:26.473 05:25:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
00:29:26.473 05:25:03 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:26.473 05:25:03 -- common/autotest_common.sh@10 -- # set +x
00:29:27.044 nvme0n1
00:29:27.044 05:25:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:27.044 05:25:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:29:27.044 05:25:04 -- host/auth.sh@73 -- # jq -r '.[].name'
00:29:27.044 05:25:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:27.044 05:25:04 -- common/autotest_common.sh@10 -- # set +x
00:29:27.044 05:25:04 -- common/autotest_common.sh@577
-- # [[ 0 == 0 ]] 00:29:27.044 05:25:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:27.044 05:25:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:27.044 05:25:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:27.044 05:25:04 -- common/autotest_common.sh@10 -- # set +x 00:29:27.044 05:25:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:27.044 05:25:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:27.044 05:25:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:29:27.044 05:25:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:27.044 05:25:04 -- host/auth.sh@44 -- # digest=sha256 00:29:27.044 05:25:04 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:27.044 05:25:04 -- host/auth.sh@44 -- # keyid=4 00:29:27.044 05:25:04 -- host/auth.sh@45 -- # key=DHHC-1:03:MDQxZDliNDE3NDAyNTQxOWQ0MDUzYzc3MDg3Y2I1MzE2NjE1ZDI2MjU3ZmQzNzgzM2VmNDBlYzRiNmQ2ZTIwZfSZwh0=: 00:29:27.044 05:25:04 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:29:27.044 05:25:04 -- host/auth.sh@48 -- # echo ffdhe6144 00:29:27.044 05:25:04 -- host/auth.sh@49 -- # echo DHHC-1:03:MDQxZDliNDE3NDAyNTQxOWQ0MDUzYzc3MDg3Y2I1MzE2NjE1ZDI2MjU3ZmQzNzgzM2VmNDBlYzRiNmQ2ZTIwZfSZwh0=: 00:29:27.044 05:25:04 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:29:27.044 05:25:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:27.044 05:25:04 -- host/auth.sh@68 -- # digest=sha256 00:29:27.044 05:25:04 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:29:27.044 05:25:04 -- host/auth.sh@68 -- # keyid=4 00:29:27.044 05:25:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:27.044 05:25:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:27.044 05:25:04 -- common/autotest_common.sh@10 -- # set +x 00:29:27.044 05:25:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:27.044 05:25:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:27.044 
05:25:04 -- nvmf/common.sh@717 -- # local ip 00:29:27.044 05:25:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:27.044 05:25:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:27.044 05:25:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:27.044 05:25:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:27.044 05:25:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:27.044 05:25:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:27.044 05:25:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:27.044 05:25:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:27.044 05:25:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:27.044 05:25:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:27.044 05:25:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:27.044 05:25:04 -- common/autotest_common.sh@10 -- # set +x 00:29:27.612 nvme0n1 00:29:27.612 05:25:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:27.612 05:25:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:27.612 05:25:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:27.612 05:25:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:27.612 05:25:04 -- common/autotest_common.sh@10 -- # set +x 00:29:27.612 05:25:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:27.612 05:25:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:27.612 05:25:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:27.612 05:25:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:27.612 05:25:04 -- common/autotest_common.sh@10 -- # set +x 00:29:27.612 05:25:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:27.612 05:25:04 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:29:27.612 05:25:04 -- host/auth.sh@109 -- # for 
keyid in "${!keys[@]}" 00:29:27.612 05:25:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:29:27.612 05:25:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:27.612 05:25:04 -- host/auth.sh@44 -- # digest=sha256 00:29:27.612 05:25:04 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:27.612 05:25:04 -- host/auth.sh@44 -- # keyid=0 00:29:27.612 05:25:04 -- host/auth.sh@45 -- # key=DHHC-1:00:NzBhMjc1MWUxZGM0NTNiOWU0OWVlMDFkMjgzZmU2M2JE6qme: 00:29:27.612 05:25:04 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:29:27.612 05:25:04 -- host/auth.sh@48 -- # echo ffdhe8192 00:29:27.612 05:25:04 -- host/auth.sh@49 -- # echo DHHC-1:00:NzBhMjc1MWUxZGM0NTNiOWU0OWVlMDFkMjgzZmU2M2JE6qme: 00:29:27.612 05:25:04 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:29:27.612 05:25:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:27.612 05:25:04 -- host/auth.sh@68 -- # digest=sha256 00:29:27.612 05:25:04 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:29:27.612 05:25:04 -- host/auth.sh@68 -- # keyid=0 00:29:27.612 05:25:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:27.612 05:25:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:27.612 05:25:04 -- common/autotest_common.sh@10 -- # set +x 00:29:27.612 05:25:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:27.612 05:25:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:27.612 05:25:04 -- nvmf/common.sh@717 -- # local ip 00:29:27.612 05:25:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:27.612 05:25:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:27.612 05:25:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:27.612 05:25:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:27.612 05:25:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:27.612 05:25:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:27.612 05:25:04 -- 
nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:27.612 05:25:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:27.612 05:25:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:27.612 05:25:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:29:27.612 05:25:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:27.612 05:25:04 -- common/autotest_common.sh@10 -- # set +x 00:29:28.545 nvme0n1 00:29:28.545 05:25:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:28.804 05:25:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:28.804 05:25:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:28.804 05:25:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:28.804 05:25:05 -- common/autotest_common.sh@10 -- # set +x 00:29:28.804 05:25:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:28.804 05:25:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:28.804 05:25:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:28.804 05:25:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:28.804 05:25:05 -- common/autotest_common.sh@10 -- # set +x 00:29:28.804 05:25:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:28.804 05:25:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:28.804 05:25:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:29:28.804 05:25:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:28.804 05:25:05 -- host/auth.sh@44 -- # digest=sha256 00:29:28.804 05:25:05 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:28.804 05:25:05 -- host/auth.sh@44 -- # keyid=1 00:29:28.804 05:25:05 -- host/auth.sh@45 -- # key=DHHC-1:00:M2M3MjM0NGVhMDE4ZDJhNWEwNTJmZTA4NGYwMmE3YWEyOWJhMjg1ZWU0YTI0ZWMzw/BoVA==: 00:29:28.804 05:25:05 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:29:28.804 05:25:05 -- host/auth.sh@48 
-- # echo ffdhe8192 00:29:28.804 05:25:05 -- host/auth.sh@49 -- # echo DHHC-1:00:M2M3MjM0NGVhMDE4ZDJhNWEwNTJmZTA4NGYwMmE3YWEyOWJhMjg1ZWU0YTI0ZWMzw/BoVA==: 00:29:28.804 05:25:05 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:29:28.804 05:25:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:28.804 05:25:05 -- host/auth.sh@68 -- # digest=sha256 00:29:28.804 05:25:05 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:29:28.804 05:25:05 -- host/auth.sh@68 -- # keyid=1 00:29:28.804 05:25:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:28.804 05:25:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:28.804 05:25:05 -- common/autotest_common.sh@10 -- # set +x 00:29:28.804 05:25:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:28.804 05:25:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:28.804 05:25:05 -- nvmf/common.sh@717 -- # local ip 00:29:28.804 05:25:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:28.804 05:25:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:28.804 05:25:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:28.804 05:25:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:28.804 05:25:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:28.804 05:25:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:28.804 05:25:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:28.804 05:25:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:28.804 05:25:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:28.804 05:25:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:29:28.804 05:25:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:28.804 05:25:05 -- common/autotest_common.sh@10 -- # set +x 00:29:29.740 nvme0n1 00:29:29.740 05:25:06 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:29.740 05:25:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:29.740 05:25:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:29.740 05:25:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:29.740 05:25:06 -- common/autotest_common.sh@10 -- # set +x 00:29:29.740 05:25:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:29.740 05:25:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:29.740 05:25:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:29.740 05:25:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:29.740 05:25:06 -- common/autotest_common.sh@10 -- # set +x 00:29:29.740 05:25:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:29.740 05:25:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:29.740 05:25:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:29:29.740 05:25:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:29.740 05:25:06 -- host/auth.sh@44 -- # digest=sha256 00:29:29.740 05:25:06 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:29.740 05:25:06 -- host/auth.sh@44 -- # keyid=2 00:29:29.740 05:25:06 -- host/auth.sh@45 -- # key=DHHC-1:01:NGI5NjEzNmI4MjdlZDRmOGQxM2Y0YWNlNWUzMjEwMDGnQTeK: 00:29:29.740 05:25:06 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:29:29.740 05:25:06 -- host/auth.sh@48 -- # echo ffdhe8192 00:29:29.740 05:25:06 -- host/auth.sh@49 -- # echo DHHC-1:01:NGI5NjEzNmI4MjdlZDRmOGQxM2Y0YWNlNWUzMjEwMDGnQTeK: 00:29:29.740 05:25:06 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:29:29.740 05:25:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:29.740 05:25:06 -- host/auth.sh@68 -- # digest=sha256 00:29:29.740 05:25:06 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:29:29.740 05:25:06 -- host/auth.sh@68 -- # keyid=2 00:29:29.740 05:25:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
ffdhe8192 00:29:29.740 05:25:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:29.740 05:25:06 -- common/autotest_common.sh@10 -- # set +x 00:29:29.740 05:25:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:29.740 05:25:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:29.740 05:25:06 -- nvmf/common.sh@717 -- # local ip 00:29:29.740 05:25:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:29.740 05:25:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:29.740 05:25:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:29.740 05:25:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:29.740 05:25:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:29.740 05:25:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:29.740 05:25:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:29.740 05:25:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:29.740 05:25:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:29.740 05:25:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:29.740 05:25:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:29.740 05:25:06 -- common/autotest_common.sh@10 -- # set +x 00:29:30.676 nvme0n1 00:29:30.676 05:25:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:30.676 05:25:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:30.676 05:25:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:30.676 05:25:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:30.676 05:25:07 -- common/autotest_common.sh@10 -- # set +x 00:29:30.676 05:25:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:30.676 05:25:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:30.676 05:25:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:30.676 05:25:07 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:29:30.676 05:25:07 -- common/autotest_common.sh@10 -- # set +x 00:29:30.676 05:25:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:30.676 05:25:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:30.676 05:25:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:29:30.676 05:25:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:30.676 05:25:07 -- host/auth.sh@44 -- # digest=sha256 00:29:30.676 05:25:07 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:30.676 05:25:07 -- host/auth.sh@44 -- # keyid=3 00:29:30.676 05:25:07 -- host/auth.sh@45 -- # key=DHHC-1:02:NDM0ZGIwODE1NzRmNDZiYjBkNjgwYmMzMzY3MWMwZTk2M2YzYzAzOTM3Y2NmOTVmKgVEjg==: 00:29:30.676 05:25:07 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:29:30.676 05:25:07 -- host/auth.sh@48 -- # echo ffdhe8192 00:29:30.676 05:25:07 -- host/auth.sh@49 -- # echo DHHC-1:02:NDM0ZGIwODE1NzRmNDZiYjBkNjgwYmMzMzY3MWMwZTk2M2YzYzAzOTM3Y2NmOTVmKgVEjg==: 00:29:30.676 05:25:07 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:29:30.676 05:25:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:30.676 05:25:07 -- host/auth.sh@68 -- # digest=sha256 00:29:30.676 05:25:07 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:29:30.676 05:25:07 -- host/auth.sh@68 -- # keyid=3 00:29:30.676 05:25:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:30.676 05:25:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:30.676 05:25:07 -- common/autotest_common.sh@10 -- # set +x 00:29:30.676 05:25:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:30.676 05:25:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:30.676 05:25:07 -- nvmf/common.sh@717 -- # local ip 00:29:30.676 05:25:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:30.676 05:25:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:30.676 05:25:07 -- nvmf/common.sh@720 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:30.676 05:25:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:30.676 05:25:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:30.676 05:25:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:30.676 05:25:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:30.676 05:25:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:30.676 05:25:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:30.676 05:25:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:29:30.676 05:25:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:30.676 05:25:07 -- common/autotest_common.sh@10 -- # set +x 00:29:31.642 nvme0n1 00:29:31.642 05:25:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:31.642 05:25:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:31.642 05:25:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:31.642 05:25:08 -- common/autotest_common.sh@10 -- # set +x 00:29:31.642 05:25:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:31.642 05:25:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:31.642 05:25:08 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:31.642 05:25:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:31.642 05:25:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:31.642 05:25:08 -- common/autotest_common.sh@10 -- # set +x 00:29:31.642 05:25:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:31.642 05:25:08 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:31.642 05:25:08 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:29:31.642 05:25:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:31.642 05:25:08 -- host/auth.sh@44 -- # digest=sha256 00:29:31.642 05:25:08 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 
00:29:31.642 05:25:08 -- host/auth.sh@44 -- # keyid=4 00:29:31.642 05:25:08 -- host/auth.sh@45 -- # key=DHHC-1:03:MDQxZDliNDE3NDAyNTQxOWQ0MDUzYzc3MDg3Y2I1MzE2NjE1ZDI2MjU3ZmQzNzgzM2VmNDBlYzRiNmQ2ZTIwZfSZwh0=: 00:29:31.642 05:25:08 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:29:31.642 05:25:08 -- host/auth.sh@48 -- # echo ffdhe8192 00:29:31.642 05:25:08 -- host/auth.sh@49 -- # echo DHHC-1:03:MDQxZDliNDE3NDAyNTQxOWQ0MDUzYzc3MDg3Y2I1MzE2NjE1ZDI2MjU3ZmQzNzgzM2VmNDBlYzRiNmQ2ZTIwZfSZwh0=: 00:29:31.643 05:25:08 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:29:31.643 05:25:08 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:31.643 05:25:08 -- host/auth.sh@68 -- # digest=sha256 00:29:31.643 05:25:08 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:29:31.643 05:25:08 -- host/auth.sh@68 -- # keyid=4 00:29:31.643 05:25:08 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:31.643 05:25:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:31.643 05:25:08 -- common/autotest_common.sh@10 -- # set +x 00:29:31.643 05:25:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:31.643 05:25:08 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:31.643 05:25:08 -- nvmf/common.sh@717 -- # local ip 00:29:31.643 05:25:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:31.643 05:25:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:31.643 05:25:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:31.643 05:25:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:31.643 05:25:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:31.643 05:25:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:31.643 05:25:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:31.643 05:25:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:31.643 05:25:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:31.643 05:25:08 -- host/auth.sh@70 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:31.643 05:25:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:31.643 05:25:08 -- common/autotest_common.sh@10 -- # set +x 00:29:32.578 nvme0n1 00:29:32.578 05:25:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:32.578 05:25:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:32.578 05:25:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:32.578 05:25:09 -- common/autotest_common.sh@10 -- # set +x 00:29:32.578 05:25:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:32.578 05:25:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:32.578 05:25:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.578 05:25:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:32.578 05:25:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:32.578 05:25:09 -- common/autotest_common.sh@10 -- # set +x 00:29:32.578 05:25:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:32.578 05:25:09 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:29:32.578 05:25:09 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:29:32.578 05:25:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:32.578 05:25:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:29:32.578 05:25:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:32.578 05:25:09 -- host/auth.sh@44 -- # digest=sha384 00:29:32.578 05:25:09 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:32.578 05:25:09 -- host/auth.sh@44 -- # keyid=0 00:29:32.578 05:25:09 -- host/auth.sh@45 -- # key=DHHC-1:00:NzBhMjc1MWUxZGM0NTNiOWU0OWVlMDFkMjgzZmU2M2JE6qme: 00:29:32.578 05:25:09 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:29:32.578 05:25:09 -- host/auth.sh@48 -- # echo ffdhe2048 00:29:32.578 05:25:09 -- host/auth.sh@49 -- # echo 
DHHC-1:00:NzBhMjc1MWUxZGM0NTNiOWU0OWVlMDFkMjgzZmU2M2JE6qme: 00:29:32.578 05:25:09 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:29:32.578 05:25:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:32.578 05:25:09 -- host/auth.sh@68 -- # digest=sha384 00:29:32.578 05:25:09 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:29:32.578 05:25:09 -- host/auth.sh@68 -- # keyid=0 00:29:32.578 05:25:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:32.578 05:25:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:32.578 05:25:09 -- common/autotest_common.sh@10 -- # set +x 00:29:32.578 05:25:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:32.578 05:25:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:32.578 05:25:09 -- nvmf/common.sh@717 -- # local ip 00:29:32.578 05:25:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:32.578 05:25:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:32.578 05:25:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:32.578 05:25:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:32.578 05:25:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:32.578 05:25:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:32.578 05:25:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:32.578 05:25:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:32.578 05:25:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:32.578 05:25:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:29:32.578 05:25:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:32.578 05:25:09 -- common/autotest_common.sh@10 -- # set +x 00:29:32.578 nvme0n1 00:29:32.578 05:25:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:32.578 05:25:09 -- host/auth.sh@73 -- # 
rpc_cmd bdev_nvme_get_controllers 00:29:32.578 05:25:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:32.578 05:25:09 -- common/autotest_common.sh@10 -- # set +x 00:29:32.578 05:25:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:32.578 05:25:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:32.578 05:25:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.578 05:25:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:32.578 05:25:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:32.578 05:25:09 -- common/autotest_common.sh@10 -- # set +x 00:29:32.578 05:25:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:32.579 05:25:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:32.579 05:25:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:29:32.579 05:25:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:32.579 05:25:09 -- host/auth.sh@44 -- # digest=sha384 00:29:32.579 05:25:09 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:32.579 05:25:09 -- host/auth.sh@44 -- # keyid=1 00:29:32.579 05:25:09 -- host/auth.sh@45 -- # key=DHHC-1:00:M2M3MjM0NGVhMDE4ZDJhNWEwNTJmZTA4NGYwMmE3YWEyOWJhMjg1ZWU0YTI0ZWMzw/BoVA==: 00:29:32.579 05:25:09 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:29:32.579 05:25:09 -- host/auth.sh@48 -- # echo ffdhe2048 00:29:32.579 05:25:09 -- host/auth.sh@49 -- # echo DHHC-1:00:M2M3MjM0NGVhMDE4ZDJhNWEwNTJmZTA4NGYwMmE3YWEyOWJhMjg1ZWU0YTI0ZWMzw/BoVA==: 00:29:32.579 05:25:09 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:29:32.579 05:25:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:32.579 05:25:09 -- host/auth.sh@68 -- # digest=sha384 00:29:32.579 05:25:09 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:29:32.579 05:25:09 -- host/auth.sh@68 -- # keyid=1 00:29:32.579 05:25:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:32.579 05:25:09 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:29:32.579 05:25:09 -- common/autotest_common.sh@10 -- # set +x 00:29:32.579 05:25:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:32.579 05:25:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:32.579 05:25:09 -- nvmf/common.sh@717 -- # local ip 00:29:32.579 05:25:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:32.579 05:25:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:32.579 05:25:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:32.579 05:25:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:32.579 05:25:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:32.579 05:25:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:32.579 05:25:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:32.579 05:25:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:32.579 05:25:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:32.579 05:25:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:29:32.579 05:25:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:32.579 05:25:09 -- common/autotest_common.sh@10 -- # set +x 00:29:32.839 nvme0n1 00:29:32.839 05:25:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:32.839 05:25:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:32.839 05:25:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:32.839 05:25:09 -- common/autotest_common.sh@10 -- # set +x 00:29:32.839 05:25:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:32.839 05:25:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:32.839 05:25:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.839 05:25:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:32.839 05:25:10 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:29:32.839 05:25:10 -- common/autotest_common.sh@10 -- # set +x
00:29:32.839 05:25:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:32.839 05:25:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:29:32.839 05:25:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2
00:29:32.839 05:25:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:29:32.839 05:25:10 -- host/auth.sh@44 -- # digest=sha384
00:29:32.839 05:25:10 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:29:32.839 05:25:10 -- host/auth.sh@44 -- # keyid=2
00:29:32.839 05:25:10 -- host/auth.sh@45 -- # key=DHHC-1:01:NGI5NjEzNmI4MjdlZDRmOGQxM2Y0YWNlNWUzMjEwMDGnQTeK:
00:29:32.839 05:25:10 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:29:32.839 05:25:10 -- host/auth.sh@48 -- # echo ffdhe2048
00:29:32.839 05:25:10 -- host/auth.sh@49 -- # echo DHHC-1:01:NGI5NjEzNmI4MjdlZDRmOGQxM2Y0YWNlNWUzMjEwMDGnQTeK:
00:29:32.839 05:25:10 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2
00:29:32.839 05:25:10 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:29:32.839 05:25:10 -- host/auth.sh@68 -- # digest=sha384
00:29:32.839 05:25:10 -- host/auth.sh@68 -- # dhgroup=ffdhe2048
00:29:32.839 05:25:10 -- host/auth.sh@68 -- # keyid=2
00:29:32.839 05:25:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:29:32.839 05:25:10 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:32.839 05:25:10 -- common/autotest_common.sh@10 -- # set +x
00:29:32.839 05:25:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:32.839 05:25:10 -- host/auth.sh@70 -- # get_main_ns_ip
00:29:32.839 05:25:10 -- nvmf/common.sh@717 -- # local ip
00:29:32.839 05:25:10 -- nvmf/common.sh@718 -- # ip_candidates=()
00:29:32.839 05:25:10 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:29:32.839 05:25:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:32.839 05:25:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:32.839 05:25:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:29:32.839 05:25:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:32.839 05:25:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:29:32.839 05:25:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:29:32.839 05:25:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:29:32.839 05:25:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:29:32.839 05:25:10 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:32.839 05:25:10 -- common/autotest_common.sh@10 -- # set +x
00:29:33.099 nvme0n1
00:29:33.099 05:25:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:33.099 05:25:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:29:33.099 05:25:10 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:33.099 05:25:10 -- common/autotest_common.sh@10 -- # set +x
00:29:33.099 05:25:10 -- host/auth.sh@73 -- # jq -r '.[].name'
00:29:33.099 05:25:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:33.099 05:25:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:33.099 05:25:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:33.099 05:25:10 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:33.099 05:25:10 -- common/autotest_common.sh@10 -- # set +x
00:29:33.099 05:25:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:33.099 05:25:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:29:33.099 05:25:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3
00:29:33.099 05:25:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:29:33.099 05:25:10 -- host/auth.sh@44 -- # digest=sha384
00:29:33.099 05:25:10 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:29:33.099 05:25:10 -- host/auth.sh@44 -- # keyid=3
00:29:33.099 05:25:10 -- host/auth.sh@45 -- # key=DHHC-1:02:NDM0ZGIwODE1NzRmNDZiYjBkNjgwYmMzMzY3MWMwZTk2M2YzYzAzOTM3Y2NmOTVmKgVEjg==:
00:29:33.099 05:25:10 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:29:33.099 05:25:10 -- host/auth.sh@48 -- # echo ffdhe2048
00:29:33.099 05:25:10 -- host/auth.sh@49 -- # echo DHHC-1:02:NDM0ZGIwODE1NzRmNDZiYjBkNjgwYmMzMzY3MWMwZTk2M2YzYzAzOTM3Y2NmOTVmKgVEjg==:
00:29:33.099 05:25:10 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 3
00:29:33.099 05:25:10 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:29:33.099 05:25:10 -- host/auth.sh@68 -- # digest=sha384
00:29:33.099 05:25:10 -- host/auth.sh@68 -- # dhgroup=ffdhe2048
00:29:33.099 05:25:10 -- host/auth.sh@68 -- # keyid=3
00:29:33.099 05:25:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:29:33.099 05:25:10 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:33.099 05:25:10 -- common/autotest_common.sh@10 -- # set +x
00:29:33.099 05:25:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:33.099 05:25:10 -- host/auth.sh@70 -- # get_main_ns_ip
00:29:33.099 05:25:10 -- nvmf/common.sh@717 -- # local ip
00:29:33.099 05:25:10 -- nvmf/common.sh@718 -- # ip_candidates=()
00:29:33.099 05:25:10 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:29:33.099 05:25:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:33.099 05:25:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:33.099 05:25:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:29:33.099 05:25:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:33.099 05:25:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:29:33.099 05:25:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:29:33.099 05:25:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:29:33.099 05:25:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
00:29:33.099 05:25:10 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:33.099 05:25:10 -- common/autotest_common.sh@10 -- # set +x
00:29:33.359 nvme0n1
00:29:33.359 05:25:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:33.359 05:25:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:29:33.359 05:25:10 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:33.360 05:25:10 -- common/autotest_common.sh@10 -- # set +x
00:29:33.360 05:25:10 -- host/auth.sh@73 -- # jq -r '.[].name'
00:29:33.360 05:25:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:33.360 05:25:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:33.360 05:25:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:33.360 05:25:10 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:33.360 05:25:10 -- common/autotest_common.sh@10 -- # set +x
00:29:33.360 05:25:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:33.360 05:25:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:29:33.360 05:25:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4
00:29:33.360 05:25:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:29:33.360 05:25:10 -- host/auth.sh@44 -- # digest=sha384
00:29:33.360 05:25:10 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:29:33.360 05:25:10 -- host/auth.sh@44 -- # keyid=4
00:29:33.360 05:25:10 -- host/auth.sh@45 -- # key=DHHC-1:03:MDQxZDliNDE3NDAyNTQxOWQ0MDUzYzc3MDg3Y2I1MzE2NjE1ZDI2MjU3ZmQzNzgzM2VmNDBlYzRiNmQ2ZTIwZfSZwh0=:
00:29:33.360 05:25:10 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:29:33.360 05:25:10 -- host/auth.sh@48 -- # echo ffdhe2048
00:29:33.360 05:25:10 -- host/auth.sh@49 -- # echo DHHC-1:03:MDQxZDliNDE3NDAyNTQxOWQ0MDUzYzc3MDg3Y2I1MzE2NjE1ZDI2MjU3ZmQzNzgzM2VmNDBlYzRiNmQ2ZTIwZfSZwh0=:
00:29:33.360 05:25:10 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4
00:29:33.360 05:25:10 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:29:33.360 05:25:10 -- host/auth.sh@68 -- # digest=sha384
00:29:33.360 05:25:10 -- host/auth.sh@68 -- # dhgroup=ffdhe2048
00:29:33.360 05:25:10 -- host/auth.sh@68 -- # keyid=4
00:29:33.360 05:25:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:29:33.360 05:25:10 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:33.360 05:25:10 -- common/autotest_common.sh@10 -- # set +x
00:29:33.360 05:25:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:33.360 05:25:10 -- host/auth.sh@70 -- # get_main_ns_ip
00:29:33.360 05:25:10 -- nvmf/common.sh@717 -- # local ip
00:29:33.360 05:25:10 -- nvmf/common.sh@718 -- # ip_candidates=()
00:29:33.360 05:25:10 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:29:33.360 05:25:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:33.360 05:25:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:33.360 05:25:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:29:33.360 05:25:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:33.360 05:25:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:29:33.360 05:25:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:29:33.360 05:25:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:29:33.360 05:25:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:29:33.360 05:25:10 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:33.360 05:25:10 -- common/autotest_common.sh@10 -- # set +x
00:29:33.618 nvme0n1
00:29:33.618 05:25:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:33.619 05:25:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:29:33.619 05:25:10 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:33.619 05:25:10 -- common/autotest_common.sh@10 -- # set +x
00:29:33.619 05:25:10 -- host/auth.sh@73 -- # jq -r '.[].name'
00:29:33.619 05:25:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:33.619 05:25:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:33.619 05:25:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:33.619 05:25:10 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:33.619 05:25:10 -- common/autotest_common.sh@10 -- # set +x
00:29:33.619 05:25:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:33.619 05:25:10 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}"
00:29:33.619 05:25:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:29:33.619 05:25:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0
00:29:33.619 05:25:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:29:33.619 05:25:10 -- host/auth.sh@44 -- # digest=sha384
00:29:33.619 05:25:10 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:29:33.619 05:25:10 -- host/auth.sh@44 -- # keyid=0
00:29:33.619 05:25:10 -- host/auth.sh@45 -- # key=DHHC-1:00:NzBhMjc1MWUxZGM0NTNiOWU0OWVlMDFkMjgzZmU2M2JE6qme:
00:29:33.619 05:25:10 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:29:33.619 05:25:10 -- host/auth.sh@48 -- # echo ffdhe3072
00:29:33.619 05:25:10 -- host/auth.sh@49 -- # echo DHHC-1:00:NzBhMjc1MWUxZGM0NTNiOWU0OWVlMDFkMjgzZmU2M2JE6qme:
00:29:33.619 05:25:10 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0
00:29:33.619 05:25:10 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:29:33.619 05:25:10 -- host/auth.sh@68 -- # digest=sha384
00:29:33.619 05:25:10 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:29:33.619 05:25:10 -- host/auth.sh@68 -- # keyid=0
00:29:33.619 05:25:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:29:33.619 05:25:10 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:33.619 05:25:10 -- common/autotest_common.sh@10 -- # set +x
00:29:33.619 05:25:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:33.619 05:25:10 -- host/auth.sh@70 -- # get_main_ns_ip
00:29:33.619 05:25:10 -- nvmf/common.sh@717 -- # local ip
00:29:33.619 05:25:10 -- nvmf/common.sh@718 -- # ip_candidates=()
00:29:33.619 05:25:10 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:29:33.619 05:25:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:33.619 05:25:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:33.619 05:25:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:29:33.619 05:25:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:33.619 05:25:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:29:33.619 05:25:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:29:33.619 05:25:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:29:33.619 05:25:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
00:29:33.619 05:25:10 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:33.619 05:25:10 -- common/autotest_common.sh@10 -- # set +x
00:29:33.876 nvme0n1
00:29:33.876 05:25:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:33.876 05:25:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:29:33.876 05:25:10 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:33.876 05:25:10 -- host/auth.sh@73 -- # jq -r '.[].name'
00:29:33.876 05:25:10 -- common/autotest_common.sh@10 -- # set +x
00:29:33.876 05:25:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:33.876 05:25:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:33.876 05:25:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:33.876 05:25:10 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:33.876 05:25:10 -- common/autotest_common.sh@10 -- # set +x
00:29:33.876 05:25:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:33.876 05:25:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:29:33.876 05:25:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1
00:29:33.876 05:25:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:29:33.876 05:25:10 -- host/auth.sh@44 -- # digest=sha384
00:29:33.876 05:25:10 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:29:33.877 05:25:10 -- host/auth.sh@44 -- # keyid=1
00:29:33.877 05:25:10 -- host/auth.sh@45 -- # key=DHHC-1:00:M2M3MjM0NGVhMDE4ZDJhNWEwNTJmZTA4NGYwMmE3YWEyOWJhMjg1ZWU0YTI0ZWMzw/BoVA==:
00:29:33.877 05:25:10 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:29:33.877 05:25:10 -- host/auth.sh@48 -- # echo ffdhe3072
00:29:33.877 05:25:10 -- host/auth.sh@49 -- # echo DHHC-1:00:M2M3MjM0NGVhMDE4ZDJhNWEwNTJmZTA4NGYwMmE3YWEyOWJhMjg1ZWU0YTI0ZWMzw/BoVA==:
00:29:33.877 05:25:10 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1
00:29:33.877 05:25:10 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:29:33.877 05:25:10 -- host/auth.sh@68 -- # digest=sha384
00:29:33.877 05:25:10 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:29:33.877 05:25:10 -- host/auth.sh@68 -- # keyid=1
00:29:33.877 05:25:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:29:33.877 05:25:10 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:33.877 05:25:10 -- common/autotest_common.sh@10 -- # set +x
00:29:33.877 05:25:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:33.877 05:25:10 -- host/auth.sh@70 -- # get_main_ns_ip
00:29:33.877 05:25:10 -- nvmf/common.sh@717 -- # local ip
00:29:33.877 05:25:10 -- nvmf/common.sh@718 -- # ip_candidates=()
00:29:33.877 05:25:10 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:29:33.877 05:25:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:33.877 05:25:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:33.877 05:25:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:29:33.877 05:25:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:33.877 05:25:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:29:33.877 05:25:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:29:33.877 05:25:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:29:33.877 05:25:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
00:29:33.877 05:25:10 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:33.877 05:25:10 -- common/autotest_common.sh@10 -- # set +x
00:29:34.134 nvme0n1
00:29:34.134 05:25:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:34.134 05:25:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:29:34.134 05:25:11 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:34.134 05:25:11 -- host/auth.sh@73 -- # jq -r '.[].name'
00:29:34.134 05:25:11 -- common/autotest_common.sh@10 -- # set +x
00:29:34.134 05:25:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:34.134 05:25:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:34.134 05:25:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:34.134 05:25:11 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:34.134 05:25:11 -- common/autotest_common.sh@10 -- # set +x
00:29:34.134 05:25:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:34.134 05:25:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:29:34.134 05:25:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2
00:29:34.134 05:25:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:29:34.134 05:25:11 -- host/auth.sh@44 -- # digest=sha384
00:29:34.134 05:25:11 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:29:34.134 05:25:11 -- host/auth.sh@44 -- # keyid=2
00:29:34.134 05:25:11 -- host/auth.sh@45 -- # key=DHHC-1:01:NGI5NjEzNmI4MjdlZDRmOGQxM2Y0YWNlNWUzMjEwMDGnQTeK:
00:29:34.134 05:25:11 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:29:34.134 05:25:11 -- host/auth.sh@48 -- # echo ffdhe3072
00:29:34.134 05:25:11 -- host/auth.sh@49 -- # echo DHHC-1:01:NGI5NjEzNmI4MjdlZDRmOGQxM2Y0YWNlNWUzMjEwMDGnQTeK:
00:29:34.134 05:25:11 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2
00:29:34.134 05:25:11 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:29:34.134 05:25:11 -- host/auth.sh@68 -- # digest=sha384
00:29:34.134 05:25:11 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:29:34.134 05:25:11 -- host/auth.sh@68 -- # keyid=2
00:29:34.134 05:25:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:29:34.134 05:25:11 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:34.134 05:25:11 -- common/autotest_common.sh@10 -- # set +x
00:29:34.134 05:25:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:34.134 05:25:11 -- host/auth.sh@70 -- # get_main_ns_ip
00:29:34.134 05:25:11 -- nvmf/common.sh@717 -- # local ip
00:29:34.134 05:25:11 -- nvmf/common.sh@718 -- # ip_candidates=()
00:29:34.134 05:25:11 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:29:34.134 05:25:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:34.134 05:25:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:34.134 05:25:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:29:34.134 05:25:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:34.134 05:25:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:29:34.134 05:25:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:29:34.134 05:25:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:29:34.134 05:25:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:29:34.134 05:25:11 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:34.134 05:25:11 -- common/autotest_common.sh@10 -- # set +x
00:29:34.392 nvme0n1
05:25:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:34.393 05:25:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:29:34.393 05:25:11 -- host/auth.sh@73 -- # jq -r '.[].name'
00:29:34.393 05:25:11 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:34.393 05:25:11 -- common/autotest_common.sh@10 -- # set +x
00:29:34.393 05:25:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:34.393 05:25:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:34.393 05:25:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:34.393 05:25:11 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:34.393 05:25:11 -- common/autotest_common.sh@10 -- # set +x
00:29:34.393 05:25:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:34.393 05:25:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:29:34.393 05:25:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3
00:29:34.393 05:25:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:29:34.393 05:25:11 -- host/auth.sh@44 -- # digest=sha384
00:29:34.393 05:25:11 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:29:34.393 05:25:11 -- host/auth.sh@44 -- # keyid=3
00:29:34.393 05:25:11 -- host/auth.sh@45 -- # key=DHHC-1:02:NDM0ZGIwODE1NzRmNDZiYjBkNjgwYmMzMzY3MWMwZTk2M2YzYzAzOTM3Y2NmOTVmKgVEjg==:
00:29:34.393 05:25:11 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:29:34.393 05:25:11 -- host/auth.sh@48 -- # echo ffdhe3072
00:29:34.393 05:25:11 -- host/auth.sh@49 -- # echo DHHC-1:02:NDM0ZGIwODE1NzRmNDZiYjBkNjgwYmMzMzY3MWMwZTk2M2YzYzAzOTM3Y2NmOTVmKgVEjg==:
00:29:34.393 05:25:11 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3
00:29:34.393 05:25:11 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:29:34.393 05:25:11 -- host/auth.sh@68 -- # digest=sha384
00:29:34.393 05:25:11 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:29:34.393 05:25:11 -- host/auth.sh@68 -- # keyid=3
00:29:34.393 05:25:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:29:34.393 05:25:11 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:34.393 05:25:11 -- common/autotest_common.sh@10 -- # set +x
00:29:34.393 05:25:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:34.393 05:25:11 -- host/auth.sh@70 -- # get_main_ns_ip
00:29:34.393 05:25:11 -- nvmf/common.sh@717 -- # local ip
00:29:34.393 05:25:11 -- nvmf/common.sh@718 -- # ip_candidates=()
00:29:34.393 05:25:11 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:29:34.393 05:25:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:34.393 05:25:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:34.393 05:25:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:29:34.393 05:25:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:34.393 05:25:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:29:34.393 05:25:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:29:34.393 05:25:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:29:34.393 05:25:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
00:29:34.393 05:25:11 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:34.393 05:25:11 -- common/autotest_common.sh@10 -- # set +x
00:29:34.652 nvme0n1
00:29:34.652 05:25:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:34.652 05:25:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:29:34.652 05:25:11 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:34.652 05:25:11 -- common/autotest_common.sh@10 -- # set +x
00:29:34.652 05:25:11 -- host/auth.sh@73 -- # jq -r '.[].name'
00:29:34.652 05:25:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:34.652 05:25:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:34.652 05:25:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:34.652 05:25:11 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:34.652 05:25:11 -- common/autotest_common.sh@10 -- # set +x
00:29:34.652 05:25:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:34.652 05:25:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:29:34.652 05:25:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4
00:29:34.652 05:25:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:29:34.652 05:25:11 -- host/auth.sh@44 -- # digest=sha384
00:29:34.652 05:25:11 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:29:34.652 05:25:11 -- host/auth.sh@44 -- # keyid=4
00:29:34.652 05:25:11 -- host/auth.sh@45 -- # key=DHHC-1:03:MDQxZDliNDE3NDAyNTQxOWQ0MDUzYzc3MDg3Y2I1MzE2NjE1ZDI2MjU3ZmQzNzgzM2VmNDBlYzRiNmQ2ZTIwZfSZwh0=:
00:29:34.652 05:25:11 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:29:34.652 05:25:11 -- host/auth.sh@48 -- # echo ffdhe3072
00:29:34.652 05:25:11 -- host/auth.sh@49 -- # echo DHHC-1:03:MDQxZDliNDE3NDAyNTQxOWQ0MDUzYzc3MDg3Y2I1MzE2NjE1ZDI2MjU3ZmQzNzgzM2VmNDBlYzRiNmQ2ZTIwZfSZwh0=:
00:29:34.652 05:25:11 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4
00:29:34.652 05:25:11 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:29:34.652 05:25:11 -- host/auth.sh@68 -- # digest=sha384
00:29:34.652 05:25:11 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:29:34.652 05:25:11 -- host/auth.sh@68 -- # keyid=4
00:29:34.652 05:25:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:29:34.652 05:25:11 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:34.652 05:25:11 -- common/autotest_common.sh@10 -- # set +x
00:29:34.652 05:25:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:34.652 05:25:11 -- host/auth.sh@70 -- # get_main_ns_ip
00:29:34.652 05:25:11 -- nvmf/common.sh@717 -- # local ip
00:29:34.652 05:25:11 -- nvmf/common.sh@718 -- # ip_candidates=()
00:29:34.652 05:25:11 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:29:34.652 05:25:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:34.652 05:25:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:34.652 05:25:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:29:34.653 05:25:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:34.653 05:25:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:29:34.653 05:25:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:29:34.653 05:25:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:29:34.653 05:25:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:29:34.653 05:25:11 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:34.653 05:25:11 -- common/autotest_common.sh@10 -- # set +x
00:29:34.913 nvme0n1
00:29:34.913 05:25:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:34.913 05:25:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:29:34.913 05:25:11 -- host/auth.sh@73 -- # jq -r '.[].name'
00:29:34.913 05:25:11 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:34.913 05:25:11 -- common/autotest_common.sh@10 -- # set +x
00:29:34.913 05:25:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:34.913 05:25:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:34.913 05:25:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:34.913 05:25:11 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:34.913 05:25:11 -- common/autotest_common.sh@10 -- # set +x
00:29:34.913 05:25:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:34.913 05:25:11 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}"
00:29:34.913 05:25:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:29:34.913 05:25:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0
00:29:34.913 05:25:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:29:34.913 05:25:11 -- host/auth.sh@44 -- # digest=sha384
00:29:34.913 05:25:11 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:29:34.913 05:25:11 -- host/auth.sh@44 -- # keyid=0
00:29:34.913 05:25:11 -- host/auth.sh@45 -- # key=DHHC-1:00:NzBhMjc1MWUxZGM0NTNiOWU0OWVlMDFkMjgzZmU2M2JE6qme:
00:29:34.913 05:25:11 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:29:34.913 05:25:11 -- host/auth.sh@48 -- # echo ffdhe4096
00:29:34.913 05:25:11 -- host/auth.sh@49 -- # echo DHHC-1:00:NzBhMjc1MWUxZGM0NTNiOWU0OWVlMDFkMjgzZmU2M2JE6qme:
00:29:34.913 05:25:11 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0
00:29:34.913 05:25:11 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:29:34.913 05:25:11 -- host/auth.sh@68 -- # digest=sha384
00:29:34.914 05:25:11 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:29:34.914 05:25:11 -- host/auth.sh@68 -- # keyid=0
00:29:34.914 05:25:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:29:34.914 05:25:11 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:34.914 05:25:11 -- common/autotest_common.sh@10 -- # set +x
00:29:34.914 05:25:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:34.914 05:25:12 -- host/auth.sh@70 -- # get_main_ns_ip
00:29:34.914 05:25:12 -- nvmf/common.sh@717 -- # local ip
00:29:34.914 05:25:12 -- nvmf/common.sh@718 -- # ip_candidates=()
00:29:34.914 05:25:12 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:29:34.914 05:25:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:34.914 05:25:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:34.914 05:25:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:29:34.914 05:25:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:34.914 05:25:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:29:34.914 05:25:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:29:34.914 05:25:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:29:34.914 05:25:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
00:29:34.914 05:25:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:34.914 05:25:12 -- common/autotest_common.sh@10 -- # set +x
00:29:35.175 nvme0n1
00:29:35.175 05:25:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:35.175 05:25:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:29:35.175 05:25:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:35.175 05:25:12 -- common/autotest_common.sh@10 -- # set +x
00:29:35.175 05:25:12 -- host/auth.sh@73 -- # jq -r '.[].name'
00:29:35.175 05:25:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:35.175 05:25:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:35.175 05:25:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:35.175 05:25:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:35.175 05:25:12 -- common/autotest_common.sh@10 -- # set +x
00:29:35.175 05:25:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:35.175 05:25:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:29:35.175 05:25:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1
00:29:35.175 05:25:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:29:35.175 05:25:12 -- host/auth.sh@44 -- # digest=sha384
00:29:35.175 05:25:12 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:29:35.175 05:25:12 -- host/auth.sh@44 -- # keyid=1
00:29:35.175 05:25:12 -- host/auth.sh@45 -- # key=DHHC-1:00:M2M3MjM0NGVhMDE4ZDJhNWEwNTJmZTA4NGYwMmE3YWEyOWJhMjg1ZWU0YTI0ZWMzw/BoVA==:
00:29:35.175 05:25:12 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:29:35.175 05:25:12 -- host/auth.sh@48 -- # echo ffdhe4096
00:29:35.175 05:25:12 -- host/auth.sh@49 -- # echo DHHC-1:00:M2M3MjM0NGVhMDE4ZDJhNWEwNTJmZTA4NGYwMmE3YWEyOWJhMjg1ZWU0YTI0ZWMzw/BoVA==:
00:29:35.175 05:25:12 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1
00:29:35.175 05:25:12 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:29:35.175 05:25:12 -- host/auth.sh@68 -- # digest=sha384
00:29:35.175 05:25:12 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:29:35.175 05:25:12 -- host/auth.sh@68 -- # keyid=1
00:29:35.175 05:25:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:29:35.175 05:25:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:35.175 05:25:12 -- common/autotest_common.sh@10 -- # set +x
00:29:35.175 05:25:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:35.175 05:25:12 -- host/auth.sh@70 -- # get_main_ns_ip
00:29:35.175 05:25:12 -- nvmf/common.sh@717 -- # local ip
00:29:35.175 05:25:12 -- nvmf/common.sh@718 -- # ip_candidates=()
00:29:35.175 05:25:12 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:29:35.175 05:25:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:35.175 05:25:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:35.175 05:25:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:29:35.175 05:25:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:35.175 05:25:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:29:35.175 05:25:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:29:35.175 05:25:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:29:35.175 05:25:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
00:29:35.175 05:25:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:35.175 05:25:12 -- common/autotest_common.sh@10 -- # set +x
00:29:35.434 nvme0n1
00:29:35.434 05:25:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:35.434 05:25:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:29:35.434 05:25:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:35.434 05:25:12 -- host/auth.sh@73 -- # jq -r '.[].name'
00:29:35.434 05:25:12 -- common/autotest_common.sh@10 -- # set +x
00:29:35.434 05:25:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:35.434 05:25:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:35.434 05:25:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:35.434 05:25:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:35.434 05:25:12 -- common/autotest_common.sh@10 -- # set +x
00:29:35.693 05:25:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:35.693 05:25:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:29:35.693 05:25:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2
00:29:35.693 05:25:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:29:35.693 05:25:12 -- host/auth.sh@44 -- # digest=sha384
00:29:35.693 05:25:12 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:29:35.693 05:25:12 -- host/auth.sh@44 -- # keyid=2
00:29:35.693 05:25:12 -- host/auth.sh@45 -- # key=DHHC-1:01:NGI5NjEzNmI4MjdlZDRmOGQxM2Y0YWNlNWUzMjEwMDGnQTeK:
00:29:35.693 05:25:12 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:29:35.693 05:25:12 -- host/auth.sh@48 -- # echo ffdhe4096
00:29:35.693 05:25:12 -- host/auth.sh@49 -- # echo DHHC-1:01:NGI5NjEzNmI4MjdlZDRmOGQxM2Y0YWNlNWUzMjEwMDGnQTeK:
00:29:35.693 05:25:12 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2
00:29:35.693 05:25:12 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:29:35.693 05:25:12 -- host/auth.sh@68 -- # digest=sha384
00:29:35.693 05:25:12 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:29:35.693 05:25:12 -- host/auth.sh@68 -- # keyid=2
00:29:35.693 05:25:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:29:35.693 05:25:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:35.693 05:25:12 -- common/autotest_common.sh@10 -- # set +x
00:29:35.693 05:25:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:35.693 05:25:12 -- host/auth.sh@70 -- # get_main_ns_ip
00:29:35.693 05:25:12 -- nvmf/common.sh@717 -- # local ip
00:29:35.693 05:25:12 -- nvmf/common.sh@718 -- # ip_candidates=()
00:29:35.693 05:25:12 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:29:35.693 05:25:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:35.693 05:25:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:35.693 05:25:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:29:35.693 05:25:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:35.693 05:25:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:29:35.693 05:25:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:29:35.693 05:25:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:29:35.693 05:25:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:29:35.694 05:25:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:35.694 05:25:12 -- common/autotest_common.sh@10 -- # set +x
00:29:35.954 nvme0n1
00:29:35.954 05:25:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:35.954 05:25:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:29:35.954 05:25:12 -- host/auth.sh@73 -- # jq -r '.[].name'
00:29:35.954 05:25:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:35.954 05:25:12 -- common/autotest_common.sh@10 -- # set +x
00:29:35.954 05:25:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:35.954 05:25:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:35.954 05:25:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:35.954 05:25:13 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:35.954 05:25:13 -- common/autotest_common.sh@10 -- # set +x
00:29:35.954 05:25:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:35.954 05:25:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:29:35.954 05:25:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3
00:29:35.954 05:25:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:29:35.954 05:25:13 -- host/auth.sh@44 -- # digest=sha384
00:29:35.954 05:25:13 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:29:35.954 05:25:13 -- host/auth.sh@44 -- # keyid=3
00:29:35.954 05:25:13 -- host/auth.sh@45 -- # key=DHHC-1:02:NDM0ZGIwODE1NzRmNDZiYjBkNjgwYmMzMzY3MWMwZTk2M2YzYzAzOTM3Y2NmOTVmKgVEjg==:
00:29:35.954 05:25:13 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:29:35.954 05:25:13 -- host/auth.sh@48 -- # echo ffdhe4096
00:29:35.954 05:25:13 -- host/auth.sh@49 -- # echo DHHC-1:02:NDM0ZGIwODE1NzRmNDZiYjBkNjgwYmMzMzY3MWMwZTk2M2YzYzAzOTM3Y2NmOTVmKgVEjg==:
00:29:35.954 05:25:13 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3
00:29:35.954 05:25:13 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:29:35.954 05:25:13 -- host/auth.sh@68 -- # digest=sha384
00:29:35.954 05:25:13 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:29:35.954 05:25:13 -- host/auth.sh@68 -- # keyid=3
00:29:35.954 05:25:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:29:35.954 05:25:13 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:35.954 05:25:13 -- common/autotest_common.sh@10 -- # set +x
00:29:35.954 05:25:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:35.954 05:25:13 -- host/auth.sh@70 -- # get_main_ns_ip
00:29:35.954 05:25:13 -- nvmf/common.sh@717 -- # local ip
00:29:35.954 05:25:13 -- nvmf/common.sh@718 -- # ip_candidates=()
00:29:35.954 05:25:13 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:29:35.954 05:25:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:35.954 05:25:13 -- nvmf/common.sh@721 --
# ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:35.954 05:25:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:35.954 05:25:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:35.954 05:25:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:35.954 05:25:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:35.954 05:25:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:35.954 05:25:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:29:35.954 05:25:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:35.954 05:25:13 -- common/autotest_common.sh@10 -- # set +x 00:29:36.214 nvme0n1 00:29:36.214 05:25:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:36.214 05:25:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:36.214 05:25:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:36.214 05:25:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:36.214 05:25:13 -- common/autotest_common.sh@10 -- # set +x 00:29:36.214 05:25:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:36.214 05:25:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:36.214 05:25:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:36.214 05:25:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:36.214 05:25:13 -- common/autotest_common.sh@10 -- # set +x 00:29:36.214 05:25:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:36.214 05:25:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:36.214 05:25:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:29:36.214 05:25:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:36.214 05:25:13 -- host/auth.sh@44 -- # digest=sha384 00:29:36.214 05:25:13 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:36.214 05:25:13 -- host/auth.sh@44 -- # keyid=4 00:29:36.214 05:25:13 -- 
host/auth.sh@45 -- # key=DHHC-1:03:MDQxZDliNDE3NDAyNTQxOWQ0MDUzYzc3MDg3Y2I1MzE2NjE1ZDI2MjU3ZmQzNzgzM2VmNDBlYzRiNmQ2ZTIwZfSZwh0=: 00:29:36.214 05:25:13 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:29:36.214 05:25:13 -- host/auth.sh@48 -- # echo ffdhe4096 00:29:36.214 05:25:13 -- host/auth.sh@49 -- # echo DHHC-1:03:MDQxZDliNDE3NDAyNTQxOWQ0MDUzYzc3MDg3Y2I1MzE2NjE1ZDI2MjU3ZmQzNzgzM2VmNDBlYzRiNmQ2ZTIwZfSZwh0=: 00:29:36.214 05:25:13 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:29:36.214 05:25:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:36.214 05:25:13 -- host/auth.sh@68 -- # digest=sha384 00:29:36.214 05:25:13 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:29:36.214 05:25:13 -- host/auth.sh@68 -- # keyid=4 00:29:36.214 05:25:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:36.214 05:25:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:36.214 05:25:13 -- common/autotest_common.sh@10 -- # set +x 00:29:36.214 05:25:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:36.214 05:25:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:36.214 05:25:13 -- nvmf/common.sh@717 -- # local ip 00:29:36.214 05:25:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:36.214 05:25:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:36.214 05:25:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:36.214 05:25:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:36.214 05:25:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:36.214 05:25:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:36.214 05:25:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:36.214 05:25:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:36.214 05:25:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:36.214 05:25:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:36.214 05:25:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:36.214 05:25:13 -- common/autotest_common.sh@10 -- # set +x 00:29:36.472 nvme0n1 00:29:36.472 05:25:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:36.472 05:25:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:36.472 05:25:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:36.472 05:25:13 -- common/autotest_common.sh@10 -- # set +x 00:29:36.472 05:25:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:36.472 05:25:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:36.472 05:25:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:36.472 05:25:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:36.472 05:25:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:36.472 05:25:13 -- common/autotest_common.sh@10 -- # set +x 00:29:36.472 05:25:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:36.472 05:25:13 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:29:36.472 05:25:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:36.472 05:25:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:29:36.472 05:25:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:36.472 05:25:13 -- host/auth.sh@44 -- # digest=sha384 00:29:36.472 05:25:13 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:36.472 05:25:13 -- host/auth.sh@44 -- # keyid=0 00:29:36.472 05:25:13 -- host/auth.sh@45 -- # key=DHHC-1:00:NzBhMjc1MWUxZGM0NTNiOWU0OWVlMDFkMjgzZmU2M2JE6qme: 00:29:36.472 05:25:13 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:29:36.472 05:25:13 -- host/auth.sh@48 -- # echo ffdhe6144 00:29:36.472 05:25:13 -- host/auth.sh@49 -- # echo DHHC-1:00:NzBhMjc1MWUxZGM0NTNiOWU0OWVlMDFkMjgzZmU2M2JE6qme: 00:29:36.472 05:25:13 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:29:36.472 05:25:13 -- 
host/auth.sh@66 -- # local digest dhgroup keyid 00:29:36.472 05:25:13 -- host/auth.sh@68 -- # digest=sha384 00:29:36.731 05:25:13 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:29:36.731 05:25:13 -- host/auth.sh@68 -- # keyid=0 00:29:36.731 05:25:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:36.731 05:25:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:36.731 05:25:13 -- common/autotest_common.sh@10 -- # set +x 00:29:36.731 05:25:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:36.731 05:25:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:36.731 05:25:13 -- nvmf/common.sh@717 -- # local ip 00:29:36.731 05:25:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:36.731 05:25:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:36.731 05:25:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:36.731 05:25:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:36.731 05:25:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:36.731 05:25:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:36.731 05:25:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:36.731 05:25:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:36.731 05:25:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:36.731 05:25:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:29:36.731 05:25:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:36.731 05:25:13 -- common/autotest_common.sh@10 -- # set +x 00:29:37.298 nvme0n1 00:29:37.299 05:25:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:37.299 05:25:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:37.299 05:25:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:37.299 05:25:14 -- common/autotest_common.sh@10 -- # set 
+x 00:29:37.299 05:25:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:37.299 05:25:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:37.299 05:25:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:37.299 05:25:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:37.299 05:25:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:37.299 05:25:14 -- common/autotest_common.sh@10 -- # set +x 00:29:37.299 05:25:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:37.299 05:25:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:37.299 05:25:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:29:37.299 05:25:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:37.299 05:25:14 -- host/auth.sh@44 -- # digest=sha384 00:29:37.299 05:25:14 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:37.299 05:25:14 -- host/auth.sh@44 -- # keyid=1 00:29:37.299 05:25:14 -- host/auth.sh@45 -- # key=DHHC-1:00:M2M3MjM0NGVhMDE4ZDJhNWEwNTJmZTA4NGYwMmE3YWEyOWJhMjg1ZWU0YTI0ZWMzw/BoVA==: 00:29:37.299 05:25:14 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:29:37.299 05:25:14 -- host/auth.sh@48 -- # echo ffdhe6144 00:29:37.299 05:25:14 -- host/auth.sh@49 -- # echo DHHC-1:00:M2M3MjM0NGVhMDE4ZDJhNWEwNTJmZTA4NGYwMmE3YWEyOWJhMjg1ZWU0YTI0ZWMzw/BoVA==: 00:29:37.299 05:25:14 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:29:37.299 05:25:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:37.299 05:25:14 -- host/auth.sh@68 -- # digest=sha384 00:29:37.299 05:25:14 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:29:37.299 05:25:14 -- host/auth.sh@68 -- # keyid=1 00:29:37.299 05:25:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:37.299 05:25:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:37.299 05:25:14 -- common/autotest_common.sh@10 -- # set +x 00:29:37.299 05:25:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 
]] 00:29:37.299 05:25:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:37.299 05:25:14 -- nvmf/common.sh@717 -- # local ip 00:29:37.299 05:25:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:37.299 05:25:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:37.299 05:25:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:37.299 05:25:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:37.299 05:25:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:37.299 05:25:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:37.299 05:25:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:37.299 05:25:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:37.299 05:25:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:37.299 05:25:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:29:37.299 05:25:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:37.299 05:25:14 -- common/autotest_common.sh@10 -- # set +x 00:29:37.866 nvme0n1 00:29:37.866 05:25:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:37.866 05:25:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:37.866 05:25:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:37.866 05:25:14 -- common/autotest_common.sh@10 -- # set +x 00:29:37.866 05:25:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:37.866 05:25:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:37.866 05:25:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:37.866 05:25:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:37.866 05:25:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:37.866 05:25:14 -- common/autotest_common.sh@10 -- # set +x 00:29:37.866 05:25:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:37.866 05:25:14 -- host/auth.sh@109 -- # for 
keyid in "${!keys[@]}" 00:29:37.866 05:25:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:29:37.866 05:25:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:37.866 05:25:14 -- host/auth.sh@44 -- # digest=sha384 00:29:37.866 05:25:14 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:37.866 05:25:14 -- host/auth.sh@44 -- # keyid=2 00:29:37.866 05:25:14 -- host/auth.sh@45 -- # key=DHHC-1:01:NGI5NjEzNmI4MjdlZDRmOGQxM2Y0YWNlNWUzMjEwMDGnQTeK: 00:29:37.866 05:25:14 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:29:37.866 05:25:14 -- host/auth.sh@48 -- # echo ffdhe6144 00:29:37.866 05:25:14 -- host/auth.sh@49 -- # echo DHHC-1:01:NGI5NjEzNmI4MjdlZDRmOGQxM2Y0YWNlNWUzMjEwMDGnQTeK: 00:29:37.866 05:25:14 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:29:37.866 05:25:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:37.866 05:25:14 -- host/auth.sh@68 -- # digest=sha384 00:29:37.866 05:25:14 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:29:37.866 05:25:14 -- host/auth.sh@68 -- # keyid=2 00:29:37.866 05:25:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:37.866 05:25:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:37.866 05:25:14 -- common/autotest_common.sh@10 -- # set +x 00:29:37.866 05:25:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:37.866 05:25:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:37.866 05:25:14 -- nvmf/common.sh@717 -- # local ip 00:29:37.866 05:25:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:37.866 05:25:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:37.866 05:25:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:37.866 05:25:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:37.866 05:25:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:37.866 05:25:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:37.866 05:25:14 -- 
nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:37.866 05:25:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:37.866 05:25:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:37.866 05:25:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:37.866 05:25:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:37.866 05:25:14 -- common/autotest_common.sh@10 -- # set +x 00:29:38.433 nvme0n1 00:29:38.433 05:25:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:38.433 05:25:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:38.433 05:25:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:38.433 05:25:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:38.433 05:25:15 -- common/autotest_common.sh@10 -- # set +x 00:29:38.433 05:25:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:38.433 05:25:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:38.433 05:25:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:38.433 05:25:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:38.433 05:25:15 -- common/autotest_common.sh@10 -- # set +x 00:29:38.433 05:25:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:38.433 05:25:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:38.433 05:25:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:29:38.433 05:25:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:38.433 05:25:15 -- host/auth.sh@44 -- # digest=sha384 00:29:38.433 05:25:15 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:38.433 05:25:15 -- host/auth.sh@44 -- # keyid=3 00:29:38.433 05:25:15 -- host/auth.sh@45 -- # key=DHHC-1:02:NDM0ZGIwODE1NzRmNDZiYjBkNjgwYmMzMzY3MWMwZTk2M2YzYzAzOTM3Y2NmOTVmKgVEjg==: 00:29:38.433 05:25:15 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:29:38.433 05:25:15 -- host/auth.sh@48 
-- # echo ffdhe6144 00:29:38.433 05:25:15 -- host/auth.sh@49 -- # echo DHHC-1:02:NDM0ZGIwODE1NzRmNDZiYjBkNjgwYmMzMzY3MWMwZTk2M2YzYzAzOTM3Y2NmOTVmKgVEjg==: 00:29:38.433 05:25:15 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:29:38.433 05:25:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:38.433 05:25:15 -- host/auth.sh@68 -- # digest=sha384 00:29:38.433 05:25:15 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:29:38.433 05:25:15 -- host/auth.sh@68 -- # keyid=3 00:29:38.433 05:25:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:38.433 05:25:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:38.433 05:25:15 -- common/autotest_common.sh@10 -- # set +x 00:29:38.433 05:25:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:38.433 05:25:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:38.433 05:25:15 -- nvmf/common.sh@717 -- # local ip 00:29:38.433 05:25:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:38.433 05:25:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:38.433 05:25:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:38.433 05:25:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:38.433 05:25:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:38.433 05:25:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:38.433 05:25:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:38.433 05:25:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:38.433 05:25:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:38.433 05:25:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:29:38.433 05:25:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:38.433 05:25:15 -- common/autotest_common.sh@10 -- # set +x 00:29:39.001 nvme0n1 00:29:39.002 05:25:16 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:39.002 05:25:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:39.002 05:25:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:39.002 05:25:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:39.002 05:25:16 -- common/autotest_common.sh@10 -- # set +x 00:29:39.002 05:25:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:39.002 05:25:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:39.002 05:25:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:39.002 05:25:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:39.002 05:25:16 -- common/autotest_common.sh@10 -- # set +x 00:29:39.002 05:25:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:39.002 05:25:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:39.002 05:25:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:29:39.002 05:25:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:39.002 05:25:16 -- host/auth.sh@44 -- # digest=sha384 00:29:39.002 05:25:16 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:39.002 05:25:16 -- host/auth.sh@44 -- # keyid=4 00:29:39.002 05:25:16 -- host/auth.sh@45 -- # key=DHHC-1:03:MDQxZDliNDE3NDAyNTQxOWQ0MDUzYzc3MDg3Y2I1MzE2NjE1ZDI2MjU3ZmQzNzgzM2VmNDBlYzRiNmQ2ZTIwZfSZwh0=: 00:29:39.002 05:25:16 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:29:39.002 05:25:16 -- host/auth.sh@48 -- # echo ffdhe6144 00:29:39.002 05:25:16 -- host/auth.sh@49 -- # echo DHHC-1:03:MDQxZDliNDE3NDAyNTQxOWQ0MDUzYzc3MDg3Y2I1MzE2NjE1ZDI2MjU3ZmQzNzgzM2VmNDBlYzRiNmQ2ZTIwZfSZwh0=: 00:29:39.002 05:25:16 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:29:39.002 05:25:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:39.002 05:25:16 -- host/auth.sh@68 -- # digest=sha384 00:29:39.002 05:25:16 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:29:39.002 05:25:16 -- host/auth.sh@68 -- # keyid=4 00:29:39.002 05:25:16 -- 
host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:39.002 05:25:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:39.002 05:25:16 -- common/autotest_common.sh@10 -- # set +x 00:29:39.002 05:25:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:39.002 05:25:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:39.002 05:25:16 -- nvmf/common.sh@717 -- # local ip 00:29:39.002 05:25:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:39.002 05:25:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:39.002 05:25:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:39.002 05:25:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:39.002 05:25:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:39.002 05:25:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:39.002 05:25:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:39.002 05:25:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:39.002 05:25:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:39.002 05:25:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:39.002 05:25:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:39.002 05:25:16 -- common/autotest_common.sh@10 -- # set +x 00:29:39.568 nvme0n1 00:29:39.568 05:25:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:39.568 05:25:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:39.568 05:25:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:39.568 05:25:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:39.568 05:25:16 -- common/autotest_common.sh@10 -- # set +x 00:29:39.568 05:25:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:39.568 05:25:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:39.568 05:25:16 -- host/auth.sh@74 
-- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:39.568 05:25:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:39.568 05:25:16 -- common/autotest_common.sh@10 -- # set +x 00:29:39.568 05:25:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:39.568 05:25:16 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:29:39.568 05:25:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:39.568 05:25:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:29:39.568 05:25:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:39.568 05:25:16 -- host/auth.sh@44 -- # digest=sha384 00:29:39.568 05:25:16 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:39.568 05:25:16 -- host/auth.sh@44 -- # keyid=0 00:29:39.568 05:25:16 -- host/auth.sh@45 -- # key=DHHC-1:00:NzBhMjc1MWUxZGM0NTNiOWU0OWVlMDFkMjgzZmU2M2JE6qme: 00:29:39.568 05:25:16 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:29:39.568 05:25:16 -- host/auth.sh@48 -- # echo ffdhe8192 00:29:39.568 05:25:16 -- host/auth.sh@49 -- # echo DHHC-1:00:NzBhMjc1MWUxZGM0NTNiOWU0OWVlMDFkMjgzZmU2M2JE6qme: 00:29:39.568 05:25:16 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:29:39.568 05:25:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:39.568 05:25:16 -- host/auth.sh@68 -- # digest=sha384 00:29:39.568 05:25:16 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:29:39.568 05:25:16 -- host/auth.sh@68 -- # keyid=0 00:29:39.568 05:25:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:39.568 05:25:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:39.568 05:25:16 -- common/autotest_common.sh@10 -- # set +x 00:29:39.568 05:25:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:39.568 05:25:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:39.568 05:25:16 -- nvmf/common.sh@717 -- # local ip 00:29:39.568 05:25:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:39.568 05:25:16 -- 
nvmf/common.sh@718 -- # local -A ip_candidates 00:29:39.568 05:25:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:39.568 05:25:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:39.568 05:25:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:39.568 05:25:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:39.568 05:25:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:39.568 05:25:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:39.568 05:25:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:39.568 05:25:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:29:39.568 05:25:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:39.568 05:25:16 -- common/autotest_common.sh@10 -- # set +x 00:29:40.947 nvme0n1 00:29:40.948 05:25:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:40.948 05:25:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:40.948 05:25:17 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:40.948 05:25:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:40.948 05:25:17 -- common/autotest_common.sh@10 -- # set +x 00:29:40.948 05:25:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:40.948 05:25:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:40.948 05:25:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:40.948 05:25:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:40.948 05:25:17 -- common/autotest_common.sh@10 -- # set +x 00:29:40.948 05:25:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:40.948 05:25:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:40.948 05:25:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:29:40.948 05:25:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:40.948 05:25:17 -- 
host/auth.sh@44 -- # digest=sha384 00:29:40.948 05:25:17 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:40.948 05:25:17 -- host/auth.sh@44 -- # keyid=1 00:29:40.948 05:25:17 -- host/auth.sh@45 -- # key=DHHC-1:00:M2M3MjM0NGVhMDE4ZDJhNWEwNTJmZTA4NGYwMmE3YWEyOWJhMjg1ZWU0YTI0ZWMzw/BoVA==: 00:29:40.948 05:25:17 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:29:40.948 05:25:17 -- host/auth.sh@48 -- # echo ffdhe8192 00:29:40.948 05:25:17 -- host/auth.sh@49 -- # echo DHHC-1:00:M2M3MjM0NGVhMDE4ZDJhNWEwNTJmZTA4NGYwMmE3YWEyOWJhMjg1ZWU0YTI0ZWMzw/BoVA==: 00:29:40.948 05:25:17 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:29:40.948 05:25:17 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:40.948 05:25:17 -- host/auth.sh@68 -- # digest=sha384 00:29:40.948 05:25:17 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:29:40.948 05:25:17 -- host/auth.sh@68 -- # keyid=1 00:29:40.948 05:25:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:40.948 05:25:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:40.948 05:25:17 -- common/autotest_common.sh@10 -- # set +x 00:29:40.948 05:25:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:40.948 05:25:17 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:40.948 05:25:17 -- nvmf/common.sh@717 -- # local ip 00:29:40.948 05:25:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:40.948 05:25:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:40.948 05:25:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:40.948 05:25:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:40.948 05:25:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:40.948 05:25:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:40.948 05:25:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:40.948 05:25:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:40.948 05:25:17 -- nvmf/common.sh@731 -- # echo 
10.0.0.1 00:29:40.948 05:25:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:29:40.948 05:25:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:40.948 05:25:17 -- common/autotest_common.sh@10 -- # set +x 00:29:41.516 nvme0n1 00:29:41.516 05:25:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:41.516 05:25:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:41.516 05:25:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:41.516 05:25:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:41.516 05:25:18 -- common/autotest_common.sh@10 -- # set +x 00:29:41.516 05:25:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:41.776 05:25:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:41.776 05:25:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:41.776 05:25:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:41.776 05:25:18 -- common/autotest_common.sh@10 -- # set +x 00:29:41.776 05:25:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:41.776 05:25:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:41.776 05:25:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:29:41.776 05:25:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:41.776 05:25:18 -- host/auth.sh@44 -- # digest=sha384 00:29:41.776 05:25:18 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:41.777 05:25:18 -- host/auth.sh@44 -- # keyid=2 00:29:41.777 05:25:18 -- host/auth.sh@45 -- # key=DHHC-1:01:NGI5NjEzNmI4MjdlZDRmOGQxM2Y0YWNlNWUzMjEwMDGnQTeK: 00:29:41.777 05:25:18 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:29:41.777 05:25:18 -- host/auth.sh@48 -- # echo ffdhe8192 00:29:41.777 05:25:18 -- host/auth.sh@49 -- # echo DHHC-1:01:NGI5NjEzNmI4MjdlZDRmOGQxM2Y0YWNlNWUzMjEwMDGnQTeK: 00:29:41.777 05:25:18 -- host/auth.sh@111 -- # 
connect_authenticate sha384 ffdhe8192 2 00:29:41.777 05:25:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:41.777 05:25:18 -- host/auth.sh@68 -- # digest=sha384 00:29:41.777 05:25:18 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:29:41.777 05:25:18 -- host/auth.sh@68 -- # keyid=2 00:29:41.777 05:25:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:41.777 05:25:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:41.777 05:25:18 -- common/autotest_common.sh@10 -- # set +x 00:29:41.777 05:25:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:41.777 05:25:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:41.777 05:25:18 -- nvmf/common.sh@717 -- # local ip 00:29:41.777 05:25:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:41.777 05:25:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:41.777 05:25:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:41.777 05:25:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:41.777 05:25:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:41.777 05:25:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:41.777 05:25:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:41.777 05:25:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:41.777 05:25:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:41.777 05:25:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:41.777 05:25:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:41.777 05:25:18 -- common/autotest_common.sh@10 -- # set +x 00:29:42.715 nvme0n1 00:29:42.715 05:25:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:42.715 05:25:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:42.715 05:25:19 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:29:42.715 05:25:19 -- common/autotest_common.sh@10 -- # set +x 00:29:42.715 05:25:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:42.715 05:25:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:42.715 05:25:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:42.715 05:25:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:42.715 05:25:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:42.715 05:25:19 -- common/autotest_common.sh@10 -- # set +x 00:29:42.715 05:25:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:42.715 05:25:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:42.715 05:25:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:29:42.715 05:25:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:42.715 05:25:19 -- host/auth.sh@44 -- # digest=sha384 00:29:42.715 05:25:19 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:42.715 05:25:19 -- host/auth.sh@44 -- # keyid=3 00:29:42.715 05:25:19 -- host/auth.sh@45 -- # key=DHHC-1:02:NDM0ZGIwODE1NzRmNDZiYjBkNjgwYmMzMzY3MWMwZTk2M2YzYzAzOTM3Y2NmOTVmKgVEjg==: 00:29:42.715 05:25:19 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:29:42.715 05:25:19 -- host/auth.sh@48 -- # echo ffdhe8192 00:29:42.715 05:25:19 -- host/auth.sh@49 -- # echo DHHC-1:02:NDM0ZGIwODE1NzRmNDZiYjBkNjgwYmMzMzY3MWMwZTk2M2YzYzAzOTM3Y2NmOTVmKgVEjg==: 00:29:42.715 05:25:19 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:29:42.715 05:25:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:42.715 05:25:19 -- host/auth.sh@68 -- # digest=sha384 00:29:42.715 05:25:19 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:29:42.715 05:25:19 -- host/auth.sh@68 -- # keyid=3 00:29:42.715 05:25:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:42.715 05:25:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:42.715 05:25:19 -- common/autotest_common.sh@10 -- 
# set +x 00:29:42.715 05:25:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:42.715 05:25:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:42.715 05:25:19 -- nvmf/common.sh@717 -- # local ip 00:29:42.715 05:25:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:42.715 05:25:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:42.715 05:25:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:42.715 05:25:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:42.715 05:25:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:42.715 05:25:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:42.716 05:25:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:42.716 05:25:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:42.716 05:25:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:42.716 05:25:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:29:42.716 05:25:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:42.716 05:25:19 -- common/autotest_common.sh@10 -- # set +x 00:29:43.653 nvme0n1 00:29:43.653 05:25:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:43.653 05:25:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:43.653 05:25:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:43.653 05:25:20 -- common/autotest_common.sh@10 -- # set +x 00:29:43.653 05:25:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:43.653 05:25:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:43.653 05:25:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:43.653 05:25:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:43.653 05:25:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:43.653 05:25:20 -- common/autotest_common.sh@10 -- # set +x 00:29:43.653 05:25:20 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:43.653 05:25:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:43.653 05:25:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:29:43.653 05:25:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:43.653 05:25:20 -- host/auth.sh@44 -- # digest=sha384 00:29:43.653 05:25:20 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:43.653 05:25:20 -- host/auth.sh@44 -- # keyid=4 00:29:43.653 05:25:20 -- host/auth.sh@45 -- # key=DHHC-1:03:MDQxZDliNDE3NDAyNTQxOWQ0MDUzYzc3MDg3Y2I1MzE2NjE1ZDI2MjU3ZmQzNzgzM2VmNDBlYzRiNmQ2ZTIwZfSZwh0=: 00:29:43.653 05:25:20 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:29:43.653 05:25:20 -- host/auth.sh@48 -- # echo ffdhe8192 00:29:43.653 05:25:20 -- host/auth.sh@49 -- # echo DHHC-1:03:MDQxZDliNDE3NDAyNTQxOWQ0MDUzYzc3MDg3Y2I1MzE2NjE1ZDI2MjU3ZmQzNzgzM2VmNDBlYzRiNmQ2ZTIwZfSZwh0=: 00:29:43.653 05:25:20 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:29:43.653 05:25:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:43.653 05:25:20 -- host/auth.sh@68 -- # digest=sha384 00:29:43.653 05:25:20 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:29:43.653 05:25:20 -- host/auth.sh@68 -- # keyid=4 00:29:43.653 05:25:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:43.653 05:25:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:43.653 05:25:20 -- common/autotest_common.sh@10 -- # set +x 00:29:43.653 05:25:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:43.653 05:25:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:43.653 05:25:20 -- nvmf/common.sh@717 -- # local ip 00:29:43.653 05:25:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:43.653 05:25:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:43.653 05:25:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:43.653 05:25:20 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:43.653 05:25:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:43.653 05:25:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:43.653 05:25:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:43.653 05:25:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:43.653 05:25:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:43.653 05:25:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:43.653 05:25:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:43.653 05:25:20 -- common/autotest_common.sh@10 -- # set +x 00:29:44.604 nvme0n1 00:29:44.604 05:25:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:44.604 05:25:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:44.604 05:25:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:44.604 05:25:21 -- common/autotest_common.sh@10 -- # set +x 00:29:44.604 05:25:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:44.604 05:25:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:44.604 05:25:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:44.605 05:25:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:44.605 05:25:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:44.605 05:25:21 -- common/autotest_common.sh@10 -- # set +x 00:29:44.605 05:25:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:44.605 05:25:21 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:29:44.605 05:25:21 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:29:44.605 05:25:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:44.605 05:25:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:29:44.605 05:25:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:44.605 05:25:21 -- host/auth.sh@44 -- # digest=sha512 
00:29:44.605 05:25:21 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:44.605 05:25:21 -- host/auth.sh@44 -- # keyid=0 00:29:44.605 05:25:21 -- host/auth.sh@45 -- # key=DHHC-1:00:NzBhMjc1MWUxZGM0NTNiOWU0OWVlMDFkMjgzZmU2M2JE6qme: 00:29:44.605 05:25:21 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:44.605 05:25:21 -- host/auth.sh@48 -- # echo ffdhe2048 00:29:44.605 05:25:21 -- host/auth.sh@49 -- # echo DHHC-1:00:NzBhMjc1MWUxZGM0NTNiOWU0OWVlMDFkMjgzZmU2M2JE6qme: 00:29:44.605 05:25:21 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:29:44.605 05:25:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:44.605 05:25:21 -- host/auth.sh@68 -- # digest=sha512 00:29:44.605 05:25:21 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:29:44.605 05:25:21 -- host/auth.sh@68 -- # keyid=0 00:29:44.605 05:25:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:44.605 05:25:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:44.605 05:25:21 -- common/autotest_common.sh@10 -- # set +x 00:29:44.605 05:25:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:44.605 05:25:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:44.605 05:25:21 -- nvmf/common.sh@717 -- # local ip 00:29:44.605 05:25:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:44.605 05:25:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:44.605 05:25:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:44.605 05:25:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:44.605 05:25:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:44.605 05:25:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:44.605 05:25:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:44.605 05:25:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:44.605 05:25:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:44.605 05:25:21 -- host/auth.sh@70 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:29:44.605 05:25:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:44.605 05:25:21 -- common/autotest_common.sh@10 -- # set +x 00:29:44.865 nvme0n1 00:29:44.865 05:25:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:44.865 05:25:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:44.865 05:25:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:44.865 05:25:21 -- common/autotest_common.sh@10 -- # set +x 00:29:44.865 05:25:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:44.865 05:25:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:44.865 05:25:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:44.865 05:25:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:44.865 05:25:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:44.865 05:25:22 -- common/autotest_common.sh@10 -- # set +x 00:29:44.865 05:25:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:44.865 05:25:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:44.865 05:25:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:29:44.865 05:25:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:44.865 05:25:22 -- host/auth.sh@44 -- # digest=sha512 00:29:44.865 05:25:22 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:44.865 05:25:22 -- host/auth.sh@44 -- # keyid=1 00:29:44.865 05:25:22 -- host/auth.sh@45 -- # key=DHHC-1:00:M2M3MjM0NGVhMDE4ZDJhNWEwNTJmZTA4NGYwMmE3YWEyOWJhMjg1ZWU0YTI0ZWMzw/BoVA==: 00:29:44.865 05:25:22 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:44.865 05:25:22 -- host/auth.sh@48 -- # echo ffdhe2048 00:29:44.865 05:25:22 -- host/auth.sh@49 -- # echo DHHC-1:00:M2M3MjM0NGVhMDE4ZDJhNWEwNTJmZTA4NGYwMmE3YWEyOWJhMjg1ZWU0YTI0ZWMzw/BoVA==: 00:29:44.865 05:25:22 -- host/auth.sh@111 -- # connect_authenticate sha512 
ffdhe2048 1 00:29:44.865 05:25:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:44.865 05:25:22 -- host/auth.sh@68 -- # digest=sha512 00:29:44.865 05:25:22 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:29:44.865 05:25:22 -- host/auth.sh@68 -- # keyid=1 00:29:44.865 05:25:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:44.865 05:25:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:44.865 05:25:22 -- common/autotest_common.sh@10 -- # set +x 00:29:44.865 05:25:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:44.865 05:25:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:44.865 05:25:22 -- nvmf/common.sh@717 -- # local ip 00:29:44.865 05:25:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:44.865 05:25:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:44.865 05:25:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:44.865 05:25:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:44.865 05:25:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:44.865 05:25:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:44.865 05:25:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:44.865 05:25:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:44.865 05:25:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:44.865 05:25:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:29:44.865 05:25:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:44.865 05:25:22 -- common/autotest_common.sh@10 -- # set +x 00:29:45.126 nvme0n1 00:29:45.126 05:25:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:45.126 05:25:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:45.126 05:25:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:45.126 05:25:22 -- 
common/autotest_common.sh@10 -- # set +x 00:29:45.126 05:25:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:45.126 05:25:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:45.126 05:25:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:45.126 05:25:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:45.126 05:25:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:45.126 05:25:22 -- common/autotest_common.sh@10 -- # set +x 00:29:45.126 05:25:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:45.126 05:25:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:45.126 05:25:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:29:45.126 05:25:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:45.126 05:25:22 -- host/auth.sh@44 -- # digest=sha512 00:29:45.126 05:25:22 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:45.126 05:25:22 -- host/auth.sh@44 -- # keyid=2 00:29:45.126 05:25:22 -- host/auth.sh@45 -- # key=DHHC-1:01:NGI5NjEzNmI4MjdlZDRmOGQxM2Y0YWNlNWUzMjEwMDGnQTeK: 00:29:45.126 05:25:22 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:45.126 05:25:22 -- host/auth.sh@48 -- # echo ffdhe2048 00:29:45.126 05:25:22 -- host/auth.sh@49 -- # echo DHHC-1:01:NGI5NjEzNmI4MjdlZDRmOGQxM2Y0YWNlNWUzMjEwMDGnQTeK: 00:29:45.126 05:25:22 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:29:45.126 05:25:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:45.126 05:25:22 -- host/auth.sh@68 -- # digest=sha512 00:29:45.126 05:25:22 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:29:45.126 05:25:22 -- host/auth.sh@68 -- # keyid=2 00:29:45.127 05:25:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:45.127 05:25:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:45.127 05:25:22 -- common/autotest_common.sh@10 -- # set +x 00:29:45.127 05:25:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:29:45.127 05:25:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:45.127 05:25:22 -- nvmf/common.sh@717 -- # local ip 00:29:45.127 05:25:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:45.127 05:25:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:45.127 05:25:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:45.127 05:25:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:45.127 05:25:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:45.127 05:25:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:45.127 05:25:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:45.127 05:25:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:45.127 05:25:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:45.127 05:25:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:45.127 05:25:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:45.127 05:25:22 -- common/autotest_common.sh@10 -- # set +x 00:29:45.422 nvme0n1 00:29:45.422 05:25:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:45.422 05:25:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:45.422 05:25:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:45.422 05:25:22 -- common/autotest_common.sh@10 -- # set +x 00:29:45.422 05:25:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:45.422 05:25:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:45.422 05:25:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:45.422 05:25:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:45.422 05:25:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:45.422 05:25:22 -- common/autotest_common.sh@10 -- # set +x 00:29:45.422 05:25:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:45.422 05:25:22 -- host/auth.sh@109 -- # for 
keyid in "${!keys[@]}" 00:29:45.422 05:25:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:29:45.422 05:25:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:45.422 05:25:22 -- host/auth.sh@44 -- # digest=sha512 00:29:45.422 05:25:22 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:45.422 05:25:22 -- host/auth.sh@44 -- # keyid=3 00:29:45.422 05:25:22 -- host/auth.sh@45 -- # key=DHHC-1:02:NDM0ZGIwODE1NzRmNDZiYjBkNjgwYmMzMzY3MWMwZTk2M2YzYzAzOTM3Y2NmOTVmKgVEjg==: 00:29:45.422 05:25:22 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:45.422 05:25:22 -- host/auth.sh@48 -- # echo ffdhe2048 00:29:45.422 05:25:22 -- host/auth.sh@49 -- # echo DHHC-1:02:NDM0ZGIwODE1NzRmNDZiYjBkNjgwYmMzMzY3MWMwZTk2M2YzYzAzOTM3Y2NmOTVmKgVEjg==: 00:29:45.422 05:25:22 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:29:45.422 05:25:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:45.422 05:25:22 -- host/auth.sh@68 -- # digest=sha512 00:29:45.422 05:25:22 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:29:45.422 05:25:22 -- host/auth.sh@68 -- # keyid=3 00:29:45.422 05:25:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:45.422 05:25:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:45.422 05:25:22 -- common/autotest_common.sh@10 -- # set +x 00:29:45.422 05:25:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:45.422 05:25:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:45.422 05:25:22 -- nvmf/common.sh@717 -- # local ip 00:29:45.422 05:25:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:45.422 05:25:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:45.422 05:25:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:45.422 05:25:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:45.422 05:25:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:45.422 05:25:22 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:29:45.422 05:25:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:45.422 05:25:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:45.422 05:25:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:45.422 05:25:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:29:45.422 05:25:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:45.422 05:25:22 -- common/autotest_common.sh@10 -- # set +x 00:29:45.422 nvme0n1 00:29:45.422 05:25:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:45.422 05:25:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:45.422 05:25:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:45.422 05:25:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:45.422 05:25:22 -- common/autotest_common.sh@10 -- # set +x 00:29:45.422 05:25:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:45.680 05:25:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:45.680 05:25:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:45.680 05:25:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:45.680 05:25:22 -- common/autotest_common.sh@10 -- # set +x 00:29:45.680 05:25:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:45.680 05:25:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:45.680 05:25:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:29:45.680 05:25:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:45.680 05:25:22 -- host/auth.sh@44 -- # digest=sha512 00:29:45.680 05:25:22 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:45.680 05:25:22 -- host/auth.sh@44 -- # keyid=4 00:29:45.680 05:25:22 -- host/auth.sh@45 -- # key=DHHC-1:03:MDQxZDliNDE3NDAyNTQxOWQ0MDUzYzc3MDg3Y2I1MzE2NjE1ZDI2MjU3ZmQzNzgzM2VmNDBlYzRiNmQ2ZTIwZfSZwh0=: 00:29:45.680 05:25:22 -- host/auth.sh@47 
-- # echo 'hmac(sha512)' 00:29:45.680 05:25:22 -- host/auth.sh@48 -- # echo ffdhe2048 00:29:45.680 05:25:22 -- host/auth.sh@49 -- # echo DHHC-1:03:MDQxZDliNDE3NDAyNTQxOWQ0MDUzYzc3MDg3Y2I1MzE2NjE1ZDI2MjU3ZmQzNzgzM2VmNDBlYzRiNmQ2ZTIwZfSZwh0=: 00:29:45.680 05:25:22 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:29:45.680 05:25:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:45.680 05:25:22 -- host/auth.sh@68 -- # digest=sha512 00:29:45.680 05:25:22 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:29:45.680 05:25:22 -- host/auth.sh@68 -- # keyid=4 00:29:45.680 05:25:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:45.680 05:25:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:45.680 05:25:22 -- common/autotest_common.sh@10 -- # set +x 00:29:45.680 05:25:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:45.680 05:25:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:45.680 05:25:22 -- nvmf/common.sh@717 -- # local ip 00:29:45.680 05:25:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:45.680 05:25:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:45.680 05:25:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:45.680 05:25:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:45.680 05:25:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:45.680 05:25:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:45.680 05:25:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:45.680 05:25:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:45.680 05:25:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:45.680 05:25:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:45.680 05:25:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:45.680 05:25:22 -- 
common/autotest_common.sh@10 -- # set +x 00:29:45.680 nvme0n1 00:29:45.680 05:25:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:45.680 05:25:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:45.680 05:25:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:45.680 05:25:22 -- common/autotest_common.sh@10 -- # set +x 00:29:45.680 05:25:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:45.680 05:25:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:45.680 05:25:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:45.680 05:25:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:45.680 05:25:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:45.680 05:25:22 -- common/autotest_common.sh@10 -- # set +x 00:29:45.680 05:25:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:45.680 05:25:22 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:29:45.680 05:25:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:45.680 05:25:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:29:45.680 05:25:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:45.680 05:25:22 -- host/auth.sh@44 -- # digest=sha512 00:29:45.680 05:25:22 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:45.680 05:25:22 -- host/auth.sh@44 -- # keyid=0 00:29:45.680 05:25:22 -- host/auth.sh@45 -- # key=DHHC-1:00:NzBhMjc1MWUxZGM0NTNiOWU0OWVlMDFkMjgzZmU2M2JE6qme: 00:29:45.680 05:25:22 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:45.680 05:25:22 -- host/auth.sh@48 -- # echo ffdhe3072 00:29:45.680 05:25:22 -- host/auth.sh@49 -- # echo DHHC-1:00:NzBhMjc1MWUxZGM0NTNiOWU0OWVlMDFkMjgzZmU2M2JE6qme: 00:29:45.680 05:25:22 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:29:45.681 05:25:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:45.681 05:25:22 -- host/auth.sh@68 -- # digest=sha512 00:29:45.681 05:25:22 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 
00:29:45.681 05:25:22 -- host/auth.sh@68 -- # keyid=0 00:29:45.681 05:25:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:45.681 05:25:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:45.681 05:25:22 -- common/autotest_common.sh@10 -- # set +x 00:29:45.681 05:25:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:45.681 05:25:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:45.681 05:25:22 -- nvmf/common.sh@717 -- # local ip 00:29:45.681 05:25:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:45.681 05:25:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:45.681 05:25:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:45.681 05:25:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:45.681 05:25:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:45.681 05:25:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:45.681 05:25:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:45.681 05:25:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:45.681 05:25:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:45.681 05:25:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:29:45.681 05:25:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:45.681 05:25:22 -- common/autotest_common.sh@10 -- # set +x 00:29:45.938 nvme0n1 00:29:45.938 05:25:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:45.938 05:25:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:45.938 05:25:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:45.938 05:25:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:45.938 05:25:23 -- common/autotest_common.sh@10 -- # set +x 00:29:45.938 05:25:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:45.938 05:25:23 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:45.938 05:25:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:45.938 05:25:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:45.938 05:25:23 -- common/autotest_common.sh@10 -- # set +x 00:29:45.938 05:25:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:45.938 05:25:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:45.938 05:25:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:29:45.938 05:25:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:45.938 05:25:23 -- host/auth.sh@44 -- # digest=sha512 00:29:45.938 05:25:23 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:45.938 05:25:23 -- host/auth.sh@44 -- # keyid=1 00:29:45.938 05:25:23 -- host/auth.sh@45 -- # key=DHHC-1:00:M2M3MjM0NGVhMDE4ZDJhNWEwNTJmZTA4NGYwMmE3YWEyOWJhMjg1ZWU0YTI0ZWMzw/BoVA==: 00:29:45.938 05:25:23 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:45.938 05:25:23 -- host/auth.sh@48 -- # echo ffdhe3072 00:29:45.938 05:25:23 -- host/auth.sh@49 -- # echo DHHC-1:00:M2M3MjM0NGVhMDE4ZDJhNWEwNTJmZTA4NGYwMmE3YWEyOWJhMjg1ZWU0YTI0ZWMzw/BoVA==: 00:29:45.938 05:25:23 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:29:45.938 05:25:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:45.938 05:25:23 -- host/auth.sh@68 -- # digest=sha512 00:29:45.938 05:25:23 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:29:45.938 05:25:23 -- host/auth.sh@68 -- # keyid=1 00:29:45.938 05:25:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:45.938 05:25:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:45.938 05:25:23 -- common/autotest_common.sh@10 -- # set +x 00:29:45.938 05:25:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:45.938 05:25:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:45.938 05:25:23 -- nvmf/common.sh@717 -- # local ip 00:29:45.938 05:25:23 -- 
nvmf/common.sh@718 -- # ip_candidates=() 00:29:45.938 05:25:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:45.938 05:25:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:45.938 05:25:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:45.938 05:25:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:45.938 05:25:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:45.938 05:25:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:45.938 05:25:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:45.938 05:25:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:45.938 05:25:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:29:45.938 05:25:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:45.938 05:25:23 -- common/autotest_common.sh@10 -- # set +x 00:29:46.197 nvme0n1 00:29:46.197 05:25:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:46.197 05:25:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:46.197 05:25:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:46.197 05:25:23 -- common/autotest_common.sh@10 -- # set +x 00:29:46.197 05:25:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:46.197 05:25:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:46.197 05:25:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:46.197 05:25:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:46.197 05:25:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:46.197 05:25:23 -- common/autotest_common.sh@10 -- # set +x 00:29:46.197 05:25:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:46.197 05:25:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:46.197 05:25:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:29:46.197 05:25:23 -- host/auth.sh@42 
-- # local digest dhgroup keyid key 00:29:46.197 05:25:23 -- host/auth.sh@44 -- # digest=sha512 00:29:46.197 05:25:23 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:46.197 05:25:23 -- host/auth.sh@44 -- # keyid=2 00:29:46.197 05:25:23 -- host/auth.sh@45 -- # key=DHHC-1:01:NGI5NjEzNmI4MjdlZDRmOGQxM2Y0YWNlNWUzMjEwMDGnQTeK: 00:29:46.197 05:25:23 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:46.197 05:25:23 -- host/auth.sh@48 -- # echo ffdhe3072 00:29:46.197 05:25:23 -- host/auth.sh@49 -- # echo DHHC-1:01:NGI5NjEzNmI4MjdlZDRmOGQxM2Y0YWNlNWUzMjEwMDGnQTeK: 00:29:46.197 05:25:23 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:29:46.197 05:25:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:46.197 05:25:23 -- host/auth.sh@68 -- # digest=sha512 00:29:46.197 05:25:23 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:29:46.197 05:25:23 -- host/auth.sh@68 -- # keyid=2 00:29:46.197 05:25:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:46.197 05:25:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:46.197 05:25:23 -- common/autotest_common.sh@10 -- # set +x 00:29:46.197 05:25:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:46.197 05:25:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:46.197 05:25:23 -- nvmf/common.sh@717 -- # local ip 00:29:46.197 05:25:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:46.197 05:25:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:46.197 05:25:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:46.197 05:25:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:46.197 05:25:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:46.197 05:25:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:46.197 05:25:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:46.197 05:25:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:46.197 05:25:23 -- nvmf/common.sh@731 
-- # echo 10.0.0.1 00:29:46.197 05:25:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:46.197 05:25:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:46.197 05:25:23 -- common/autotest_common.sh@10 -- # set +x 00:29:46.456 nvme0n1 00:29:46.456 05:25:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:46.456 05:25:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:46.456 05:25:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:46.456 05:25:23 -- common/autotest_common.sh@10 -- # set +x 00:29:46.456 05:25:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:46.456 05:25:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:46.456 05:25:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:46.456 05:25:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:46.456 05:25:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:46.456 05:25:23 -- common/autotest_common.sh@10 -- # set +x 00:29:46.456 05:25:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:46.456 05:25:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:46.456 05:25:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:29:46.456 05:25:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:46.456 05:25:23 -- host/auth.sh@44 -- # digest=sha512 00:29:46.456 05:25:23 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:46.456 05:25:23 -- host/auth.sh@44 -- # keyid=3 00:29:46.456 05:25:23 -- host/auth.sh@45 -- # key=DHHC-1:02:NDM0ZGIwODE1NzRmNDZiYjBkNjgwYmMzMzY3MWMwZTk2M2YzYzAzOTM3Y2NmOTVmKgVEjg==: 00:29:46.456 05:25:23 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:46.456 05:25:23 -- host/auth.sh@48 -- # echo ffdhe3072 00:29:46.456 05:25:23 -- host/auth.sh@49 -- # echo DHHC-1:02:NDM0ZGIwODE1NzRmNDZiYjBkNjgwYmMzMzY3MWMwZTk2M2YzYzAzOTM3Y2NmOTVmKgVEjg==: 
00:29:46.456 05:25:23 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:29:46.456 05:25:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:46.456 05:25:23 -- host/auth.sh@68 -- # digest=sha512 00:29:46.456 05:25:23 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:29:46.456 05:25:23 -- host/auth.sh@68 -- # keyid=3 00:29:46.456 05:25:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:46.456 05:25:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:46.456 05:25:23 -- common/autotest_common.sh@10 -- # set +x 00:29:46.456 05:25:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:46.456 05:25:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:46.456 05:25:23 -- nvmf/common.sh@717 -- # local ip 00:29:46.456 05:25:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:46.456 05:25:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:46.456 05:25:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:46.456 05:25:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:46.456 05:25:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:46.456 05:25:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:46.456 05:25:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:46.456 05:25:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:46.456 05:25:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:46.456 05:25:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:29:46.456 05:25:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:46.456 05:25:23 -- common/autotest_common.sh@10 -- # set +x 00:29:46.716 nvme0n1 00:29:46.716 05:25:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:46.716 05:25:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:46.716 05:25:23 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:29:46.716 05:25:23 -- common/autotest_common.sh@10 -- # set +x 00:29:46.716 05:25:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:46.716 05:25:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:46.716 05:25:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:46.716 05:25:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:46.716 05:25:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:46.716 05:25:23 -- common/autotest_common.sh@10 -- # set +x 00:29:46.716 05:25:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:46.716 05:25:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:46.716 05:25:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:29:46.716 05:25:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:46.716 05:25:23 -- host/auth.sh@44 -- # digest=sha512 00:29:46.716 05:25:23 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:46.716 05:25:23 -- host/auth.sh@44 -- # keyid=4 00:29:46.716 05:25:23 -- host/auth.sh@45 -- # key=DHHC-1:03:MDQxZDliNDE3NDAyNTQxOWQ0MDUzYzc3MDg3Y2I1MzE2NjE1ZDI2MjU3ZmQzNzgzM2VmNDBlYzRiNmQ2ZTIwZfSZwh0=: 00:29:46.716 05:25:23 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:46.716 05:25:23 -- host/auth.sh@48 -- # echo ffdhe3072 00:29:46.716 05:25:23 -- host/auth.sh@49 -- # echo DHHC-1:03:MDQxZDliNDE3NDAyNTQxOWQ0MDUzYzc3MDg3Y2I1MzE2NjE1ZDI2MjU3ZmQzNzgzM2VmNDBlYzRiNmQ2ZTIwZfSZwh0=: 00:29:46.716 05:25:23 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:29:46.716 05:25:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:46.716 05:25:23 -- host/auth.sh@68 -- # digest=sha512 00:29:46.716 05:25:23 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:29:46.716 05:25:23 -- host/auth.sh@68 -- # keyid=4 00:29:46.716 05:25:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:46.716 05:25:23 -- common/autotest_common.sh@549 -- 
# xtrace_disable 00:29:46.716 05:25:23 -- common/autotest_common.sh@10 -- # set +x 00:29:46.716 05:25:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:46.975 05:25:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:46.975 05:25:23 -- nvmf/common.sh@717 -- # local ip 00:29:46.975 05:25:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:46.975 05:25:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:46.975 05:25:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:46.975 05:25:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:46.975 05:25:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:46.975 05:25:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:46.975 05:25:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:46.975 05:25:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:46.975 05:25:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:46.975 05:25:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:46.975 05:25:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:46.975 05:25:23 -- common/autotest_common.sh@10 -- # set +x 00:29:46.975 nvme0n1 00:29:46.975 05:25:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:46.975 05:25:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:46.976 05:25:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:46.976 05:25:24 -- common/autotest_common.sh@10 -- # set +x 00:29:46.976 05:25:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:46.976 05:25:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:46.976 05:25:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:46.976 05:25:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:46.976 05:25:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:46.976 05:25:24 -- 
common/autotest_common.sh@10 -- # set +x 00:29:46.976 05:25:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:46.976 05:25:24 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:29:46.976 05:25:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:46.976 05:25:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:29:46.976 05:25:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:46.976 05:25:24 -- host/auth.sh@44 -- # digest=sha512 00:29:46.976 05:25:24 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:46.976 05:25:24 -- host/auth.sh@44 -- # keyid=0 00:29:46.976 05:25:24 -- host/auth.sh@45 -- # key=DHHC-1:00:NzBhMjc1MWUxZGM0NTNiOWU0OWVlMDFkMjgzZmU2M2JE6qme: 00:29:46.976 05:25:24 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:46.976 05:25:24 -- host/auth.sh@48 -- # echo ffdhe4096 00:29:46.976 05:25:24 -- host/auth.sh@49 -- # echo DHHC-1:00:NzBhMjc1MWUxZGM0NTNiOWU0OWVlMDFkMjgzZmU2M2JE6qme: 00:29:46.976 05:25:24 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:29:46.976 05:25:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:46.976 05:25:24 -- host/auth.sh@68 -- # digest=sha512 00:29:46.976 05:25:24 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:29:46.976 05:25:24 -- host/auth.sh@68 -- # keyid=0 00:29:46.976 05:25:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:46.976 05:25:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:46.976 05:25:24 -- common/autotest_common.sh@10 -- # set +x 00:29:47.234 05:25:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:47.234 05:25:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:47.234 05:25:24 -- nvmf/common.sh@717 -- # local ip 00:29:47.234 05:25:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:47.234 05:25:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:47.234 05:25:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:47.234 
05:25:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:47.234 05:25:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:47.234 05:25:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:47.234 05:25:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:47.234 05:25:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:47.234 05:25:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:47.234 05:25:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:29:47.234 05:25:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:47.234 05:25:24 -- common/autotest_common.sh@10 -- # set +x 00:29:47.493 nvme0n1 00:29:47.493 05:25:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:47.493 05:25:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:47.493 05:25:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:47.493 05:25:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:47.493 05:25:24 -- common/autotest_common.sh@10 -- # set +x 00:29:47.493 05:25:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:47.494 05:25:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:47.494 05:25:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:47.494 05:25:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:47.494 05:25:24 -- common/autotest_common.sh@10 -- # set +x 00:29:47.494 05:25:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:47.494 05:25:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:47.494 05:25:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:29:47.494 05:25:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:47.494 05:25:24 -- host/auth.sh@44 -- # digest=sha512 00:29:47.494 05:25:24 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:47.494 05:25:24 -- host/auth.sh@44 -- # keyid=1 
00:29:47.494 05:25:24 -- host/auth.sh@45 -- # key=DHHC-1:00:M2M3MjM0NGVhMDE4ZDJhNWEwNTJmZTA4NGYwMmE3YWEyOWJhMjg1ZWU0YTI0ZWMzw/BoVA==: 00:29:47.494 05:25:24 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:47.494 05:25:24 -- host/auth.sh@48 -- # echo ffdhe4096 00:29:47.494 05:25:24 -- host/auth.sh@49 -- # echo DHHC-1:00:M2M3MjM0NGVhMDE4ZDJhNWEwNTJmZTA4NGYwMmE3YWEyOWJhMjg1ZWU0YTI0ZWMzw/BoVA==: 00:29:47.494 05:25:24 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 1 00:29:47.494 05:25:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:47.494 05:25:24 -- host/auth.sh@68 -- # digest=sha512 00:29:47.494 05:25:24 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:29:47.494 05:25:24 -- host/auth.sh@68 -- # keyid=1 00:29:47.494 05:25:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:47.494 05:25:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:47.494 05:25:24 -- common/autotest_common.sh@10 -- # set +x 00:29:47.494 05:25:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:47.494 05:25:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:47.494 05:25:24 -- nvmf/common.sh@717 -- # local ip 00:29:47.494 05:25:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:47.494 05:25:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:47.494 05:25:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:47.494 05:25:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:47.494 05:25:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:47.494 05:25:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:47.494 05:25:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:47.494 05:25:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:47.494 05:25:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:47.494 05:25:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:29:47.494 05:25:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:47.494 05:25:24 -- common/autotest_common.sh@10 -- # set +x 00:29:47.754 nvme0n1 00:29:47.754 05:25:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:47.754 05:25:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:47.754 05:25:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:47.754 05:25:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:47.754 05:25:24 -- common/autotest_common.sh@10 -- # set +x 00:29:47.754 05:25:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:47.754 05:25:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:47.754 05:25:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:47.754 05:25:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:47.754 05:25:24 -- common/autotest_common.sh@10 -- # set +x 00:29:47.754 05:25:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:47.754 05:25:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:47.754 05:25:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:29:47.754 05:25:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:47.754 05:25:24 -- host/auth.sh@44 -- # digest=sha512 00:29:47.754 05:25:24 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:47.754 05:25:24 -- host/auth.sh@44 -- # keyid=2 00:29:47.754 05:25:24 -- host/auth.sh@45 -- # key=DHHC-1:01:NGI5NjEzNmI4MjdlZDRmOGQxM2Y0YWNlNWUzMjEwMDGnQTeK: 00:29:47.754 05:25:24 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:47.754 05:25:24 -- host/auth.sh@48 -- # echo ffdhe4096 00:29:47.754 05:25:24 -- host/auth.sh@49 -- # echo DHHC-1:01:NGI5NjEzNmI4MjdlZDRmOGQxM2Y0YWNlNWUzMjEwMDGnQTeK: 00:29:47.754 05:25:24 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:29:47.754 05:25:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:47.754 05:25:24 -- 
host/auth.sh@68 -- # digest=sha512 00:29:47.754 05:25:24 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:29:47.754 05:25:24 -- host/auth.sh@68 -- # keyid=2 00:29:47.754 05:25:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:47.754 05:25:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:47.754 05:25:24 -- common/autotest_common.sh@10 -- # set +x 00:29:47.754 05:25:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:47.754 05:25:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:47.754 05:25:24 -- nvmf/common.sh@717 -- # local ip 00:29:47.755 05:25:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:47.755 05:25:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:47.755 05:25:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:47.755 05:25:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:47.755 05:25:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:47.755 05:25:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:47.755 05:25:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:47.755 05:25:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:47.755 05:25:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:47.755 05:25:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:47.755 05:25:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:47.755 05:25:24 -- common/autotest_common.sh@10 -- # set +x 00:29:48.015 nvme0n1 00:29:48.015 05:25:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:48.015 05:25:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:48.015 05:25:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:48.015 05:25:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:48.015 05:25:25 -- common/autotest_common.sh@10 -- # set +x 
00:29:48.015 05:25:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:48.015 05:25:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:48.015 05:25:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:48.015 05:25:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:48.015 05:25:25 -- common/autotest_common.sh@10 -- # set +x 00:29:48.015 05:25:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:48.015 05:25:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:48.015 05:25:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:29:48.015 05:25:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:48.015 05:25:25 -- host/auth.sh@44 -- # digest=sha512 00:29:48.015 05:25:25 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:48.015 05:25:25 -- host/auth.sh@44 -- # keyid=3 00:29:48.015 05:25:25 -- host/auth.sh@45 -- # key=DHHC-1:02:NDM0ZGIwODE1NzRmNDZiYjBkNjgwYmMzMzY3MWMwZTk2M2YzYzAzOTM3Y2NmOTVmKgVEjg==: 00:29:48.015 05:25:25 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:48.015 05:25:25 -- host/auth.sh@48 -- # echo ffdhe4096 00:29:48.015 05:25:25 -- host/auth.sh@49 -- # echo DHHC-1:02:NDM0ZGIwODE1NzRmNDZiYjBkNjgwYmMzMzY3MWMwZTk2M2YzYzAzOTM3Y2NmOTVmKgVEjg==: 00:29:48.015 05:25:25 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:29:48.015 05:25:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:48.015 05:25:25 -- host/auth.sh@68 -- # digest=sha512 00:29:48.015 05:25:25 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:29:48.015 05:25:25 -- host/auth.sh@68 -- # keyid=3 00:29:48.015 05:25:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:48.015 05:25:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:48.015 05:25:25 -- common/autotest_common.sh@10 -- # set +x 00:29:48.015 05:25:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:48.015 05:25:25 -- host/auth.sh@70 -- # get_main_ns_ip 
00:29:48.015 05:25:25 -- nvmf/common.sh@717 -- # local ip 00:29:48.015 05:25:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:48.015 05:25:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:48.015 05:25:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:48.015 05:25:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:48.015 05:25:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:48.015 05:25:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:48.015 05:25:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:48.015 05:25:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:48.015 05:25:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:48.015 05:25:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:29:48.015 05:25:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:48.015 05:25:25 -- common/autotest_common.sh@10 -- # set +x 00:29:48.585 nvme0n1 00:29:48.585 05:25:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:48.585 05:25:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:48.585 05:25:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:48.585 05:25:25 -- common/autotest_common.sh@10 -- # set +x 00:29:48.585 05:25:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:48.585 05:25:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:48.585 05:25:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:48.585 05:25:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:48.585 05:25:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:48.585 05:25:25 -- common/autotest_common.sh@10 -- # set +x 00:29:48.585 05:25:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:48.585 05:25:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:48.585 05:25:25 -- host/auth.sh@110 
-- # nvmet_auth_set_key sha512 ffdhe4096 4 00:29:48.585 05:25:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:48.585 05:25:25 -- host/auth.sh@44 -- # digest=sha512 00:29:48.585 05:25:25 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:48.585 05:25:25 -- host/auth.sh@44 -- # keyid=4 00:29:48.585 05:25:25 -- host/auth.sh@45 -- # key=DHHC-1:03:MDQxZDliNDE3NDAyNTQxOWQ0MDUzYzc3MDg3Y2I1MzE2NjE1ZDI2MjU3ZmQzNzgzM2VmNDBlYzRiNmQ2ZTIwZfSZwh0=: 00:29:48.585 05:25:25 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:48.585 05:25:25 -- host/auth.sh@48 -- # echo ffdhe4096 00:29:48.585 05:25:25 -- host/auth.sh@49 -- # echo DHHC-1:03:MDQxZDliNDE3NDAyNTQxOWQ0MDUzYzc3MDg3Y2I1MzE2NjE1ZDI2MjU3ZmQzNzgzM2VmNDBlYzRiNmQ2ZTIwZfSZwh0=: 00:29:48.585 05:25:25 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:29:48.585 05:25:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:48.585 05:25:25 -- host/auth.sh@68 -- # digest=sha512 00:29:48.585 05:25:25 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:29:48.585 05:25:25 -- host/auth.sh@68 -- # keyid=4 00:29:48.585 05:25:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:48.585 05:25:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:48.585 05:25:25 -- common/autotest_common.sh@10 -- # set +x 00:29:48.585 05:25:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:48.585 05:25:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:48.585 05:25:25 -- nvmf/common.sh@717 -- # local ip 00:29:48.585 05:25:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:48.585 05:25:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:48.585 05:25:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:48.585 05:25:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:48.585 05:25:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:48.585 05:25:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 
00:29:48.585 05:25:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:48.585 05:25:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:48.585 05:25:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:48.585 05:25:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:48.585 05:25:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:48.585 05:25:25 -- common/autotest_common.sh@10 -- # set +x 00:29:48.845 nvme0n1 00:29:48.845 05:25:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:48.845 05:25:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:48.845 05:25:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:48.845 05:25:25 -- common/autotest_common.sh@10 -- # set +x 00:29:48.845 05:25:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:48.845 05:25:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:48.845 05:25:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:48.845 05:25:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:48.845 05:25:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:48.845 05:25:25 -- common/autotest_common.sh@10 -- # set +x 00:29:48.845 05:25:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:48.845 05:25:25 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:29:48.845 05:25:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:48.845 05:25:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:29:48.845 05:25:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:48.845 05:25:25 -- host/auth.sh@44 -- # digest=sha512 00:29:48.845 05:25:25 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:48.845 05:25:25 -- host/auth.sh@44 -- # keyid=0 00:29:48.845 05:25:25 -- host/auth.sh@45 -- # key=DHHC-1:00:NzBhMjc1MWUxZGM0NTNiOWU0OWVlMDFkMjgzZmU2M2JE6qme: 00:29:48.845 05:25:25 -- 
host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:48.845 05:25:25 -- host/auth.sh@48 -- # echo ffdhe6144 00:29:48.845 05:25:25 -- host/auth.sh@49 -- # echo DHHC-1:00:NzBhMjc1MWUxZGM0NTNiOWU0OWVlMDFkMjgzZmU2M2JE6qme: 00:29:48.845 05:25:25 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:29:48.845 05:25:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:48.845 05:25:25 -- host/auth.sh@68 -- # digest=sha512 00:29:48.845 05:25:25 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:29:48.845 05:25:25 -- host/auth.sh@68 -- # keyid=0 00:29:48.845 05:25:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:48.845 05:25:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:48.845 05:25:25 -- common/autotest_common.sh@10 -- # set +x 00:29:48.845 05:25:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:48.845 05:25:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:48.845 05:25:25 -- nvmf/common.sh@717 -- # local ip 00:29:48.845 05:25:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:48.845 05:25:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:48.845 05:25:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:48.845 05:25:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:48.845 05:25:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:48.845 05:25:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:48.845 05:25:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:48.845 05:25:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:48.845 05:25:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:48.845 05:25:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:29:48.845 05:25:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:48.845 05:25:25 -- common/autotest_common.sh@10 
-- # set +x 00:29:49.410 nvme0n1 00:29:49.410 05:25:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:49.410 05:25:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:49.410 05:25:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:49.410 05:25:26 -- common/autotest_common.sh@10 -- # set +x 00:29:49.410 05:25:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:49.410 05:25:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:49.410 05:25:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:49.410 05:25:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:49.410 05:25:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:49.410 05:25:26 -- common/autotest_common.sh@10 -- # set +x 00:29:49.410 05:25:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:49.410 05:25:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:49.410 05:25:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:29:49.410 05:25:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:49.410 05:25:26 -- host/auth.sh@44 -- # digest=sha512 00:29:49.410 05:25:26 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:49.410 05:25:26 -- host/auth.sh@44 -- # keyid=1 00:29:49.410 05:25:26 -- host/auth.sh@45 -- # key=DHHC-1:00:M2M3MjM0NGVhMDE4ZDJhNWEwNTJmZTA4NGYwMmE3YWEyOWJhMjg1ZWU0YTI0ZWMzw/BoVA==: 00:29:49.410 05:25:26 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:49.410 05:25:26 -- host/auth.sh@48 -- # echo ffdhe6144 00:29:49.410 05:25:26 -- host/auth.sh@49 -- # echo DHHC-1:00:M2M3MjM0NGVhMDE4ZDJhNWEwNTJmZTA4NGYwMmE3YWEyOWJhMjg1ZWU0YTI0ZWMzw/BoVA==: 00:29:49.410 05:25:26 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:29:49.410 05:25:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:49.410 05:25:26 -- host/auth.sh@68 -- # digest=sha512 00:29:49.410 05:25:26 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:29:49.410 05:25:26 -- host/auth.sh@68 -- # keyid=1 00:29:49.410 
05:25:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:49.410 05:25:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:49.410 05:25:26 -- common/autotest_common.sh@10 -- # set +x 00:29:49.410 05:25:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:49.410 05:25:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:49.410 05:25:26 -- nvmf/common.sh@717 -- # local ip 00:29:49.410 05:25:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:49.410 05:25:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:49.410 05:25:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:49.410 05:25:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:49.410 05:25:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:49.410 05:25:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:49.410 05:25:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:49.410 05:25:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:49.410 05:25:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:49.410 05:25:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:29:49.410 05:25:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:49.410 05:25:26 -- common/autotest_common.sh@10 -- # set +x 00:29:49.977 nvme0n1 00:29:49.977 05:25:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:49.977 05:25:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:49.977 05:25:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:49.977 05:25:27 -- common/autotest_common.sh@10 -- # set +x 00:29:49.977 05:25:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:49.977 05:25:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:49.977 05:25:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:49.977 05:25:27 -- 
host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:49.977 05:25:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:49.977 05:25:27 -- common/autotest_common.sh@10 -- # set +x 00:29:49.977 05:25:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:49.977 05:25:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:49.977 05:25:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:29:49.977 05:25:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:49.977 05:25:27 -- host/auth.sh@44 -- # digest=sha512 00:29:49.977 05:25:27 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:49.977 05:25:27 -- host/auth.sh@44 -- # keyid=2 00:29:49.977 05:25:27 -- host/auth.sh@45 -- # key=DHHC-1:01:NGI5NjEzNmI4MjdlZDRmOGQxM2Y0YWNlNWUzMjEwMDGnQTeK: 00:29:49.977 05:25:27 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:49.977 05:25:27 -- host/auth.sh@48 -- # echo ffdhe6144 00:29:49.977 05:25:27 -- host/auth.sh@49 -- # echo DHHC-1:01:NGI5NjEzNmI4MjdlZDRmOGQxM2Y0YWNlNWUzMjEwMDGnQTeK: 00:29:49.977 05:25:27 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:29:49.977 05:25:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:49.977 05:25:27 -- host/auth.sh@68 -- # digest=sha512 00:29:49.977 05:25:27 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:29:49.977 05:25:27 -- host/auth.sh@68 -- # keyid=2 00:29:49.977 05:25:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:49.977 05:25:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:49.977 05:25:27 -- common/autotest_common.sh@10 -- # set +x 00:29:49.977 05:25:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:49.977 05:25:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:49.977 05:25:27 -- nvmf/common.sh@717 -- # local ip 00:29:49.977 05:25:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:49.977 05:25:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:49.977 05:25:27 
-- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:49.977 05:25:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:49.977 05:25:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:49.977 05:25:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:49.977 05:25:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:49.977 05:25:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:49.977 05:25:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:49.977 05:25:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:49.977 05:25:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:49.978 05:25:27 -- common/autotest_common.sh@10 -- # set +x 00:29:50.546 nvme0n1 00:29:50.546 05:25:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:50.546 05:25:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:50.546 05:25:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:50.546 05:25:27 -- common/autotest_common.sh@10 -- # set +x 00:29:50.546 05:25:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:50.546 05:25:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:50.546 05:25:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:50.546 05:25:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:50.546 05:25:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:50.546 05:25:27 -- common/autotest_common.sh@10 -- # set +x 00:29:50.546 05:25:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:50.546 05:25:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:50.546 05:25:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:29:50.546 05:25:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:50.546 05:25:27 -- host/auth.sh@44 -- # digest=sha512 00:29:50.546 05:25:27 -- 
host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:50.546 05:25:27 -- host/auth.sh@44 -- # keyid=3 00:29:50.546 05:25:27 -- host/auth.sh@45 -- # key=DHHC-1:02:NDM0ZGIwODE1NzRmNDZiYjBkNjgwYmMzMzY3MWMwZTk2M2YzYzAzOTM3Y2NmOTVmKgVEjg==: 00:29:50.546 05:25:27 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:50.546 05:25:27 -- host/auth.sh@48 -- # echo ffdhe6144 00:29:50.546 05:25:27 -- host/auth.sh@49 -- # echo DHHC-1:02:NDM0ZGIwODE1NzRmNDZiYjBkNjgwYmMzMzY3MWMwZTk2M2YzYzAzOTM3Y2NmOTVmKgVEjg==: 00:29:50.546 05:25:27 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:29:50.546 05:25:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:50.546 05:25:27 -- host/auth.sh@68 -- # digest=sha512 00:29:50.546 05:25:27 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:29:50.546 05:25:27 -- host/auth.sh@68 -- # keyid=3 00:29:50.546 05:25:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:50.546 05:25:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:50.546 05:25:27 -- common/autotest_common.sh@10 -- # set +x 00:29:50.805 05:25:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:50.805 05:25:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:50.805 05:25:27 -- nvmf/common.sh@717 -- # local ip 00:29:50.805 05:25:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:50.805 05:25:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:50.805 05:25:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:50.805 05:25:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:50.805 05:25:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:50.805 05:25:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:50.805 05:25:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:50.805 05:25:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:50.805 05:25:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:50.805 05:25:27 -- host/auth.sh@70 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:29:50.805 05:25:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:50.805 05:25:27 -- common/autotest_common.sh@10 -- # set +x 00:29:51.374 nvme0n1 00:29:51.374 05:25:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:51.374 05:25:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:51.374 05:25:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:51.374 05:25:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:51.374 05:25:28 -- common/autotest_common.sh@10 -- # set +x 00:29:51.374 05:25:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:51.374 05:25:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:51.374 05:25:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:51.374 05:25:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:51.375 05:25:28 -- common/autotest_common.sh@10 -- # set +x 00:29:51.375 05:25:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:51.375 05:25:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:51.375 05:25:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:29:51.375 05:25:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:51.375 05:25:28 -- host/auth.sh@44 -- # digest=sha512 00:29:51.375 05:25:28 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:51.375 05:25:28 -- host/auth.sh@44 -- # keyid=4 00:29:51.375 05:25:28 -- host/auth.sh@45 -- # key=DHHC-1:03:MDQxZDliNDE3NDAyNTQxOWQ0MDUzYzc3MDg3Y2I1MzE2NjE1ZDI2MjU3ZmQzNzgzM2VmNDBlYzRiNmQ2ZTIwZfSZwh0=: 00:29:51.375 05:25:28 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:51.375 05:25:28 -- host/auth.sh@48 -- # echo ffdhe6144 00:29:51.375 05:25:28 -- host/auth.sh@49 -- # echo DHHC-1:03:MDQxZDliNDE3NDAyNTQxOWQ0MDUzYzc3MDg3Y2I1MzE2NjE1ZDI2MjU3ZmQzNzgzM2VmNDBlYzRiNmQ2ZTIwZfSZwh0=: 00:29:51.375 05:25:28 -- 
host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:29:51.375 05:25:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:51.375 05:25:28 -- host/auth.sh@68 -- # digest=sha512 00:29:51.375 05:25:28 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:29:51.375 05:25:28 -- host/auth.sh@68 -- # keyid=4 00:29:51.375 05:25:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:51.375 05:25:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:51.375 05:25:28 -- common/autotest_common.sh@10 -- # set +x 00:29:51.375 05:25:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:51.375 05:25:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:51.375 05:25:28 -- nvmf/common.sh@717 -- # local ip 00:29:51.375 05:25:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:51.375 05:25:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:51.375 05:25:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:51.375 05:25:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:51.375 05:25:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:51.375 05:25:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:51.375 05:25:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:51.375 05:25:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:51.375 05:25:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:51.375 05:25:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:51.375 05:25:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:51.375 05:25:28 -- common/autotest_common.sh@10 -- # set +x 00:29:51.942 nvme0n1 00:29:51.942 05:25:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:51.942 05:25:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:51.942 05:25:28 -- host/auth.sh@73 -- # jq -r 
'.[].name' 00:29:51.942 05:25:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:51.942 05:25:28 -- common/autotest_common.sh@10 -- # set +x 00:29:51.942 05:25:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:51.942 05:25:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:51.942 05:25:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:51.942 05:25:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:51.942 05:25:28 -- common/autotest_common.sh@10 -- # set +x 00:29:51.942 05:25:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:51.942 05:25:28 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:29:51.942 05:25:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:51.942 05:25:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:29:51.942 05:25:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:51.942 05:25:28 -- host/auth.sh@44 -- # digest=sha512 00:29:51.942 05:25:28 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:51.942 05:25:28 -- host/auth.sh@44 -- # keyid=0 00:29:51.942 05:25:28 -- host/auth.sh@45 -- # key=DHHC-1:00:NzBhMjc1MWUxZGM0NTNiOWU0OWVlMDFkMjgzZmU2M2JE6qme: 00:29:51.942 05:25:28 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:51.942 05:25:28 -- host/auth.sh@48 -- # echo ffdhe8192 00:29:51.942 05:25:28 -- host/auth.sh@49 -- # echo DHHC-1:00:NzBhMjc1MWUxZGM0NTNiOWU0OWVlMDFkMjgzZmU2M2JE6qme: 00:29:51.942 05:25:28 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:29:51.942 05:25:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:51.942 05:25:28 -- host/auth.sh@68 -- # digest=sha512 00:29:51.942 05:25:28 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:29:51.942 05:25:28 -- host/auth.sh@68 -- # keyid=0 00:29:51.942 05:25:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:51.942 05:25:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:51.942 
05:25:28 -- common/autotest_common.sh@10 -- # set +x 00:29:51.942 05:25:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:51.942 05:25:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:51.942 05:25:28 -- nvmf/common.sh@717 -- # local ip 00:29:51.942 05:25:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:51.942 05:25:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:51.942 05:25:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:51.942 05:25:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:51.942 05:25:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:51.942 05:25:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:51.942 05:25:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:51.942 05:25:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:51.942 05:25:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:51.942 05:25:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:29:51.942 05:25:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:51.942 05:25:28 -- common/autotest_common.sh@10 -- # set +x 00:29:52.881 nvme0n1 00:29:52.881 05:25:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:52.881 05:25:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:52.881 05:25:29 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:52.881 05:25:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:52.881 05:25:29 -- common/autotest_common.sh@10 -- # set +x 00:29:52.881 05:25:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:52.881 05:25:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:52.881 05:25:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:52.881 05:25:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:52.881 05:25:29 -- common/autotest_common.sh@10 -- # set +x 
00:29:52.881 05:25:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:52.881 05:25:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:52.881 05:25:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:29:52.881 05:25:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:52.881 05:25:29 -- host/auth.sh@44 -- # digest=sha512 00:29:52.881 05:25:29 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:52.881 05:25:29 -- host/auth.sh@44 -- # keyid=1 00:29:52.881 05:25:29 -- host/auth.sh@45 -- # key=DHHC-1:00:M2M3MjM0NGVhMDE4ZDJhNWEwNTJmZTA4NGYwMmE3YWEyOWJhMjg1ZWU0YTI0ZWMzw/BoVA==: 00:29:52.881 05:25:29 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:52.881 05:25:29 -- host/auth.sh@48 -- # echo ffdhe8192 00:29:52.881 05:25:29 -- host/auth.sh@49 -- # echo DHHC-1:00:M2M3MjM0NGVhMDE4ZDJhNWEwNTJmZTA4NGYwMmE3YWEyOWJhMjg1ZWU0YTI0ZWMzw/BoVA==: 00:29:52.881 05:25:29 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:29:52.881 05:25:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:52.881 05:25:29 -- host/auth.sh@68 -- # digest=sha512 00:29:52.881 05:25:29 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:29:52.881 05:25:29 -- host/auth.sh@68 -- # keyid=1 00:29:52.881 05:25:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:52.881 05:25:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:52.881 05:25:29 -- common/autotest_common.sh@10 -- # set +x 00:29:52.881 05:25:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:52.881 05:25:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:52.881 05:25:30 -- nvmf/common.sh@717 -- # local ip 00:29:52.881 05:25:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:52.881 05:25:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:52.881 05:25:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:52.881 05:25:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:29:52.881 05:25:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:52.881 05:25:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:52.881 05:25:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:52.881 05:25:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:52.881 05:25:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:52.881 05:25:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:29:52.881 05:25:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:52.881 05:25:30 -- common/autotest_common.sh@10 -- # set +x 00:29:53.816 nvme0n1 00:29:53.816 05:25:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:53.816 05:25:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:53.816 05:25:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:53.816 05:25:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:53.816 05:25:30 -- common/autotest_common.sh@10 -- # set +x 00:29:53.816 05:25:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:53.816 05:25:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:53.816 05:25:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:53.816 05:25:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:53.816 05:25:31 -- common/autotest_common.sh@10 -- # set +x 00:29:53.816 05:25:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:53.816 05:25:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:53.816 05:25:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:29:53.816 05:25:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:53.816 05:25:31 -- host/auth.sh@44 -- # digest=sha512 00:29:53.816 05:25:31 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:53.816 05:25:31 -- host/auth.sh@44 -- # keyid=2 00:29:53.816 05:25:31 -- host/auth.sh@45 -- # 
key=DHHC-1:01:NGI5NjEzNmI4MjdlZDRmOGQxM2Y0YWNlNWUzMjEwMDGnQTeK: 00:29:53.816 05:25:31 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:53.816 05:25:31 -- host/auth.sh@48 -- # echo ffdhe8192 00:29:53.816 05:25:31 -- host/auth.sh@49 -- # echo DHHC-1:01:NGI5NjEzNmI4MjdlZDRmOGQxM2Y0YWNlNWUzMjEwMDGnQTeK: 00:29:53.816 05:25:31 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:29:53.816 05:25:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:53.816 05:25:31 -- host/auth.sh@68 -- # digest=sha512 00:29:53.816 05:25:31 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:29:53.816 05:25:31 -- host/auth.sh@68 -- # keyid=2 00:29:53.816 05:25:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:53.816 05:25:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:53.816 05:25:31 -- common/autotest_common.sh@10 -- # set +x 00:29:53.816 05:25:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:53.816 05:25:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:53.816 05:25:31 -- nvmf/common.sh@717 -- # local ip 00:29:53.816 05:25:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:53.816 05:25:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:53.816 05:25:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:53.816 05:25:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:53.816 05:25:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:53.816 05:25:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:53.816 05:25:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:53.816 05:25:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:53.816 05:25:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:53.816 05:25:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:53.816 05:25:31 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:29:53.816 05:25:31 -- common/autotest_common.sh@10 -- # set +x 00:29:54.754 nvme0n1 00:29:54.754 05:25:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:54.754 05:25:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:54.754 05:25:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:54.754 05:25:32 -- common/autotest_common.sh@10 -- # set +x 00:29:54.754 05:25:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:55.013 05:25:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:55.013 05:25:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:55.013 05:25:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:55.013 05:25:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:55.013 05:25:32 -- common/autotest_common.sh@10 -- # set +x 00:29:55.013 05:25:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:55.013 05:25:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:55.013 05:25:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:29:55.013 05:25:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:55.013 05:25:32 -- host/auth.sh@44 -- # digest=sha512 00:29:55.013 05:25:32 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:55.013 05:25:32 -- host/auth.sh@44 -- # keyid=3 00:29:55.013 05:25:32 -- host/auth.sh@45 -- # key=DHHC-1:02:NDM0ZGIwODE1NzRmNDZiYjBkNjgwYmMzMzY3MWMwZTk2M2YzYzAzOTM3Y2NmOTVmKgVEjg==: 00:29:55.013 05:25:32 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:55.013 05:25:32 -- host/auth.sh@48 -- # echo ffdhe8192 00:29:55.013 05:25:32 -- host/auth.sh@49 -- # echo DHHC-1:02:NDM0ZGIwODE1NzRmNDZiYjBkNjgwYmMzMzY3MWMwZTk2M2YzYzAzOTM3Y2NmOTVmKgVEjg==: 00:29:55.013 05:25:32 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:29:55.013 05:25:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:55.013 05:25:32 -- host/auth.sh@68 -- # digest=sha512 00:29:55.013 05:25:32 -- 
host/auth.sh@68 -- # dhgroup=ffdhe8192 00:29:55.013 05:25:32 -- host/auth.sh@68 -- # keyid=3 00:29:55.013 05:25:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:55.013 05:25:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:55.013 05:25:32 -- common/autotest_common.sh@10 -- # set +x 00:29:55.013 05:25:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:55.013 05:25:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:55.013 05:25:32 -- nvmf/common.sh@717 -- # local ip 00:29:55.013 05:25:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:55.013 05:25:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:55.013 05:25:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:55.013 05:25:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:55.013 05:25:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:55.013 05:25:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:55.013 05:25:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:55.013 05:25:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:55.013 05:25:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:55.013 05:25:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:29:55.013 05:25:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:55.013 05:25:32 -- common/autotest_common.sh@10 -- # set +x 00:29:55.947 nvme0n1 00:29:55.947 05:25:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:55.947 05:25:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:55.947 05:25:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:55.947 05:25:33 -- common/autotest_common.sh@10 -- # set +x 00:29:55.947 05:25:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:55.947 05:25:33 -- common/autotest_common.sh@577 -- # [[ 0 == 
0 ]] 00:29:55.947 05:25:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:55.947 05:25:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:55.947 05:25:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:55.947 05:25:33 -- common/autotest_common.sh@10 -- # set +x 00:29:55.947 05:25:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:55.947 05:25:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:55.947 05:25:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:29:55.947 05:25:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:55.947 05:25:33 -- host/auth.sh@44 -- # digest=sha512 00:29:55.947 05:25:33 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:55.947 05:25:33 -- host/auth.sh@44 -- # keyid=4 00:29:55.947 05:25:33 -- host/auth.sh@45 -- # key=DHHC-1:03:MDQxZDliNDE3NDAyNTQxOWQ0MDUzYzc3MDg3Y2I1MzE2NjE1ZDI2MjU3ZmQzNzgzM2VmNDBlYzRiNmQ2ZTIwZfSZwh0=: 00:29:55.947 05:25:33 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:55.947 05:25:33 -- host/auth.sh@48 -- # echo ffdhe8192 00:29:55.947 05:25:33 -- host/auth.sh@49 -- # echo DHHC-1:03:MDQxZDliNDE3NDAyNTQxOWQ0MDUzYzc3MDg3Y2I1MzE2NjE1ZDI2MjU3ZmQzNzgzM2VmNDBlYzRiNmQ2ZTIwZfSZwh0=: 00:29:55.947 05:25:33 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:29:55.947 05:25:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:55.947 05:25:33 -- host/auth.sh@68 -- # digest=sha512 00:29:55.947 05:25:33 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:29:55.947 05:25:33 -- host/auth.sh@68 -- # keyid=4 00:29:55.947 05:25:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:55.947 05:25:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:55.947 05:25:33 -- common/autotest_common.sh@10 -- # set +x 00:29:55.947 05:25:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:55.947 05:25:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:55.947 05:25:33 -- 
nvmf/common.sh@717 -- # local ip 00:29:55.947 05:25:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:55.947 05:25:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:55.947 05:25:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:55.947 05:25:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:55.947 05:25:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:55.947 05:25:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:55.947 05:25:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:55.947 05:25:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:55.947 05:25:33 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:55.947 05:25:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:55.947 05:25:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:55.947 05:25:33 -- common/autotest_common.sh@10 -- # set +x 00:29:56.883 nvme0n1 00:29:56.883 05:25:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:56.883 05:25:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:56.883 05:25:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:56.883 05:25:34 -- common/autotest_common.sh@10 -- # set +x 00:29:56.883 05:25:34 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:56.883 05:25:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:56.883 05:25:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:56.883 05:25:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:56.883 05:25:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:56.883 05:25:34 -- common/autotest_common.sh@10 -- # set +x 00:29:56.883 05:25:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:56.883 05:25:34 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:56.883 05:25:34 -- host/auth.sh@42 -- # local 
digest dhgroup keyid key 00:29:56.883 05:25:34 -- host/auth.sh@44 -- # digest=sha256 00:29:56.883 05:25:34 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:56.883 05:25:34 -- host/auth.sh@44 -- # keyid=1 00:29:56.883 05:25:34 -- host/auth.sh@45 -- # key=DHHC-1:00:M2M3MjM0NGVhMDE4ZDJhNWEwNTJmZTA4NGYwMmE3YWEyOWJhMjg1ZWU0YTI0ZWMzw/BoVA==: 00:29:56.883 05:25:34 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:29:56.883 05:25:34 -- host/auth.sh@48 -- # echo ffdhe2048 00:29:56.883 05:25:34 -- host/auth.sh@49 -- # echo DHHC-1:00:M2M3MjM0NGVhMDE4ZDJhNWEwNTJmZTA4NGYwMmE3YWEyOWJhMjg1ZWU0YTI0ZWMzw/BoVA==: 00:29:56.883 05:25:34 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:56.883 05:25:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:56.883 05:25:34 -- common/autotest_common.sh@10 -- # set +x 00:29:56.883 05:25:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:56.883 05:25:34 -- host/auth.sh@119 -- # get_main_ns_ip 00:29:56.883 05:25:34 -- nvmf/common.sh@717 -- # local ip 00:29:56.883 05:25:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:56.883 05:25:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:56.883 05:25:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:56.883 05:25:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:56.883 05:25:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:56.883 05:25:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:56.883 05:25:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:56.883 05:25:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:56.883 05:25:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:56.883 05:25:34 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:56.883 05:25:34 -- common/autotest_common.sh@638 -- # local es=0 00:29:56.883 
05:25:34 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:56.883 05:25:34 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:29:56.883 05:25:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:56.883 05:25:34 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:29:56.883 05:25:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:56.883 05:25:34 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:56.883 05:25:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:56.883 05:25:34 -- common/autotest_common.sh@10 -- # set +x 00:29:56.883 request: 00:29:56.883 { 00:29:56.883 "name": "nvme0", 00:29:56.883 "trtype": "tcp", 00:29:56.883 "traddr": "10.0.0.1", 00:29:56.883 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:56.883 "adrfam": "ipv4", 00:29:56.883 "trsvcid": "4420", 00:29:56.883 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:56.883 "method": "bdev_nvme_attach_controller", 00:29:56.883 "req_id": 1 00:29:56.883 } 00:29:56.883 Got JSON-RPC error response 00:29:56.883 response: 00:29:56.883 { 00:29:56.883 "code": -32602, 00:29:56.883 "message": "Invalid parameters" 00:29:56.883 } 00:29:56.883 05:25:34 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:29:56.883 05:25:34 -- common/autotest_common.sh@641 -- # es=1 00:29:56.883 05:25:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:56.884 05:25:34 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:56.884 05:25:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:56.884 05:25:34 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:29:56.884 05:25:34 -- host/auth.sh@121 -- # jq length 00:29:56.884 05:25:34 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:29:56.884 05:25:34 -- common/autotest_common.sh@10 -- # set +x 00:29:56.884 05:25:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:57.142 05:25:34 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:29:57.142 05:25:34 -- host/auth.sh@124 -- # get_main_ns_ip 00:29:57.142 05:25:34 -- nvmf/common.sh@717 -- # local ip 00:29:57.142 05:25:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:57.142 05:25:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:57.142 05:25:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:57.142 05:25:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:57.142 05:25:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:57.142 05:25:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:57.142 05:25:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:57.142 05:25:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:57.142 05:25:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:57.142 05:25:34 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:57.142 05:25:34 -- common/autotest_common.sh@638 -- # local es=0 00:29:57.142 05:25:34 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:57.142 05:25:34 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:29:57.142 05:25:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:57.142 05:25:34 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:29:57.142 05:25:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:57.142 05:25:34 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 
-n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:57.142 05:25:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:57.142 05:25:34 -- common/autotest_common.sh@10 -- # set +x 00:29:57.142 request: 00:29:57.142 { 00:29:57.142 "name": "nvme0", 00:29:57.142 "trtype": "tcp", 00:29:57.142 "traddr": "10.0.0.1", 00:29:57.142 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:57.142 "adrfam": "ipv4", 00:29:57.142 "trsvcid": "4420", 00:29:57.142 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:57.142 "dhchap_key": "key2", 00:29:57.142 "method": "bdev_nvme_attach_controller", 00:29:57.142 "req_id": 1 00:29:57.142 } 00:29:57.142 Got JSON-RPC error response 00:29:57.142 response: 00:29:57.142 { 00:29:57.142 "code": -32602, 00:29:57.142 "message": "Invalid parameters" 00:29:57.142 } 00:29:57.142 05:25:34 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:29:57.142 05:25:34 -- common/autotest_common.sh@641 -- # es=1 00:29:57.142 05:25:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:57.142 05:25:34 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:57.142 05:25:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:57.142 05:25:34 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:29:57.142 05:25:34 -- host/auth.sh@127 -- # jq length 00:29:57.142 05:25:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:57.142 05:25:34 -- common/autotest_common.sh@10 -- # set +x 00:29:57.142 05:25:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:57.142 05:25:34 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:29:57.142 05:25:34 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:29:57.142 05:25:34 -- host/auth.sh@130 -- # cleanup 00:29:57.142 05:25:34 -- host/auth.sh@24 -- # nvmftestfini 00:29:57.142 05:25:34 -- nvmf/common.sh@477 -- # nvmfcleanup 00:29:57.142 05:25:34 -- nvmf/common.sh@117 -- # sync 00:29:57.142 05:25:34 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:57.142 05:25:34 -- nvmf/common.sh@120 -- # set +e 00:29:57.142 
05:25:34 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:57.142 05:25:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:57.142 rmmod nvme_tcp 00:29:57.142 rmmod nvme_fabrics 00:29:57.142 05:25:34 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:57.142 05:25:34 -- nvmf/common.sh@124 -- # set -e 00:29:57.142 05:25:34 -- nvmf/common.sh@125 -- # return 0 00:29:57.142 05:25:34 -- nvmf/common.sh@478 -- # '[' -n 2002231 ']' 00:29:57.142 05:25:34 -- nvmf/common.sh@479 -- # killprocess 2002231 00:29:57.142 05:25:34 -- common/autotest_common.sh@936 -- # '[' -z 2002231 ']' 00:29:57.142 05:25:34 -- common/autotest_common.sh@940 -- # kill -0 2002231 00:29:57.142 05:25:34 -- common/autotest_common.sh@941 -- # uname 00:29:57.142 05:25:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:57.142 05:25:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2002231 00:29:57.142 05:25:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:57.142 05:25:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:57.142 05:25:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2002231' 00:29:57.142 killing process with pid 2002231 00:29:57.142 05:25:34 -- common/autotest_common.sh@955 -- # kill 2002231 00:29:57.142 05:25:34 -- common/autotest_common.sh@960 -- # wait 2002231 00:29:57.402 05:25:34 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:29:57.402 05:25:34 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:57.402 05:25:34 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:57.402 05:25:34 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:57.402 05:25:34 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:57.402 05:25:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:57.402 05:25:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:57.402 05:25:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:29:59.960 05:25:36 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:59.960 05:25:36 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:59.960 05:25:36 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:59.960 05:25:36 -- host/auth.sh@27 -- # clean_kernel_target 00:29:59.960 05:25:36 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:29:59.960 05:25:36 -- nvmf/common.sh@675 -- # echo 0 00:29:59.960 05:25:36 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:59.960 05:25:36 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:59.960 05:25:36 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:59.960 05:25:36 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:59.960 05:25:36 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:29:59.960 05:25:36 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:29:59.960 05:25:36 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:00.527 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:30:00.785 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:30:00.785 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:30:00.785 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:30:00.785 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:30:00.785 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:30:00.785 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:30:00.785 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:30:00.785 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:30:00.785 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:30:00.785 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:30:00.785 0000:80:04.4 
(8086 0e24): ioatdma -> vfio-pci 00:30:00.785 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:30:00.785 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:30:00.785 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:30:00.785 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:30:01.719 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:30:01.719 05:25:38 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.qT0 /tmp/spdk.key-null.FX4 /tmp/spdk.key-sha256.G31 /tmp/spdk.key-sha384.vK3 /tmp/spdk.key-sha512.72M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:30:01.719 05:25:38 -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:03.095 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:30:03.095 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:30:03.095 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:30:03.095 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:30:03.095 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:30:03.095 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:30:03.095 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:30:03.095 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:30:03.095 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:30:03.095 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:30:03.095 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:30:03.095 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:30:03.095 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:30:03.095 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:30:03.095 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:30:03.095 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:30:03.095 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:30:03.095 00:30:03.095 real 0m48.902s 00:30:03.095 user 0m46.715s 
00:30:03.095 sys 0m5.614s 00:30:03.095 05:25:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:03.095 05:25:40 -- common/autotest_common.sh@10 -- # set +x 00:30:03.095 ************************************ 00:30:03.095 END TEST nvmf_auth 00:30:03.095 ************************************ 00:30:03.095 05:25:40 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]] 00:30:03.095 05:25:40 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:03.095 05:25:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:30:03.095 05:25:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:03.095 05:25:40 -- common/autotest_common.sh@10 -- # set +x 00:30:03.095 ************************************ 00:30:03.095 START TEST nvmf_digest 00:30:03.095 ************************************ 00:30:03.095 05:25:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:03.095 * Looking for test storage... 
00:30:03.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:03.095 05:25:40 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:03.354 05:25:40 -- nvmf/common.sh@7 -- # uname -s 00:30:03.354 05:25:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:03.354 05:25:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:03.354 05:25:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:03.354 05:25:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:03.354 05:25:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:03.354 05:25:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:03.354 05:25:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:03.354 05:25:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:03.354 05:25:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:03.354 05:25:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:03.354 05:25:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:03.354 05:25:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:03.354 05:25:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:03.354 05:25:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:03.354 05:25:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:03.354 05:25:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:03.354 05:25:40 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:03.354 05:25:40 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:03.354 05:25:40 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:03.354 05:25:40 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:03.354 05:25:40 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.354 05:25:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.354 05:25:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.354 05:25:40 -- paths/export.sh@5 -- # export PATH 00:30:03.354 05:25:40 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.354 05:25:40 -- nvmf/common.sh@47 -- # : 0 00:30:03.354 05:25:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:03.354 05:25:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:03.354 05:25:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:03.354 05:25:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:03.354 05:25:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:03.354 05:25:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:03.354 05:25:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:03.354 05:25:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:03.354 05:25:40 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:03.354 05:25:40 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:30:03.354 05:25:40 -- host/digest.sh@16 -- # runtime=2 00:30:03.354 05:25:40 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:30:03.354 05:25:40 -- host/digest.sh@138 -- # nvmftestinit 00:30:03.354 05:25:40 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:30:03.354 05:25:40 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:03.354 05:25:40 -- nvmf/common.sh@437 -- # prepare_net_devs 00:30:03.354 05:25:40 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:30:03.354 05:25:40 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:30:03.354 05:25:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:03.354 05:25:40 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:30:03.354 05:25:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:03.354 05:25:40 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:30:03.354 05:25:40 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:30:03.354 05:25:40 -- nvmf/common.sh@285 -- # xtrace_disable 00:30:03.354 05:25:40 -- common/autotest_common.sh@10 -- # set +x 00:30:05.262 05:25:42 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:05.262 05:25:42 -- nvmf/common.sh@291 -- # pci_devs=() 00:30:05.262 05:25:42 -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:05.262 05:25:42 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:05.262 05:25:42 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:05.262 05:25:42 -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:05.262 05:25:42 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:05.262 05:25:42 -- nvmf/common.sh@295 -- # net_devs=() 00:30:05.262 05:25:42 -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:05.262 05:25:42 -- nvmf/common.sh@296 -- # e810=() 00:30:05.262 05:25:42 -- nvmf/common.sh@296 -- # local -ga e810 00:30:05.262 05:25:42 -- nvmf/common.sh@297 -- # x722=() 00:30:05.262 05:25:42 -- nvmf/common.sh@297 -- # local -ga x722 00:30:05.262 05:25:42 -- nvmf/common.sh@298 -- # mlx=() 00:30:05.262 05:25:42 -- nvmf/common.sh@298 -- # local -ga mlx 00:30:05.262 05:25:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:05.262 05:25:42 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:05.262 05:25:42 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:05.262 05:25:42 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:05.262 05:25:42 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:05.262 05:25:42 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:05.262 05:25:42 -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:05.262 05:25:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:05.262 05:25:42 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:05.262 05:25:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:05.262 05:25:42 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:05.262 05:25:42 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:05.262 05:25:42 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:05.262 05:25:42 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:05.262 05:25:42 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:05.262 05:25:42 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:05.262 05:25:42 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:05.262 05:25:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:05.262 05:25:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:05.262 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:05.262 05:25:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:05.262 05:25:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:05.262 05:25:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:05.262 05:25:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:05.262 05:25:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:05.262 05:25:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:05.262 05:25:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:05.262 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:05.262 05:25:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:05.262 05:25:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:05.262 05:25:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:05.262 05:25:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:05.262 05:25:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:30:05.262 05:25:42 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:05.262 05:25:42 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:05.262 05:25:42 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:05.262 05:25:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:05.262 05:25:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.262 05:25:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:30:05.262 05:25:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.262 05:25:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:05.262 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:05.262 05:25:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.262 05:25:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:05.263 05:25:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.263 05:25:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:30:05.263 05:25:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.263 05:25:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:05.263 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:05.263 05:25:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.263 05:25:42 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:30:05.263 05:25:42 -- nvmf/common.sh@403 -- # is_hw=yes 00:30:05.263 05:25:42 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:30:05.263 05:25:42 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:30:05.263 05:25:42 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:30:05.263 05:25:42 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:05.263 05:25:42 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:05.263 05:25:42 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:05.263 05:25:42 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:05.263 05:25:42 -- nvmf/common.sh@236 -- 
# NVMF_TARGET_INTERFACE=cvl_0_0 00:30:05.263 05:25:42 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:05.263 05:25:42 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:05.263 05:25:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:05.263 05:25:42 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:05.263 05:25:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:05.263 05:25:42 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:05.263 05:25:42 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:05.263 05:25:42 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:05.263 05:25:42 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:05.263 05:25:42 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:05.263 05:25:42 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:05.263 05:25:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:05.263 05:25:42 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:05.263 05:25:42 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:05.263 05:25:42 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:05.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:05.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:30:05.263 00:30:05.263 --- 10.0.0.2 ping statistics --- 00:30:05.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.263 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:30:05.263 05:25:42 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:05.263 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:05.263 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:30:05.263 00:30:05.263 --- 10.0.0.1 ping statistics --- 00:30:05.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.263 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:30:05.263 05:25:42 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:05.263 05:25:42 -- nvmf/common.sh@411 -- # return 0 00:30:05.263 05:25:42 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:30:05.263 05:25:42 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:05.263 05:25:42 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:30:05.263 05:25:42 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:30:05.263 05:25:42 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:05.263 05:25:42 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:30:05.263 05:25:42 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:30:05.263 05:25:42 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:05.263 05:25:42 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:30:05.263 05:25:42 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:30:05.263 05:25:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:05.263 05:25:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:05.263 05:25:42 -- common/autotest_common.sh@10 -- # set +x 00:30:05.525 ************************************ 00:30:05.525 START TEST nvmf_digest_clean 00:30:05.525 ************************************ 00:30:05.525 05:25:42 -- common/autotest_common.sh@1111 -- # run_digest 00:30:05.525 05:25:42 -- host/digest.sh@120 -- # local dsa_initiator 00:30:05.525 05:25:42 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:30:05.525 05:25:42 -- host/digest.sh@121 -- # dsa_initiator=false 00:30:05.525 05:25:42 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:30:05.525 05:25:42 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:30:05.525 05:25:42 -- 
nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:30:05.525 05:25:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:05.525 05:25:42 -- common/autotest_common.sh@10 -- # set +x 00:30:05.525 05:25:42 -- nvmf/common.sh@470 -- # nvmfpid=2011538 00:30:05.525 05:25:42 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:05.525 05:25:42 -- nvmf/common.sh@471 -- # waitforlisten 2011538 00:30:05.525 05:25:42 -- common/autotest_common.sh@817 -- # '[' -z 2011538 ']' 00:30:05.525 05:25:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:05.525 05:25:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:05.525 05:25:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:05.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:05.525 05:25:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:05.525 05:25:42 -- common/autotest_common.sh@10 -- # set +x 00:30:05.525 [2024-04-24 05:25:42.641661] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:30:05.525 [2024-04-24 05:25:42.641745] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:05.525 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.525 [2024-04-24 05:25:42.681200] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:30:05.525 [2024-04-24 05:25:42.712793] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:05.782 [2024-04-24 05:25:42.804732] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:05.782 [2024-04-24 05:25:42.804793] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:05.782 [2024-04-24 05:25:42.804820] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:05.782 [2024-04-24 05:25:42.804833] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:05.782 [2024-04-24 05:25:42.804845] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:05.782 [2024-04-24 05:25:42.804881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:05.782 05:25:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:05.782 05:25:42 -- common/autotest_common.sh@850 -- # return 0 00:30:05.782 05:25:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:30:05.782 05:25:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:05.782 05:25:42 -- common/autotest_common.sh@10 -- # set +x 00:30:05.782 05:25:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:05.782 05:25:42 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:30:05.783 05:25:42 -- host/digest.sh@126 -- # common_target_config 00:30:05.783 05:25:42 -- host/digest.sh@43 -- # rpc_cmd 00:30:05.783 05:25:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:05.783 05:25:42 -- common/autotest_common.sh@10 -- # set +x 00:30:05.783 null0 00:30:05.783 [2024-04-24 05:25:42.961841] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:05.783 [2024-04-24 05:25:42.986063] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:05.783 
05:25:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:05.783 05:25:42 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:30:05.783 05:25:42 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:05.783 05:25:42 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:05.783 05:25:42 -- host/digest.sh@80 -- # rw=randread 00:30:05.783 05:25:42 -- host/digest.sh@80 -- # bs=4096 00:30:05.783 05:25:42 -- host/digest.sh@80 -- # qd=128 00:30:05.783 05:25:42 -- host/digest.sh@80 -- # scan_dsa=false 00:30:05.783 05:25:42 -- host/digest.sh@83 -- # bperfpid=2011565 00:30:05.783 05:25:42 -- host/digest.sh@84 -- # waitforlisten 2011565 /var/tmp/bperf.sock 00:30:05.783 05:25:42 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:05.783 05:25:42 -- common/autotest_common.sh@817 -- # '[' -z 2011565 ']' 00:30:05.783 05:25:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:05.783 05:25:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:05.783 05:25:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:05.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:05.783 05:25:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:05.783 05:25:42 -- common/autotest_common.sh@10 -- # set +x 00:30:05.783 [2024-04-24 05:25:43.034711] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 
00:30:05.783 [2024-04-24 05:25:43.034779] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2011565 ] 00:30:06.041 EAL: No free 2048 kB hugepages reported on node 1 00:30:06.041 [2024-04-24 05:25:43.068198] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:06.041 [2024-04-24 05:25:43.097995] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.041 [2024-04-24 05:25:43.186535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:06.041 05:25:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:06.041 05:25:43 -- common/autotest_common.sh@850 -- # return 0 00:30:06.041 05:25:43 -- host/digest.sh@86 -- # false 00:30:06.041 05:25:43 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:06.041 05:25:43 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:06.299 05:25:43 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:06.299 05:25:43 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:06.865 nvme0n1 00:30:06.865 05:25:43 -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:06.865 05:25:43 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:06.865 Running I/O for 2 seconds... 
00:30:08.770 00:30:08.770 Latency(us) 00:30:08.770 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:08.770 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:08.770 nvme0n1 : 2.00 18826.42 73.54 0.00 0.00 6790.88 3252.53 14757.74 00:30:08.770 =================================================================================================================== 00:30:08.770 Total : 18826.42 73.54 0.00 0.00 6790.88 3252.53 14757.74 00:30:08.770 0 00:30:08.770 05:25:46 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:08.770 05:25:46 -- host/digest.sh@93 -- # get_accel_stats 00:30:08.770 05:25:46 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:08.770 05:25:46 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:08.770 | select(.opcode=="crc32c") 00:30:08.770 | "\(.module_name) \(.executed)"' 00:30:08.770 05:25:46 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:09.029 05:25:46 -- host/digest.sh@94 -- # false 00:30:09.029 05:25:46 -- host/digest.sh@94 -- # exp_module=software 00:30:09.029 05:25:46 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:09.029 05:25:46 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:09.029 05:25:46 -- host/digest.sh@98 -- # killprocess 2011565 00:30:09.029 05:25:46 -- common/autotest_common.sh@936 -- # '[' -z 2011565 ']' 00:30:09.029 05:25:46 -- common/autotest_common.sh@940 -- # kill -0 2011565 00:30:09.029 05:25:46 -- common/autotest_common.sh@941 -- # uname 00:30:09.029 05:25:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:09.029 05:25:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2011565 00:30:09.029 05:25:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:30:09.288 05:25:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:30:09.288 05:25:46 -- common/autotest_common.sh@954 -- # 
echo 'killing process with pid 2011565' 00:30:09.288 killing process with pid 2011565 00:30:09.288 05:25:46 -- common/autotest_common.sh@955 -- # kill 2011565 00:30:09.288 Received shutdown signal, test time was about 2.000000 seconds 00:30:09.288 00:30:09.288 Latency(us) 00:30:09.288 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:09.288 =================================================================================================================== 00:30:09.288 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:09.288 05:25:46 -- common/autotest_common.sh@960 -- # wait 2011565 00:30:09.288 05:25:46 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:30:09.288 05:25:46 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:09.288 05:25:46 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:09.288 05:25:46 -- host/digest.sh@80 -- # rw=randread 00:30:09.288 05:25:46 -- host/digest.sh@80 -- # bs=131072 00:30:09.288 05:25:46 -- host/digest.sh@80 -- # qd=16 00:30:09.288 05:25:46 -- host/digest.sh@80 -- # scan_dsa=false 00:30:09.288 05:25:46 -- host/digest.sh@83 -- # bperfpid=2011972 00:30:09.288 05:25:46 -- host/digest.sh@84 -- # waitforlisten 2011972 /var/tmp/bperf.sock 00:30:09.288 05:25:46 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:09.288 05:25:46 -- common/autotest_common.sh@817 -- # '[' -z 2011972 ']' 00:30:09.288 05:25:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:09.288 05:25:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:09.288 05:25:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:09.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:30:09.288 05:25:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:09.288 05:25:46 -- common/autotest_common.sh@10 -- # set +x 00:30:09.547 [2024-04-24 05:25:46.564313] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:30:09.547 [2024-04-24 05:25:46.564404] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2011972 ] 00:30:09.547 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:09.547 Zero copy mechanism will not be used. 00:30:09.547 EAL: No free 2048 kB hugepages reported on node 1 00:30:09.547 [2024-04-24 05:25:46.596027] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:09.547 [2024-04-24 05:25:46.627026] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:09.547 [2024-04-24 05:25:46.712634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:09.547 05:25:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:09.547 05:25:46 -- common/autotest_common.sh@850 -- # return 0 00:30:09.547 05:25:46 -- host/digest.sh@86 -- # false 00:30:09.547 05:25:46 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:09.547 05:25:46 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:10.114 05:25:47 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:10.114 05:25:47 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:10.373 nvme0n1 
00:30:10.373 05:25:47 -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:10.373 05:25:47 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:10.373 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:10.373 Zero copy mechanism will not be used. 00:30:10.373 Running I/O for 2 seconds... 00:30:12.902 00:30:12.902 Latency(us) 00:30:12.902 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:12.902 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:30:12.902 nvme0n1 : 2.00 3320.21 415.03 0.00 0.00 4814.53 4369.07 9126.49 00:30:12.902 =================================================================================================================== 00:30:12.902 Total : 3320.21 415.03 0.00 0.00 4814.53 4369.07 9126.49 00:30:12.902 0 00:30:12.902 05:25:49 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:12.902 05:25:49 -- host/digest.sh@93 -- # get_accel_stats 00:30:12.902 05:25:49 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:12.902 05:25:49 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:12.902 05:25:49 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:12.902 | select(.opcode=="crc32c") 00:30:12.902 | "\(.module_name) \(.executed)"' 00:30:12.902 05:25:49 -- host/digest.sh@94 -- # false 00:30:12.902 05:25:49 -- host/digest.sh@94 -- # exp_module=software 00:30:12.902 05:25:49 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:12.902 05:25:49 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:12.902 05:25:49 -- host/digest.sh@98 -- # killprocess 2011972 00:30:12.902 05:25:49 -- common/autotest_common.sh@936 -- # '[' -z 2011972 ']' 00:30:12.902 05:25:49 -- common/autotest_common.sh@940 -- # kill -0 2011972 00:30:12.902 05:25:49 -- common/autotest_common.sh@941 -- 
# uname 00:30:12.902 05:25:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:12.902 05:25:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2011972 00:30:12.902 05:25:49 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:30:12.902 05:25:49 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:30:12.902 05:25:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2011972' 00:30:12.902 killing process with pid 2011972 00:30:12.902 05:25:49 -- common/autotest_common.sh@955 -- # kill 2011972 00:30:12.902 Received shutdown signal, test time was about 2.000000 seconds 00:30:12.902 00:30:12.902 Latency(us) 00:30:12.902 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:12.902 =================================================================================================================== 00:30:12.902 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:12.902 05:25:49 -- common/autotest_common.sh@960 -- # wait 2011972 00:30:12.902 05:25:50 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:30:12.902 05:25:50 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:12.902 05:25:50 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:12.902 05:25:50 -- host/digest.sh@80 -- # rw=randwrite 00:30:12.902 05:25:50 -- host/digest.sh@80 -- # bs=4096 00:30:12.902 05:25:50 -- host/digest.sh@80 -- # qd=128 00:30:12.902 05:25:50 -- host/digest.sh@80 -- # scan_dsa=false 00:30:12.902 05:25:50 -- host/digest.sh@83 -- # bperfpid=2012376 00:30:12.902 05:25:50 -- host/digest.sh@84 -- # waitforlisten 2012376 /var/tmp/bperf.sock 00:30:12.902 05:25:50 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:12.902 05:25:50 -- common/autotest_common.sh@817 -- # '[' -z 2012376 ']' 00:30:12.902 05:25:50 -- common/autotest_common.sh@821 -- 
# local rpc_addr=/var/tmp/bperf.sock 00:30:12.902 05:25:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:12.902 05:25:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:12.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:12.902 05:25:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:12.902 05:25:50 -- common/autotest_common.sh@10 -- # set +x 00:30:12.902 [2024-04-24 05:25:50.134426] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:30:12.902 [2024-04-24 05:25:50.134506] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2012376 ] 00:30:12.902 EAL: No free 2048 kB hugepages reported on node 1 00:30:12.902 [2024-04-24 05:25:50.166591] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:30:13.161 [2024-04-24 05:25:50.195545] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:13.161 [2024-04-24 05:25:50.284519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:13.161 05:25:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:13.161 05:25:50 -- common/autotest_common.sh@850 -- # return 0 00:30:13.161 05:25:50 -- host/digest.sh@86 -- # false 00:30:13.161 05:25:50 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:13.161 05:25:50 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:13.419 05:25:50 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:13.419 05:25:50 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:13.985 nvme0n1 00:30:13.985 05:25:51 -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:13.985 05:25:51 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:13.985 Running I/O for 2 seconds... 
00:30:15.883 00:30:15.883 Latency(us) 00:30:15.883 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:15.883 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:15.883 nvme0n1 : 2.01 20085.25 78.46 0.00 0.00 6365.65 3228.25 11650.84 00:30:15.883 =================================================================================================================== 00:30:15.883 Total : 20085.25 78.46 0.00 0.00 6365.65 3228.25 11650.84 00:30:15.883 0 00:30:15.883 05:25:53 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:15.883 05:25:53 -- host/digest.sh@93 -- # get_accel_stats 00:30:15.883 05:25:53 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:15.883 05:25:53 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:15.883 05:25:53 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:15.883 | select(.opcode=="crc32c") 00:30:15.883 | "\(.module_name) \(.executed)"' 00:30:16.172 05:25:53 -- host/digest.sh@94 -- # false 00:30:16.172 05:25:53 -- host/digest.sh@94 -- # exp_module=software 00:30:16.172 05:25:53 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:16.172 05:25:53 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:16.172 05:25:53 -- host/digest.sh@98 -- # killprocess 2012376 00:30:16.172 05:25:53 -- common/autotest_common.sh@936 -- # '[' -z 2012376 ']' 00:30:16.172 05:25:53 -- common/autotest_common.sh@940 -- # kill -0 2012376 00:30:16.172 05:25:53 -- common/autotest_common.sh@941 -- # uname 00:30:16.172 05:25:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:16.172 05:25:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2012376 00:30:16.172 05:25:53 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:30:16.172 05:25:53 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:30:16.172 05:25:53 -- common/autotest_common.sh@954 -- 
# echo 'killing process with pid 2012376' 00:30:16.172 killing process with pid 2012376 00:30:16.172 05:25:53 -- common/autotest_common.sh@955 -- # kill 2012376 00:30:16.172 Received shutdown signal, test time was about 2.000000 seconds 00:30:16.172 00:30:16.172 Latency(us) 00:30:16.172 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:16.172 =================================================================================================================== 00:30:16.172 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:16.172 05:25:53 -- common/autotest_common.sh@960 -- # wait 2012376 00:30:16.430 05:25:53 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:30:16.430 05:25:53 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:16.430 05:25:53 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:16.430 05:25:53 -- host/digest.sh@80 -- # rw=randwrite 00:30:16.430 05:25:53 -- host/digest.sh@80 -- # bs=131072 00:30:16.430 05:25:53 -- host/digest.sh@80 -- # qd=16 00:30:16.430 05:25:53 -- host/digest.sh@80 -- # scan_dsa=false 00:30:16.430 05:25:53 -- host/digest.sh@83 -- # bperfpid=2012803 00:30:16.430 05:25:53 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:16.430 05:25:53 -- host/digest.sh@84 -- # waitforlisten 2012803 /var/tmp/bperf.sock 00:30:16.430 05:25:53 -- common/autotest_common.sh@817 -- # '[' -z 2012803 ']' 00:30:16.430 05:25:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:16.430 05:25:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:16.430 05:25:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:16.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:30:16.430 05:25:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:16.430 05:25:53 -- common/autotest_common.sh@10 -- # set +x 00:30:16.430 [2024-04-24 05:25:53.673308] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:30:16.430 [2024-04-24 05:25:53.673389] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2012803 ] 00:30:16.430 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:16.430 Zero copy mechanism will not be used. 00:30:16.688 EAL: No free 2048 kB hugepages reported on node 1 00:30:16.688 [2024-04-24 05:25:53.708652] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:16.688 [2024-04-24 05:25:53.741191] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:16.688 [2024-04-24 05:25:53.835953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:16.688 05:25:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:16.688 05:25:53 -- common/autotest_common.sh@850 -- # return 0 00:30:16.688 05:25:53 -- host/digest.sh@86 -- # false 00:30:16.688 05:25:53 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:16.688 05:25:53 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:17.253 05:25:54 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:17.253 05:25:54 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:17.511 nvme0n1 
00:30:17.511 05:25:54 -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:17.511 05:25:54 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:17.768 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:17.768 Zero copy mechanism will not be used. 00:30:17.768 Running I/O for 2 seconds... 00:30:19.661 00:30:19.661 Latency(us) 00:30:19.661 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:19.661 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:30:19.661 nvme0n1 : 2.01 2777.45 347.18 0.00 0.00 5747.66 4344.79 9563.40 00:30:19.661 =================================================================================================================== 00:30:19.661 Total : 2777.45 347.18 0.00 0.00 5747.66 4344.79 9563.40 00:30:19.661 0 00:30:19.661 05:25:56 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:19.661 05:25:56 -- host/digest.sh@93 -- # get_accel_stats 00:30:19.661 05:25:56 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:19.661 05:25:56 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:19.661 05:25:56 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:19.661 | select(.opcode=="crc32c") 00:30:19.661 | "\(.module_name) \(.executed)"' 00:30:19.919 05:25:57 -- host/digest.sh@94 -- # false 00:30:19.919 05:25:57 -- host/digest.sh@94 -- # exp_module=software 00:30:19.919 05:25:57 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:19.919 05:25:57 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:19.919 05:25:57 -- host/digest.sh@98 -- # killprocess 2012803 00:30:19.919 05:25:57 -- common/autotest_common.sh@936 -- # '[' -z 2012803 ']' 00:30:19.919 05:25:57 -- common/autotest_common.sh@940 -- # kill -0 2012803 00:30:19.919 05:25:57 -- common/autotest_common.sh@941 -- 
# uname 00:30:19.919 05:25:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:19.919 05:25:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2012803 00:30:19.919 05:25:57 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:30:19.919 05:25:57 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:30:19.919 05:25:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2012803' 00:30:19.919 killing process with pid 2012803 00:30:19.919 05:25:57 -- common/autotest_common.sh@955 -- # kill 2012803 00:30:19.919 Received shutdown signal, test time was about 2.000000 seconds 00:30:19.919 00:30:19.919 Latency(us) 00:30:19.919 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:19.919 =================================================================================================================== 00:30:19.919 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:19.919 05:25:57 -- common/autotest_common.sh@960 -- # wait 2012803 00:30:20.176 05:25:57 -- host/digest.sh@132 -- # killprocess 2011538 00:30:20.176 05:25:57 -- common/autotest_common.sh@936 -- # '[' -z 2011538 ']' 00:30:20.176 05:25:57 -- common/autotest_common.sh@940 -- # kill -0 2011538 00:30:20.176 05:25:57 -- common/autotest_common.sh@941 -- # uname 00:30:20.176 05:25:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:20.176 05:25:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2011538 00:30:20.176 05:25:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:20.176 05:25:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:20.176 05:25:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2011538' 00:30:20.176 killing process with pid 2011538 00:30:20.176 05:25:57 -- common/autotest_common.sh@955 -- # kill 2011538 00:30:20.176 05:25:57 -- common/autotest_common.sh@960 -- # wait 2011538 00:30:20.434 00:30:20.434 real 0m14.978s 
00:30:20.434 user 0m30.118s 00:30:20.434 sys 0m3.802s 00:30:20.434 05:25:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:20.434 05:25:57 -- common/autotest_common.sh@10 -- # set +x 00:30:20.434 ************************************ 00:30:20.434 END TEST nvmf_digest_clean 00:30:20.434 ************************************ 00:30:20.434 05:25:57 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:30:20.434 05:25:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:20.434 05:25:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:20.435 05:25:57 -- common/autotest_common.sh@10 -- # set +x 00:30:20.435 ************************************ 00:30:20.435 START TEST nvmf_digest_error 00:30:20.435 ************************************ 00:30:20.435 05:25:57 -- common/autotest_common.sh@1111 -- # run_digest_error 00:30:20.435 05:25:57 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:30:20.435 05:25:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:30:20.435 05:25:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:20.435 05:25:57 -- common/autotest_common.sh@10 -- # set +x 00:30:20.435 05:25:57 -- nvmf/common.sh@470 -- # nvmfpid=2013346 00:30:20.435 05:25:57 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:20.435 05:25:57 -- nvmf/common.sh@471 -- # waitforlisten 2013346 00:30:20.435 05:25:57 -- common/autotest_common.sh@817 -- # '[' -z 2013346 ']' 00:30:20.693 05:25:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:20.693 05:25:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:20.693 05:25:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:20.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:20.693 05:25:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:20.693 05:25:57 -- common/autotest_common.sh@10 -- # set +x 00:30:20.693 [2024-04-24 05:25:57.748872] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:30:20.693 [2024-04-24 05:25:57.748970] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:20.693 EAL: No free 2048 kB hugepages reported on node 1 00:30:20.693 [2024-04-24 05:25:57.788431] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:20.693 [2024-04-24 05:25:57.814748] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:20.693 [2024-04-24 05:25:57.898463] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:20.693 [2024-04-24 05:25:57.898529] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:20.693 [2024-04-24 05:25:57.898559] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:20.693 [2024-04-24 05:25:57.898571] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:20.693 [2024-04-24 05:25:57.898590] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:20.693 [2024-04-24 05:25:57.898620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:20.693 05:25:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:20.693 05:25:57 -- common/autotest_common.sh@850 -- # return 0 00:30:20.693 05:25:57 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:30:20.693 05:25:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:20.693 05:25:57 -- common/autotest_common.sh@10 -- # set +x 00:30:20.951 05:25:57 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:20.951 05:25:57 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:30:20.951 05:25:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:20.951 05:25:57 -- common/autotest_common.sh@10 -- # set +x 00:30:20.951 [2024-04-24 05:25:57.987301] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:30:20.951 05:25:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:20.951 05:25:57 -- host/digest.sh@105 -- # common_target_config 00:30:20.951 05:25:57 -- host/digest.sh@43 -- # rpc_cmd 00:30:20.951 05:25:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:20.951 05:25:57 -- common/autotest_common.sh@10 -- # set +x 00:30:20.951 null0 00:30:20.951 [2024-04-24 05:25:58.105865] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:20.951 [2024-04-24 05:25:58.130102] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:20.951 05:25:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:20.951 05:25:58 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:30:20.951 05:25:58 -- host/digest.sh@54 -- # local rw bs qd 00:30:20.951 05:25:58 -- host/digest.sh@56 -- # rw=randread 00:30:20.951 05:25:58 -- host/digest.sh@56 -- # bs=4096 00:30:20.951 05:25:58 -- host/digest.sh@56 -- # qd=128 00:30:20.951 05:25:58 -- 
host/digest.sh@58 -- # bperfpid=2013371 00:30:20.951 05:25:58 -- host/digest.sh@60 -- # waitforlisten 2013371 /var/tmp/bperf.sock 00:30:20.951 05:25:58 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:30:20.951 05:25:58 -- common/autotest_common.sh@817 -- # '[' -z 2013371 ']' 00:30:20.951 05:25:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:20.951 05:25:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:20.951 05:25:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:20.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:20.951 05:25:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:20.951 05:25:58 -- common/autotest_common.sh@10 -- # set +x 00:30:20.951 [2024-04-24 05:25:58.176849] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:30:20.951 [2024-04-24 05:25:58.176929] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2013371 ] 00:30:20.951 EAL: No free 2048 kB hugepages reported on node 1 00:30:20.951 [2024-04-24 05:25:58.209559] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:30:21.209 [2024-04-24 05:25:58.239608] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:21.209 [2024-04-24 05:25:58.328529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:21.209 05:25:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:21.209 05:25:58 -- common/autotest_common.sh@850 -- # return 0 00:30:21.209 05:25:58 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:21.209 05:25:58 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:21.466 05:25:58 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:21.466 05:25:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:21.466 05:25:58 -- common/autotest_common.sh@10 -- # set +x 00:30:21.466 05:25:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:21.466 05:25:58 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:21.466 05:25:58 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:22.031 nvme0n1 00:30:22.031 05:25:59 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:30:22.031 05:25:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:22.031 05:25:59 -- common/autotest_common.sh@10 -- # set +x 00:30:22.031 05:25:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:22.031 05:25:59 -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:22.031 05:25:59 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:22.289 
Running I/O for 2 seconds...
00:30:22.289 [2024-04-24 05:25:59.377582] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.290 [2024-04-24 05:25:59.377657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.290 [2024-04-24 05:25:59.377679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.290 [2024-04-24 05:25:59.391363] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.290 [2024-04-24 05:25:59.391393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:18029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.290 [2024-04-24 05:25:59.391425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.290 [2024-04-24 05:25:59.404835] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.290 [2024-04-24 05:25:59.404867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:8097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.290 [2024-04-24 05:25:59.404884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.290 [2024-04-24 05:25:59.416897] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.290 [2024-04-24 05:25:59.416941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.290 [2024-04-24 05:25:59.416958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.290 [2024-04-24 05:25:59.431280] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.290 [2024-04-24 05:25:59.431311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.290 [2024-04-24 05:25:59.431343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.290 [2024-04-24 05:25:59.444539] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.290 [2024-04-24 05:25:59.444568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.290 [2024-04-24 05:25:59.444602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.290 [2024-04-24 05:25:59.456990] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.290 [2024-04-24 05:25:59.457017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:8740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.290 [2024-04-24 05:25:59.457048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.290 [2024-04-24 05:25:59.469474] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.290 [2024-04-24 05:25:59.469503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:16284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.290 [2024-04-24 05:25:59.469537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.290 [2024-04-24 05:25:59.483729] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.290 [2024-04-24 05:25:59.483759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.290 [2024-04-24 05:25:59.483783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.290 [2024-04-24 05:25:59.496238] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.290 [2024-04-24 05:25:59.496267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.290 [2024-04-24 05:25:59.496299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.290 [2024-04-24 05:25:59.508569] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.290 [2024-04-24 05:25:59.508599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.290 [2024-04-24 05:25:59.508637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.290 [2024-04-24 05:25:59.522598] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.290 [2024-04-24 05:25:59.522651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.290 [2024-04-24 05:25:59.522671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.290 [2024-04-24 05:25:59.534256] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.290 [2024-04-24 05:25:59.534283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.290 [2024-04-24 05:25:59.534314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.290 [2024-04-24 05:25:59.548527] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.290 [2024-04-24 05:25:59.548556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:6707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.290 [2024-04-24 05:25:59.548588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.548 [2024-04-24 05:25:59.561130] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.548 [2024-04-24 05:25:59.561173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.548 [2024-04-24 05:25:59.561189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.548 [2024-04-24 05:25:59.573990] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.548 [2024-04-24 05:25:59.574017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.548 [2024-04-24 05:25:59.574048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.548 [2024-04-24 05:25:59.587276] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.548 [2024-04-24 05:25:59.587305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:24502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.548 [2024-04-24 05:25:59.587338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.548 [2024-04-24 05:25:59.601058] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.548 [2024-04-24 05:25:59.601093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.548 [2024-04-24 05:25:59.601127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.548 [2024-04-24 05:25:59.613841] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.548 [2024-04-24 05:25:59.613871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.548 [2024-04-24 05:25:59.613888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.548 [2024-04-24 05:25:59.625349] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.548 [2024-04-24 05:25:59.625376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:15983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.548 [2024-04-24 05:25:59.625407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.548 [2024-04-24 05:25:59.638805] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.548 [2024-04-24 05:25:59.638833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.548 [2024-04-24 05:25:59.638848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.548 [2024-04-24 05:25:59.651443] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.548 [2024-04-24 05:25:59.651470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.548 [2024-04-24 05:25:59.651501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.548 [2024-04-24 05:25:59.663953] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.548 [2024-04-24 05:25:59.663980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.548 [2024-04-24 05:25:59.664011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.548 [2024-04-24 05:25:59.677799] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.548 [2024-04-24 05:25:59.677828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.548 [2024-04-24 05:25:59.677844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.548 [2024-04-24 05:25:59.690772] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.548 [2024-04-24 05:25:59.690802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:2802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.548 [2024-04-24 05:25:59.690834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.548 [2024-04-24 05:25:59.702884] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.548 [2024-04-24 05:25:59.702913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.548 [2024-04-24 05:25:59.702944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.548 [2024-04-24 05:25:59.716647] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.548 [2024-04-24 05:25:59.716703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.548 [2024-04-24 05:25:59.716719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.548 [2024-04-24 05:25:59.730230] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.548 [2024-04-24 05:25:59.730263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:19299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.548 [2024-04-24 05:25:59.730282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.548 [2024-04-24 05:25:59.745899] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.548 [2024-04-24 05:25:59.745928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:17120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.548 [2024-04-24 05:25:59.745945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.548 [2024-04-24 05:25:59.758149] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.548 [2024-04-24 05:25:59.758182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.548 [2024-04-24 05:25:59.758201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.548 [2024-04-24 05:25:59.773394] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.548 [2024-04-24 05:25:59.773428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:24683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.548 [2024-04-24 05:25:59.773446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.548 [2024-04-24 05:25:59.786613] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.549 [2024-04-24 05:25:59.786673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.549 [2024-04-24 05:25:59.786690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.549 [2024-04-24 05:25:59.800294] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.549 [2024-04-24 05:25:59.800327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:17936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.549 [2024-04-24 05:25:59.800345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.549 [2024-04-24 05:25:59.815176] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.549 [2024-04-24 05:25:59.815209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.549 [2024-04-24 05:25:59.815227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.807 [2024-04-24 05:25:59.829612] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.807 [2024-04-24 05:25:59.829672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.807 [2024-04-24 05:25:59.829696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.807 [2024-04-24 05:25:59.841254] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.807 [2024-04-24 05:25:59.841288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.807 [2024-04-24 05:25:59.841306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.807 [2024-04-24 05:25:59.857073] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.807 [2024-04-24 05:25:59.857107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.807 [2024-04-24 05:25:59.857125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.807 [2024-04-24 05:25:59.869982] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.807 [2024-04-24 05:25:59.870010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:6118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.807 [2024-04-24 05:25:59.870042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.807 [2024-04-24 05:25:59.884934] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.807 [2024-04-24 05:25:59.884967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.807 [2024-04-24 05:25:59.884986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.807 [2024-04-24 05:25:59.901754] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.807 [2024-04-24 05:25:59.901784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.807 [2024-04-24 05:25:59.901800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.807 [2024-04-24 05:25:59.913716] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.807 [2024-04-24 05:25:59.913744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.807 [2024-04-24 05:25:59.913759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.807 [2024-04-24 05:25:59.929098] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.807 [2024-04-24 05:25:59.929131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:9267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.807 [2024-04-24 05:25:59.929149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.807 [2024-04-24 05:25:59.941032] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.807 [2024-04-24 05:25:59.941065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.807 [2024-04-24 05:25:59.941083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.807 [2024-04-24 05:25:59.957322] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.807 [2024-04-24 05:25:59.957355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.807 [2024-04-24 05:25:59.957373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.807 [2024-04-24 05:25:59.972654] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.807 [2024-04-24 05:25:59.972687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.807 [2024-04-24 05:25:59.972717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.807 [2024-04-24 05:25:59.986117] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.807 [2024-04-24 05:25:59.986151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.807 [2024-04-24 05:25:59.986170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.807 [2024-04-24 05:25:59.998345] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.807 [2024-04-24 05:25:59.998378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.807 [2024-04-24 05:25:59.998396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.807 [2024-04-24 05:26:00.013162] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.807 [2024-04-24 05:26:00.013210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.807 [2024-04-24 05:26:00.013227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.807 [2024-04-24 05:26:00.027371] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.807 [2024-04-24 05:26:00.027408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.807 [2024-04-24 05:26:00.027427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.807 [2024-04-24 05:26:00.040251] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.807 [2024-04-24 05:26:00.040285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.807 [2024-04-24 05:26:00.040304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.807 [2024-04-24 05:26:00.054618] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.808 [2024-04-24 05:26:00.054674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:8083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.808 [2024-04-24 05:26:00.054690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:22.808 [2024-04-24 05:26:00.068295] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:22.808 [2024-04-24 05:26:00.068329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.808 [2024-04-24 05:26:00.068355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:23.066 [2024-04-24 05:26:00.082303] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:23.066 [2024-04-24 05:26:00.082340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:23.066 [2024-04-24 05:26:00.082361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:23.066 [2024-04-24 05:26:00.096720] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:23.066 [2024-04-24 05:26:00.096751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:23.066 [2024-04-24 05:26:00.096768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:23.066 [2024-04-24 05:26:00.110418] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:23.066 [2024-04-24 05:26:00.110453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:17268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:23.066 [2024-04-24 05:26:00.110472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:23.066 [2024-04-24 05:26:00.124268] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:23.066 [2024-04-24 05:26:00.124302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:23.066 [2024-04-24 05:26:00.124321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:23.067 [2024-04-24 05:26:00.139084] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:23.067 [2024-04-24 05:26:00.139117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:23.067 [2024-04-24 05:26:00.139136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:23.067 [2024-04-24 05:26:00.154472] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:23.067 [2024-04-24 05:26:00.154506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:23.067 [2024-04-24 05:26:00.154525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:23.067 [2024-04-24 05:26:00.168132] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:23.067 [2024-04-24 05:26:00.168166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:2103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:23.067 [2024-04-24 05:26:00.168185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:23.067 [2024-04-24 05:26:00.182206] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:23.067 [2024-04-24 05:26:00.182240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:2524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:23.067 [2024-04-24 05:26:00.182258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:23.067 [2024-04-24 05:26:00.194850] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:23.067 [2024-04-24 05:26:00.194882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:17605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:23.067 [2024-04-24 05:26:00.194897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:23.067 [2024-04-24 05:26:00.211421] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:23.067 [2024-04-24 05:26:00.211455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:16756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:23.067 [2024-04-24 05:26:00.211474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:23.067 [2024-04-24 05:26:00.225640] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:23.067 [2024-04-24 05:26:00.225688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:23.067 [2024-04-24 05:26:00.225705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:23.067 [2024-04-24 05:26:00.239399] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:23.067 [2024-04-24 05:26:00.239432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:23.067 [2024-04-24 05:26:00.239450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:23.067 [2024-04-24 05:26:00.253535] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:23.067 [2024-04-24 05:26:00.253568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:23.067 [2024-04-24 05:26:00.253587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:23.067 [2024-04-24 05:26:00.269281] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:23.067 [2024-04-24 05:26:00.269314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:23.067 [2024-04-24 05:26:00.269332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:23.067 [2024-04-24 05:26:00.281412] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:23.067 [2024-04-24 05:26:00.281445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:23.067 [2024-04-24 05:26:00.281465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:23.067 [2024-04-24 05:26:00.296677] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:23.067 [2024-04-24 05:26:00.296706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:11745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:23.067 [2024-04-24 05:26:00.296722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:23.067 [2024-04-24 05:26:00.311734] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:23.067 [2024-04-24 05:26:00.311764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:23.067 [2024-04-24 05:26:00.311782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:23.067 [2024-04-24 05:26:00.323926] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:23.067 [2024-04-24 05:26:00.323953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:4209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:23.067 [2024-04-24 05:26:00.323983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:23.326 [2024-04-24 05:26:00.338836] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:23.326 [2024-04-24 05:26:00.338868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:23.326 [2024-04-24 05:26:00.338902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:23.326 [2024-04-24 05:26:00.353686] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:23.326 [2024-04-24 05:26:00.353715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:23.326 [2024-04-24 05:26:00.353731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:23.326 [2024-04-24 05:26:00.368626] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:23.326 [2024-04-24 05:26:00.368664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:23.326 [2024-04-24 05:26:00.368681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:23.326 [2024-04-24 05:26:00.380106] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:23.326 [2024-04-24 05:26:00.380141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:23.326 [2024-04-24 05:26:00.380160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:23.326 [2024-04-24 05:26:00.396871] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:23.326 [2024-04-24 05:26:00.396900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:23.326 [2024-04-24 05:26:00.396934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:23.326 [2024-04-24 05:26:00.411029] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:23.326 [2024-04-24 05:26:00.411062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:23.326 [2024-04-24 05:26:00.411081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:23.326 [2024-04-24 05:26:00.425608] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:23.326 [2024-04-24 05:26:00.425651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:19320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:23.326 [2024-04-24 05:26:00.425693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:23.326 [2024-04-24 05:26:00.437315] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:23.326 [2024-04-24 05:26:00.437349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:23.326 [2024-04-24 05:26:00.437374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:23.326 [2024-04-24 05:26:00.452482] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:23.326 [2024-04-24 05:26:00.452518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:23.326 [2024-04-24 05:26:00.452536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:23.326 [2024-04-24 05:26:00.466674] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:23.326 [2024-04-24 05:26:00.466703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:23.326 [2024-04-24 05:26:00.466719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:23.326 [2024-04-24 05:26:00.481659] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980)
00:30:23.326 [2024-04-24 05:26:00.481704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:23.326 [2024-04-24 05:26:00.481721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:23.326 [2024-04-24 05:26:00.492712]
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:23.326 [2024-04-24 05:26:00.492745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.326 [2024-04-24 05:26:00.492763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.326 [2024-04-24 05:26:00.508956] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:23.326 [2024-04-24 05:26:00.509001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.326 [2024-04-24 05:26:00.509020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.326 [2024-04-24 05:26:00.522314] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:23.326 [2024-04-24 05:26:00.522349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.326 [2024-04-24 05:26:00.522367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.326 [2024-04-24 05:26:00.537062] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:23.327 [2024-04-24 05:26:00.537096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:17306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.327 [2024-04-24 05:26:00.537114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:30:23.327 [2024-04-24 05:26:00.552022] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:23.327 [2024-04-24 05:26:00.552069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.327 [2024-04-24 05:26:00.552088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.327 [2024-04-24 05:26:00.565682] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:23.327 [2024-04-24 05:26:00.565717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.327 [2024-04-24 05:26:00.565735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.327 [2024-04-24 05:26:00.578399] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:23.327 [2024-04-24 05:26:00.578434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.327 [2024-04-24 05:26:00.578452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.327 [2024-04-24 05:26:00.594734] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:23.327 [2024-04-24 05:26:00.594769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.327 [2024-04-24 05:26:00.594787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.585 [2024-04-24 05:26:00.606874] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:23.585 [2024-04-24 05:26:00.606904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.585 [2024-04-24 05:26:00.606935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.585 [2024-04-24 05:26:00.623096] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:23.585 [2024-04-24 05:26:00.623131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.585 [2024-04-24 05:26:00.623149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.585 [2024-04-24 05:26:00.637845] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:23.585 [2024-04-24 05:26:00.637878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:23261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.585 [2024-04-24 05:26:00.637896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.585 [2024-04-24 05:26:00.650075] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:23.585 [2024-04-24 05:26:00.650104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.585 [2024-04-24 05:26:00.650120] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.585 [2024-04-24 05:26:00.665709] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:23.585 [2024-04-24 05:26:00.665739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:7151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.585 [2024-04-24 05:26:00.665757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.585 [2024-04-24 05:26:00.680757] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:23.585 [2024-04-24 05:26:00.680786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.585 [2024-04-24 05:26:00.680828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.585 [2024-04-24 05:26:00.693005] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:23.585 [2024-04-24 05:26:00.693039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.585 [2024-04-24 05:26:00.693058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.585 [2024-04-24 05:26:00.708080] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:23.585 [2024-04-24 05:26:00.708113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4965 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:23.585 [2024-04-24 05:26:00.708131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.585 [2024-04-24 05:26:00.722299] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:23.585 [2024-04-24 05:26:00.722334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.585 [2024-04-24 05:26:00.722352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.585 [2024-04-24 05:26:00.737188] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:23.585 [2024-04-24 05:26:00.737226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.585 [2024-04-24 05:26:00.737245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.585 [2024-04-24 05:26:00.749860] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:23.585 [2024-04-24 05:26:00.749905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.585 [2024-04-24 05:26:00.749921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.585 [2024-04-24 05:26:00.765903] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:23.586 [2024-04-24 05:26:00.765946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:79 nsid:1 lba:14737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.586 [2024-04-24 05:26:00.765961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.586 [2024-04-24 05:26:00.777410] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:23.586 [2024-04-24 05:26:00.777444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.586 [2024-04-24 05:26:00.777463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.586 [2024-04-24 05:26:00.793261] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:23.586 [2024-04-24 05:26:00.793295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.586 [2024-04-24 05:26:00.793314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.586 [2024-04-24 05:26:00.808706] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:23.586 [2024-04-24 05:26:00.808741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.586 [2024-04-24 05:26:00.808758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.586 [2024-04-24 05:26:00.821892] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:23.586 [2024-04-24 05:26:00.821937] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:23281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.586 [2024-04-24 05:26:00.821956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.586 [2024-04-24 05:26:00.837215] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:23.586 [2024-04-24 05:26:00.837249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:25159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.586 [2024-04-24 05:26:00.837267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.586 [2024-04-24 05:26:00.851849] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:23.586 [2024-04-24 05:26:00.851891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.586 [2024-04-24 05:26:00.851918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.844 [2024-04-24 05:26:00.865710] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:23.844 [2024-04-24 05:26:00.865742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.844 [2024-04-24 05:26:00.865759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.844 [2024-04-24 05:26:00.880421] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1f1c980) 00:30:23.844 [2024-04-24 05:26:00.880455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:19771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.844 [2024-04-24 05:26:00.880474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.844 [2024-04-24 05:26:00.894809] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:23.844 [2024-04-24 05:26:00.894839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.844 [2024-04-24 05:26:00.894856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.844 [2024-04-24 05:26:00.907124] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:23.844 [2024-04-24 05:26:00.907158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.844 [2024-04-24 05:26:00.907176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.844 [2024-04-24 05:26:00.923248] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:23.844 [2024-04-24 05:26:00.923282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.844 [2024-04-24 05:26:00.923301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.844 [2024-04-24 05:26:00.939219] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:23.844 [2024-04-24 05:26:00.939253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.844 [2024-04-24 05:26:00.939272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.844 [2024-04-24 05:26:00.951299] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:23.844 [2024-04-24 05:26:00.951333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.844 [2024-04-24 05:26:00.951351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.845 [2024-04-24 05:26:00.968469] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:23.845 [2024-04-24 05:26:00.968503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:7773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.845 [2024-04-24 05:26:00.968521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.845 [2024-04-24 05:26:00.982706] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:23.845 [2024-04-24 05:26:00.982737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:9908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.845 [2024-04-24 05:26:00.982754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:30:23.845 [2024-04-24 05:26:00.994570] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:23.845 [2024-04-24 05:26:00.994603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:23446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.845 [2024-04-24 05:26:00.994621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.845 [2024-04-24 05:26:01.008778] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:23.845 [2024-04-24 05:26:01.008807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.845 [2024-04-24 05:26:01.008839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.845 [2024-04-24 05:26:01.024199] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:23.845 [2024-04-24 05:26:01.024233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.845 [2024-04-24 05:26:01.024252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.845 [2024-04-24 05:26:01.037252] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:23.845 [2024-04-24 05:26:01.037285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.845 [2024-04-24 05:26:01.037303] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.845 [2024-04-24 05:26:01.051593] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:23.845 [2024-04-24 05:26:01.051626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:6465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.845 [2024-04-24 05:26:01.051675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.845 [2024-04-24 05:26:01.066909] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:23.845 [2024-04-24 05:26:01.066956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:24082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.845 [2024-04-24 05:26:01.066975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.845 [2024-04-24 05:26:01.078819] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:23.845 [2024-04-24 05:26:01.078862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.845 [2024-04-24 05:26:01.078878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.845 [2024-04-24 05:26:01.095697] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:23.845 [2024-04-24 05:26:01.095726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.845 [2024-04-24 
05:26:01.095756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.845 [2024-04-24 05:26:01.108086] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:23.845 [2024-04-24 05:26:01.108118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.845 [2024-04-24 05:26:01.108137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.103 [2024-04-24 05:26:01.123842] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:24.103 [2024-04-24 05:26:01.123872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:14753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.103 [2024-04-24 05:26:01.123904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.103 [2024-04-24 05:26:01.137887] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:24.103 [2024-04-24 05:26:01.137921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.103 [2024-04-24 05:26:01.137940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.103 [2024-04-24 05:26:01.152470] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:24.103 [2024-04-24 05:26:01.152504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12450 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.103 [2024-04-24 05:26:01.152523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.103 [2024-04-24 05:26:01.164670] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:24.103 [2024-04-24 05:26:01.164717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.103 [2024-04-24 05:26:01.164733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.103 [2024-04-24 05:26:01.180861] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:24.103 [2024-04-24 05:26:01.180895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:7582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.103 [2024-04-24 05:26:01.180927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.103 [2024-04-24 05:26:01.195849] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:24.103 [2024-04-24 05:26:01.195879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.103 [2024-04-24 05:26:01.195896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.103 [2024-04-24 05:26:01.208266] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:24.103 [2024-04-24 05:26:01.208301] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.103 [2024-04-24 05:26:01.208319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.103 [2024-04-24 05:26:01.222405] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:24.103 [2024-04-24 05:26:01.222438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.103 [2024-04-24 05:26:01.222458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.104 [2024-04-24 05:26:01.237163] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:24.104 [2024-04-24 05:26:01.237198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.104 [2024-04-24 05:26:01.237216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.104 [2024-04-24 05:26:01.251994] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:24.104 [2024-04-24 05:26:01.252027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:18428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.104 [2024-04-24 05:26:01.252046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.104 [2024-04-24 05:26:01.266271] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1f1c980) 00:30:24.104 [2024-04-24 05:26:01.266304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.104 [2024-04-24 05:26:01.266323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.104 [2024-04-24 05:26:01.278315] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:24.104 [2024-04-24 05:26:01.278350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.104 [2024-04-24 05:26:01.278368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.104 [2024-04-24 05:26:01.293686] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:24.104 [2024-04-24 05:26:01.293715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:21587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.104 [2024-04-24 05:26:01.293746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.104 [2024-04-24 05:26:01.308282] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:24.104 [2024-04-24 05:26:01.308314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.104 [2024-04-24 05:26:01.308333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.104 [2024-04-24 05:26:01.320556] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:24.104 [2024-04-24 05:26:01.320590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:25259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.104 [2024-04-24 05:26:01.320608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.104 [2024-04-24 05:26:01.336960] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:24.104 [2024-04-24 05:26:01.336994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:25087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.104 [2024-04-24 05:26:01.337013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.104 [2024-04-24 05:26:01.350854] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:24.104 [2024-04-24 05:26:01.350883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:14238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.104 [2024-04-24 05:26:01.350915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.104 [2024-04-24 05:26:01.361012] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f1c980) 00:30:24.104 [2024-04-24 05:26:01.361045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:6935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.104 [2024-04-24 05:26:01.361064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0
00:30:24.362
00:30:24.362 Latency(us)
00:30:24.362 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:24.362 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:30:24.362 nvme0n1 : 2.04 17837.39 69.68 0.00 0.00 7025.96 3689.43 46409.20
00:30:24.362 ===================================================================================================================
00:30:24.362 Total : 17837.39 69.68 0.00 0.00 7025.96 3689.43 46409.20
00:30:24.362 0
00:30:24.362 05:26:01 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:30:24.362 05:26:01 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:30:24.362 05:26:01 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:30:24.362 05:26:01 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:24.362 | .driver_specific
00:30:24.362 | .nvme_error
00:30:24.362 | .status_code
00:30:24.362 | .command_transient_transport_error'
00:30:24.620 05:26:01 -- host/digest.sh@71 -- # (( 143 > 0 ))
00:30:24.620 05:26:01 -- host/digest.sh@73 -- # killprocess 2013371
00:30:24.620 05:26:01 -- common/autotest_common.sh@936 -- # '[' -z 2013371 ']'
00:30:24.620 05:26:01 -- common/autotest_common.sh@940 -- # kill -0 2013371
00:30:24.620 05:26:01 -- common/autotest_common.sh@941 -- # uname
00:30:24.620 05:26:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:30:24.620 05:26:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2013371
00:30:24.620 05:26:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:30:24.620 05:26:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:30:24.620 05:26:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2013371'
killing process with pid 2013371
00:30:24.620 05:26:01 -- common/autotest_common.sh@955 -- # kill 2013371
00:30:24.620 Received shutdown signal, test time was about 2.000000 seconds
00:30:24.620
00:30:24.620 Latency(us)
00:30:24.620 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:24.620 ===================================================================================================================
00:30:24.620 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:24.620 05:26:01 -- common/autotest_common.sh@960 -- # wait 2013371
00:30:24.879 05:26:01 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:30:24.879 05:26:01 -- host/digest.sh@54 -- # local rw bs qd
00:30:24.879 05:26:01 -- host/digest.sh@56 -- # rw=randread
00:30:24.879 05:26:01 -- host/digest.sh@56 -- # bs=131072
00:30:24.879 05:26:01 -- host/digest.sh@56 -- # qd=16
00:30:24.879 05:26:01 -- host/digest.sh@58 -- # bperfpid=2013896
00:30:24.879 05:26:01 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:30:24.879 05:26:01 -- host/digest.sh@60 -- # waitforlisten 2013896 /var/tmp/bperf.sock
00:30:24.879 05:26:01 -- common/autotest_common.sh@817 -- # '[' -z 2013896 ']'
00:30:24.879 05:26:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:30:24.879 05:26:01 -- common/autotest_common.sh@822 -- # local max_retries=100
00:30:24.879 05:26:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:30:24.879 05:26:01 -- common/autotest_common.sh@826 -- # xtrace_disable
00:30:24.879 05:26:01 -- common/autotest_common.sh@10 -- # set +x
00:30:24.879 [2024-04-24 05:26:01.950836] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization...
00:30:24.879 [2024-04-24 05:26:01.950914] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2013896 ]
00:30:24.879 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:24.879 Zero copy mechanism will not be used.
00:30:24.879 EAL: No free 2048 kB hugepages reported on node 1
00:30:24.879 [2024-04-24 05:26:01.981765] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:30:24.879 [2024-04-24 05:26:02.013116] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:24.879 [2024-04-24 05:26:02.099939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:30:25.138 05:26:02 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:30:25.138 05:26:02 -- common/autotest_common.sh@850 -- # return 0
00:30:25.138 05:26:02 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:25.138 05:26:02 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:25.395 05:26:02 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:30:25.395 05:26:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:30:25.395 05:26:02 -- common/autotest_common.sh@10 -- # set +x
00:30:25.395 05:26:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:30:25.395 05:26:02 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:25.395 05:26:02 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s
4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:25.651 nvme0n1 00:30:25.651 05:26:02 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:30:25.651 05:26:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:25.651 05:26:02 -- common/autotest_common.sh@10 -- # set +x 00:30:25.651 05:26:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:25.651 05:26:02 -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:25.651 05:26:02 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:25.909 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:25.909 Zero copy mechanism will not be used. 00:30:25.909 Running I/O for 2 seconds... 00:30:25.909 [2024-04-24 05:26:02.955241] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:25.909 [2024-04-24 05:26:02.955297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.909 [2024-04-24 05:26:02.955320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:25.909 [2024-04-24 05:26:02.965365] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:25.909 [2024-04-24 05:26:02.965402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.909 [2024-04-24 05:26:02.965422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:25.909 [2024-04-24 05:26:02.975270] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:25.909 
[2024-04-24 05:26:02.975304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.909 [2024-04-24 05:26:02.975324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:25.909 [2024-04-24 05:26:02.985073] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:25.909 [2024-04-24 05:26:02.985107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.909 [2024-04-24 05:26:02.985127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.909 [2024-04-24 05:26:02.995038] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:25.909 [2024-04-24 05:26:02.995072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.909 [2024-04-24 05:26:02.995092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:25.909 [2024-04-24 05:26:03.004949] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:25.909 [2024-04-24 05:26:03.004978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.909 [2024-04-24 05:26:03.005010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:25.909 [2024-04-24 05:26:03.014912] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xb165f0) 00:30:25.909 [2024-04-24 05:26:03.014956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.909 [2024-04-24 05:26:03.014974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:25.909 [2024-04-24 05:26:03.024839] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:25.909 [2024-04-24 05:26:03.024883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.909 [2024-04-24 05:26:03.024909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.909 [2024-04-24 05:26:03.034952] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:25.909 [2024-04-24 05:26:03.034981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.909 [2024-04-24 05:26:03.035014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:25.909 [2024-04-24 05:26:03.044949] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:25.909 [2024-04-24 05:26:03.044979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.909 [2024-04-24 05:26:03.045014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:25.909 [2024-04-24 05:26:03.054996] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:25.909 [2024-04-24 05:26:03.055030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.909 [2024-04-24 05:26:03.055049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:25.909 [2024-04-24 05:26:03.064777] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:25.909 [2024-04-24 05:26:03.064820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.909 [2024-04-24 05:26:03.064837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.909 [2024-04-24 05:26:03.074486] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:25.909 [2024-04-24 05:26:03.074519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.910 [2024-04-24 05:26:03.074538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:25.910 [2024-04-24 05:26:03.084265] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:25.910 [2024-04-24 05:26:03.084299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.910 [2024-04-24 05:26:03.084319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:30:25.910 [2024-04-24 05:26:03.094046] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:25.910 [2024-04-24 05:26:03.094079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.910 [2024-04-24 05:26:03.094098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:25.910 [2024-04-24 05:26:03.103873] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:25.910 [2024-04-24 05:26:03.103917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.910 [2024-04-24 05:26:03.103933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.910 [2024-04-24 05:26:03.113929] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:25.910 [2024-04-24 05:26:03.113989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.910 [2024-04-24 05:26:03.114009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:25.910 [2024-04-24 05:26:03.123939] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:25.910 [2024-04-24 05:26:03.123968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.910 [2024-04-24 05:26:03.124001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:25.910 [2024-04-24 05:26:03.133857] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:25.910 [2024-04-24 05:26:03.133902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.910 [2024-04-24 05:26:03.133919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:25.910 [2024-04-24 05:26:03.143860] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:25.910 [2024-04-24 05:26:03.143888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.910 [2024-04-24 05:26:03.143905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.910 [2024-04-24 05:26:03.153542] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:25.910 [2024-04-24 05:26:03.153574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.910 [2024-04-24 05:26:03.153593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:25.910 [2024-04-24 05:26:03.163371] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:25.910 [2024-04-24 05:26:03.163403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.910 [2024-04-24 
05:26:03.163422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:25.910 [2024-04-24 05:26:03.173129] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:25.910 [2024-04-24 05:26:03.173164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.910 [2024-04-24 05:26:03.173183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.168 [2024-04-24 05:26:03.183027] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:26.168 [2024-04-24 05:26:03.183073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.168 [2024-04-24 05:26:03.183109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.168 [2024-04-24 05:26:03.192887] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:26.168 [2024-04-24 05:26:03.192917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.168 [2024-04-24 05:26:03.192936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.168 [2024-04-24 05:26:03.202572] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:26.168 [2024-04-24 05:26:03.202607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.168 [2024-04-24 05:26:03.202626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.168 [2024-04-24 05:26:03.212903] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:26.168 [2024-04-24 05:26:03.212933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.168 [2024-04-24 05:26:03.212969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.168 [2024-04-24 05:26:03.222850] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:26.168 [2024-04-24 05:26:03.222881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.168 [2024-04-24 05:26:03.222898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.168 [2024-04-24 05:26:03.233375] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:26.168 [2024-04-24 05:26:03.233410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.168 [2024-04-24 05:26:03.233429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.168 [2024-04-24 05:26:03.243763] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:26.168 [2024-04-24 05:26:03.243793] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.168 [2024-04-24 05:26:03.243810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.168 [2024-04-24 05:26:03.253544] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:26.168 [2024-04-24 05:26:03.253577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.168 [2024-04-24 05:26:03.253596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.168 [2024-04-24 05:26:03.263350] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:26.168 [2024-04-24 05:26:03.263383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.168 [2024-04-24 05:26:03.263402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.168 [2024-04-24 05:26:03.273170] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:26.168 [2024-04-24 05:26:03.273204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.168 [2024-04-24 05:26:03.273223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.168 [2024-04-24 05:26:03.282921] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:26.168 [2024-04-24 
05:26:03.282950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.168 [2024-04-24 05:26:03.282975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.168 [2024-04-24 05:26:03.292660] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:26.168 [2024-04-24 05:26:03.292707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.168 [2024-04-24 05:26:03.292723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.168 [2024-04-24 05:26:03.302348] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:26.168 [2024-04-24 05:26:03.302382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.168 [2024-04-24 05:26:03.302401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.168 [2024-04-24 05:26:03.312435] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:26.168 [2024-04-24 05:26:03.312468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.168 [2024-04-24 05:26:03.312487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.168 [2024-04-24 05:26:03.323147] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xb165f0) 00:30:26.168 [2024-04-24 05:26:03.323182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.168 [2024-04-24 05:26:03.323202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.168 [2024-04-24 05:26:03.333289] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:26.168 [2024-04-24 05:26:03.333322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.168 [2024-04-24 05:26:03.333342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.168 [2024-04-24 05:26:03.343789] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:26.168 [2024-04-24 05:26:03.343819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.168 [2024-04-24 05:26:03.343836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.168 [2024-04-24 05:26:03.353527] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:26.168 [2024-04-24 05:26:03.353560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.168 [2024-04-24 05:26:03.353579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.168 [2024-04-24 05:26:03.363272] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0)
00:30:26.168 [2024-04-24 05:26:03.363305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.168 [2024-04-24 05:26:03.363324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:26.168 [2024-04-24 05:26:03.373058] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0)
00:30:26.168 [2024-04-24 05:26:03.373091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.168 [2024-04-24 05:26:03.373110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:26.168 [2024-04-24 05:26:03.382848] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0)
00:30:26.168 [2024-04-24 05:26:03.382878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.168 [2024-04-24 05:26:03.382894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-record sequence — nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done *ERROR* data digest error on tqpair=(0xb165f0), nvme_qpair.c: 243:nvme_io_qpair_print_command *NOTICE* READ sqid:1 cid:15 nsid:1 len:32, nvme_qpair.c: 474:spdk_nvme_print_completion *NOTICE* COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 — repeats roughly every 10 ms with varying lba values and sqhd cycling 0021/0041/0061/0001, from 2024-04-24 05:26:03.392 through 2024-04-24 05:26:04.148 (elapsed stamps 00:30:26.168 through 00:30:26.945) ...]
00:30:26.945 [2024-04-24 05:26:04.158431] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:26.945 [2024-04-24 05:26:04.158464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.945 [2024-04-24 05:26:04.158483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.945 [2024-04-24 05:26:04.168113] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:26.945 [2024-04-24 05:26:04.168147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.945 [2024-04-24 05:26:04.168166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.945 [2024-04-24 05:26:04.177947] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:26.945 [2024-04-24 05:26:04.177976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.945 [2024-04-24 05:26:04.178008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.945 [2024-04-24 05:26:04.187830] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:26.945 [2024-04-24 05:26:04.187875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.945 [2024-04-24 05:26:04.187893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:30:26.945 [2024-04-24 05:26:04.197636] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:26.945 [2024-04-24 05:26:04.197669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.945 [2024-04-24 05:26:04.197688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.945 [2024-04-24 05:26:04.207330] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:26.945 [2024-04-24 05:26:04.207363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.945 [2024-04-24 05:26:04.207382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.202 [2024-04-24 05:26:04.217188] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.202 [2024-04-24 05:26:04.217244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.202 [2024-04-24 05:26:04.217278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:27.202 [2024-04-24 05:26:04.226898] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.202 [2024-04-24 05:26:04.226945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.202 [2024-04-24 05:26:04.226966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:27.202 [2024-04-24 05:26:04.236443] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.202 [2024-04-24 05:26:04.236478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.202 [2024-04-24 05:26:04.236497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:27.202 [2024-04-24 05:26:04.246147] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.202 [2024-04-24 05:26:04.246181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.202 [2024-04-24 05:26:04.246200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.202 [2024-04-24 05:26:04.255835] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.202 [2024-04-24 05:26:04.255866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.202 [2024-04-24 05:26:04.255883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:27.202 [2024-04-24 05:26:04.265922] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.202 [2024-04-24 05:26:04.265970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.202 [2024-04-24 
05:26:04.265989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:27.202 [2024-04-24 05:26:04.275704] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.202 [2024-04-24 05:26:04.275734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.202 [2024-04-24 05:26:04.275750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:27.202 [2024-04-24 05:26:04.285438] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.202 [2024-04-24 05:26:04.285473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.202 [2024-04-24 05:26:04.285492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.202 [2024-04-24 05:26:04.295228] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.202 [2024-04-24 05:26:04.295262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.202 [2024-04-24 05:26:04.295281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:27.202 [2024-04-24 05:26:04.304906] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.202 [2024-04-24 05:26:04.304953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.202 [2024-04-24 05:26:04.304973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:27.202 [2024-04-24 05:26:04.314603] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.202 [2024-04-24 05:26:04.314646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.202 [2024-04-24 05:26:04.314668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:27.202 [2024-04-24 05:26:04.324264] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.202 [2024-04-24 05:26:04.324298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.202 [2024-04-24 05:26:04.324317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.202 [2024-04-24 05:26:04.334108] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.202 [2024-04-24 05:26:04.334142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.202 [2024-04-24 05:26:04.334161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:27.202 [2024-04-24 05:26:04.343908] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.202 [2024-04-24 05:26:04.343955] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.202 [2024-04-24 05:26:04.343973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:27.202 [2024-04-24 05:26:04.353605] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.203 [2024-04-24 05:26:04.353649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.203 [2024-04-24 05:26:04.353671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:27.203 [2024-04-24 05:26:04.363362] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.203 [2024-04-24 05:26:04.363396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.203 [2024-04-24 05:26:04.363415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.203 [2024-04-24 05:26:04.373057] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.203 [2024-04-24 05:26:04.373092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.203 [2024-04-24 05:26:04.373112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:27.203 [2024-04-24 05:26:04.382727] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xb165f0) 00:30:27.203 [2024-04-24 05:26:04.382757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.203 [2024-04-24 05:26:04.382798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:27.203 [2024-04-24 05:26:04.392461] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.203 [2024-04-24 05:26:04.392495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.203 [2024-04-24 05:26:04.392513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:27.203 [2024-04-24 05:26:04.402220] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.203 [2024-04-24 05:26:04.402255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.203 [2024-04-24 05:26:04.402274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.203 [2024-04-24 05:26:04.412017] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.203 [2024-04-24 05:26:04.412051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.203 [2024-04-24 05:26:04.412070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:27.203 [2024-04-24 05:26:04.421940] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.203 [2024-04-24 05:26:04.421987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.203 [2024-04-24 05:26:04.422006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:27.203 [2024-04-24 05:26:04.431620] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.203 [2024-04-24 05:26:04.431677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.203 [2024-04-24 05:26:04.431694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:27.203 [2024-04-24 05:26:04.441324] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.203 [2024-04-24 05:26:04.441357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.203 [2024-04-24 05:26:04.441375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.203 [2024-04-24 05:26:04.451069] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.203 [2024-04-24 05:26:04.451102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.203 [2024-04-24 05:26:04.451121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:30:27.203 [2024-04-24 05:26:04.460776] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.203 [2024-04-24 05:26:04.460806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.203 [2024-04-24 05:26:04.460837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:27.203 [2024-04-24 05:26:04.470494] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.203 [2024-04-24 05:26:04.470531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.203 [2024-04-24 05:26:04.470551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:27.461 [2024-04-24 05:26:04.480318] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.461 [2024-04-24 05:26:04.480355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.461 [2024-04-24 05:26:04.480375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.461 [2024-04-24 05:26:04.489892] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.461 [2024-04-24 05:26:04.489939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.461 [2024-04-24 05:26:04.489956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:27.461 [2024-04-24 05:26:04.499654] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.461 [2024-04-24 05:26:04.499702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.461 [2024-04-24 05:26:04.499720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:27.461 [2024-04-24 05:26:04.509381] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.461 [2024-04-24 05:26:04.509415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.461 [2024-04-24 05:26:04.509434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:27.461 [2024-04-24 05:26:04.519256] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.461 [2024-04-24 05:26:04.519290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.461 [2024-04-24 05:26:04.519310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.461 [2024-04-24 05:26:04.528950] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.461 [2024-04-24 05:26:04.528997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.461 [2024-04-24 
05:26:04.529016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:27.461 [2024-04-24 05:26:04.538675] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.461 [2024-04-24 05:26:04.538720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.461 [2024-04-24 05:26:04.538737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:27.461 [2024-04-24 05:26:04.548378] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.461 [2024-04-24 05:26:04.548410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.461 [2024-04-24 05:26:04.548438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:27.461 [2024-04-24 05:26:04.558058] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.461 [2024-04-24 05:26:04.558092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.461 [2024-04-24 05:26:04.558111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.461 [2024-04-24 05:26:04.567670] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.461 [2024-04-24 05:26:04.567715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.461 [2024-04-24 05:26:04.567733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:27.461 [2024-04-24 05:26:04.577362] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.461 [2024-04-24 05:26:04.577396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.461 [2024-04-24 05:26:04.577415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:27.461 [2024-04-24 05:26:04.587080] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.461 [2024-04-24 05:26:04.587114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.461 [2024-04-24 05:26:04.587133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:27.461 [2024-04-24 05:26:04.596833] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.461 [2024-04-24 05:26:04.596863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.461 [2024-04-24 05:26:04.596880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.461 [2024-04-24 05:26:04.606599] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.461 [2024-04-24 05:26:04.606641] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.461 [2024-04-24 05:26:04.606663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:27.461 [2024-04-24 05:26:04.616436] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.461 [2024-04-24 05:26:04.616469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.461 [2024-04-24 05:26:04.616488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:27.461 [2024-04-24 05:26:04.626204] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.461 [2024-04-24 05:26:04.626238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.461 [2024-04-24 05:26:04.626257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:27.461 [2024-04-24 05:26:04.635941] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.461 [2024-04-24 05:26:04.635997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.461 [2024-04-24 05:26:04.636017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.461 [2024-04-24 05:26:04.645833] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xb165f0) 00:30:27.461 [2024-04-24 05:26:04.645864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.461 [2024-04-24 05:26:04.645881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:27.461 [2024-04-24 05:26:04.655462] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.461 [2024-04-24 05:26:04.655495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.461 [2024-04-24 05:26:04.655514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:27.461 [2024-04-24 05:26:04.665309] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.461 [2024-04-24 05:26:04.665342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.461 [2024-04-24 05:26:04.665361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:27.461 [2024-04-24 05:26:04.674993] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.461 [2024-04-24 05:26:04.675027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.461 [2024-04-24 05:26:04.675046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.461 [2024-04-24 05:26:04.684946] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.461 [2024-04-24 05:26:04.684995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.461 [2024-04-24 05:26:04.685014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:27.461 [2024-04-24 05:26:04.694729] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.461 [2024-04-24 05:26:04.694758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.461 [2024-04-24 05:26:04.694775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:27.461 [2024-04-24 05:26:04.704506] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.461 [2024-04-24 05:26:04.704540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.461 [2024-04-24 05:26:04.704559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:27.461 [2024-04-24 05:26:04.714345] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.461 [2024-04-24 05:26:04.714379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.461 [2024-04-24 05:26:04.714398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:30:27.461 [2024-04-24 05:26:04.724202] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.461 [2024-04-24 05:26:04.724235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.461 [2024-04-24 05:26:04.724255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:27.720 [2024-04-24 05:26:04.734049] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.720 [2024-04-24 05:26:04.734087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.720 [2024-04-24 05:26:04.734107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:27.720 [2024-04-24 05:26:04.743745] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.720 [2024-04-24 05:26:04.743781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.720 [2024-04-24 05:26:04.743799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:27.720 [2024-04-24 05:26:04.753482] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.720 [2024-04-24 05:26:04.753517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.720 [2024-04-24 05:26:04.753537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.720 [2024-04-24 05:26:04.763378] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.720 [2024-04-24 05:26:04.763413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.720 [2024-04-24 05:26:04.763433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:27.720 [2024-04-24 05:26:04.773302] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.720 [2024-04-24 05:26:04.773336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.720 [2024-04-24 05:26:04.773355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:27.720 [2024-04-24 05:26:04.783056] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.720 [2024-04-24 05:26:04.783090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.720 [2024-04-24 05:26:04.783109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:27.720 [2024-04-24 05:26:04.792986] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.721 [2024-04-24 05:26:04.793018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.721 [2024-04-24 
05:26:04.793051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.721 [2024-04-24 05:26:04.802811] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.721 [2024-04-24 05:26:04.802842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.721 [2024-04-24 05:26:04.802875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:27.721 [2024-04-24 05:26:04.811817] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.721 [2024-04-24 05:26:04.811848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.721 [2024-04-24 05:26:04.811865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:27.721 [2024-04-24 05:26:04.820714] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.721 [2024-04-24 05:26:04.820744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.721 [2024-04-24 05:26:04.820761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:27.721 [2024-04-24 05:26:04.829620] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.721 [2024-04-24 05:26:04.829660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.721 [2024-04-24 05:26:04.829678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.721 [2024-04-24 05:26:04.838519] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.721 [2024-04-24 05:26:04.838563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.721 [2024-04-24 05:26:04.838580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:27.721 [2024-04-24 05:26:04.847318] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.721 [2024-04-24 05:26:04.847349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.721 [2024-04-24 05:26:04.847366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:27.721 [2024-04-24 05:26:04.856109] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.721 [2024-04-24 05:26:04.856141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.721 [2024-04-24 05:26:04.856158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:27.721 [2024-04-24 05:26:04.865090] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.721 [2024-04-24 05:26:04.865122] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.721 [2024-04-24 05:26:04.865138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.721 [2024-04-24 05:26:04.873894] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.721 [2024-04-24 05:26:04.873941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.721 [2024-04-24 05:26:04.873959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:27.721 [2024-04-24 05:26:04.882726] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.721 [2024-04-24 05:26:04.882759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.721 [2024-04-24 05:26:04.882777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:27.721 [2024-04-24 05:26:04.891712] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.721 [2024-04-24 05:26:04.891745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.721 [2024-04-24 05:26:04.891762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:27.721 [2024-04-24 05:26:04.900703] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xb165f0) 00:30:27.721 [2024-04-24 05:26:04.900749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.721 [2024-04-24 05:26:04.900767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.721 [2024-04-24 05:26:04.909667] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.721 [2024-04-24 05:26:04.909700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.721 [2024-04-24 05:26:04.909717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:27.721 [2024-04-24 05:26:04.918530] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.721 [2024-04-24 05:26:04.918576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.721 [2024-04-24 05:26:04.918593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:27.721 [2024-04-24 05:26:04.927522] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.721 [2024-04-24 05:26:04.927568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.721 [2024-04-24 05:26:04.927586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:27.721 [2024-04-24 05:26:04.936659] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.721 [2024-04-24 05:26:04.936692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.721 [2024-04-24 05:26:04.936709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.721 [2024-04-24 05:26:04.945473] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb165f0) 00:30:27.721 [2024-04-24 05:26:04.945519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.721 [2024-04-24 05:26:04.945536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:27.721 00:30:27.721 Latency(us) 00:30:27.721 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:27.721 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:30:27.721 nvme0n1 : 2.00 3180.70 397.59 0.00 0.00 5026.22 4296.25 11262.48 00:30:27.721 =================================================================================================================== 00:30:27.721 Total : 3180.70 397.59 0.00 0.00 5026.22 4296.25 11262.48 00:30:27.721 0 00:30:27.721 05:26:04 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:27.721 05:26:04 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:27.721 05:26:04 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:27.721 | .driver_specific 00:30:27.721 | .nvme_error 00:30:27.721 | .status_code 00:30:27.721 | .command_transient_transport_error' 00:30:27.721 05:26:04 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_get_iostat -b nvme0n1 00:30:27.979 05:26:05 -- host/digest.sh@71 -- # (( 205 > 0 )) 00:30:27.979 05:26:05 -- host/digest.sh@73 -- # killprocess 2013896 00:30:27.979 05:26:05 -- common/autotest_common.sh@936 -- # '[' -z 2013896 ']' 00:30:27.979 05:26:05 -- common/autotest_common.sh@940 -- # kill -0 2013896 00:30:27.979 05:26:05 -- common/autotest_common.sh@941 -- # uname 00:30:27.979 05:26:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:27.979 05:26:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2013896 00:30:28.237 05:26:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:30:28.237 05:26:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:30:28.238 05:26:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2013896' 00:30:28.238 killing process with pid 2013896 00:30:28.238 05:26:05 -- common/autotest_common.sh@955 -- # kill 2013896 00:30:28.238 Received shutdown signal, test time was about 2.000000 seconds 00:30:28.238 00:30:28.238 Latency(us) 00:30:28.238 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:28.238 =================================================================================================================== 00:30:28.238 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:28.238 05:26:05 -- common/autotest_common.sh@960 -- # wait 2013896 00:30:28.238 05:26:05 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:30:28.238 05:26:05 -- host/digest.sh@54 -- # local rw bs qd 00:30:28.238 05:26:05 -- host/digest.sh@56 -- # rw=randwrite 00:30:28.238 05:26:05 -- host/digest.sh@56 -- # bs=4096 00:30:28.238 05:26:05 -- host/digest.sh@56 -- # qd=128 00:30:28.238 05:26:05 -- host/digest.sh@58 -- # bperfpid=2014299 00:30:28.238 05:26:05 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:30:28.238 05:26:05 -- 
host/digest.sh@60 -- # waitforlisten 2014299 /var/tmp/bperf.sock 00:30:28.238 05:26:05 -- common/autotest_common.sh@817 -- # '[' -z 2014299 ']' 00:30:28.238 05:26:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:28.238 05:26:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:28.238 05:26:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:28.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:28.238 05:26:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:28.238 05:26:05 -- common/autotest_common.sh@10 -- # set +x 00:30:28.499 [2024-04-24 05:26:05.526295] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:30:28.499 [2024-04-24 05:26:05.526370] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2014299 ] 00:30:28.499 EAL: No free 2048 kB hugepages reported on node 1 00:30:28.499 [2024-04-24 05:26:05.558374] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:30:28.499 [2024-04-24 05:26:05.586183] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:28.499 [2024-04-24 05:26:05.668901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:28.763 05:26:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:28.763 05:26:05 -- common/autotest_common.sh@850 -- # return 0 00:30:28.763 05:26:05 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:28.763 05:26:05 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:28.763 05:26:06 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:28.763 05:26:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:28.763 05:26:06 -- common/autotest_common.sh@10 -- # set +x 00:30:29.021 05:26:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:29.021 05:26:06 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:29.021 05:26:06 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:29.588 nvme0n1 00:30:29.588 05:26:06 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:30:29.588 05:26:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:29.588 05:26:06 -- common/autotest_common.sh@10 -- # set +x 00:30:29.588 05:26:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:29.588 05:26:06 -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:29.588 05:26:06 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:29.588 
Running I/O for 2 seconds... 00:30:29.588 [2024-04-24 05:26:06.704908] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190df988 00:30:29.588 [2024-04-24 05:26:06.705818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.588 [2024-04-24 05:26:06.705860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:29.588 [2024-04-24 05:26:06.719205] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190df988 00:30:29.588 [2024-04-24 05:26:06.720061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.588 [2024-04-24 05:26:06.720096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:29.588 [2024-04-24 05:26:06.731855] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190de8a8 00:30:29.588 [2024-04-24 05:26:06.732704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.588 [2024-04-24 05:26:06.732733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:29.588 [2024-04-24 05:26:06.749569] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190e01f8 00:30:29.588 [2024-04-24 05:26:06.751301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.588 [2024-04-24 05:26:06.751334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.588 [2024-04-24 05:26:06.762430] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190e01f8 00:30:29.589 [2024-04-24 05:26:06.763980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.589 [2024-04-24 05:26:06.764007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.589 [2024-04-24 05:26:06.779290] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190e01f8 00:30:29.589 [2024-04-24 05:26:06.781498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.589 [2024-04-24 05:26:06.781537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.589 [2024-04-24 05:26:06.791433] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fe720 00:30:29.589 [2024-04-24 05:26:06.792991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.589 [2024-04-24 05:26:06.793024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:29.589 [2024-04-24 05:26:06.805182] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190df118 00:30:29.589 [2024-04-24 05:26:06.806783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.589 [2024-04-24 05:26:06.806828] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:29.589 [2024-04-24 05:26:06.819164] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190eff18 00:30:29.589 [2024-04-24 05:26:06.820888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.589 [2024-04-24 05:26:06.820933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:29.589 [2024-04-24 05:26:06.831885] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190e12d8 00:30:29.589 [2024-04-24 05:26:06.833375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.589 [2024-04-24 05:26:06.833407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:29.589 [2024-04-24 05:26:06.845640] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190f0788 00:30:29.589 [2024-04-24 05:26:06.847112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.589 [2024-04-24 05:26:06.847144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:29.848 [2024-04-24 05:26:06.859370] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190e1b48 00:30:29.848 [2024-04-24 05:26:06.860879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:10888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.848 
[2024-04-24 05:26:06.860927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:29.848 [2024-04-24 05:26:06.873988] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190eb760 00:30:29.848 [2024-04-24 05:26:06.875622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:6063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.848 [2024-04-24 05:26:06.875678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:29.848 [2024-04-24 05:26:06.887604] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190e6b70 00:30:29.848 [2024-04-24 05:26:06.889121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:23750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.848 [2024-04-24 05:26:06.889152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:29.848 [2024-04-24 05:26:06.900964] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190eaef0 00:30:29.848 [2024-04-24 05:26:06.902463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.848 [2024-04-24 05:26:06.902490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:29.848 [2024-04-24 05:26:06.912179] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190e73e0 00:30:29.848 [2024-04-24 05:26:06.913043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:7863 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:30:29.848 [2024-04-24 05:26:06.913072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:29.848 [2024-04-24 05:26:06.924924] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190f20d8 00:30:29.848 [2024-04-24 05:26:06.925741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.848 [2024-04-24 05:26:06.925773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:29.848 [2024-04-24 05:26:06.937978] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190ff3c8 00:30:29.848 [2024-04-24 05:26:06.938899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.848 [2024-04-24 05:26:06.938927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:29.848 [2024-04-24 05:26:06.950921] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190f2948 00:30:29.848 [2024-04-24 05:26:06.951771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.848 [2024-04-24 05:26:06.951799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:29.848 [2024-04-24 05:26:06.962526] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190e84c0 00:30:29.848 [2024-04-24 05:26:06.963369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.848 [2024-04-24 05:26:06.963396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:29.848 [2024-04-24 05:26:06.977809] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190e84c0 00:30:29.848 [2024-04-24 05:26:06.979262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.848 [2024-04-24 05:26:06.979290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:29.848 [2024-04-24 05:26:06.991269] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190f20d8 00:30:29.848 [2024-04-24 05:26:06.992858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.848 [2024-04-24 05:26:06.992888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:29.848 [2024-04-24 05:26:07.003900] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190efae0 00:30:29.848 [2024-04-24 05:26:07.005467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.848 [2024-04-24 05:26:07.005494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:29.848 [2024-04-24 05:26:07.017250] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190f3a28 00:30:29.848 [2024-04-24 05:26:07.018987] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:15339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.848 [2024-04-24 05:26:07.019014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:29.848 [2024-04-24 05:26:07.029925] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190f7970 00:30:29.849 [2024-04-24 05:26:07.031636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.849 [2024-04-24 05:26:07.031672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:29.849 [2024-04-24 05:26:07.043112] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190dece0 00:30:29.849 [2024-04-24 05:26:07.044957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.849 [2024-04-24 05:26:07.044984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:29.849 [2024-04-24 05:26:07.054216] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fb8b8 00:30:29.849 [2024-04-24 05:26:07.055454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.849 [2024-04-24 05:26:07.055481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:29.849 [2024-04-24 05:26:07.066945] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190eea00 00:30:29.849 
[2024-04-24 05:26:07.068232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:3006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.849 [2024-04-24 05:26:07.068273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:29.849 [2024-04-24 05:26:07.079585] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190e5658 00:30:29.849 [2024-04-24 05:26:07.081064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.849 [2024-04-24 05:26:07.081091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:29.849 [2024-04-24 05:26:07.092474] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190ee190 00:30:29.849 [2024-04-24 05:26:07.093876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:9364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.849 [2024-04-24 05:26:07.093904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:29.849 [2024-04-24 05:26:07.105319] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190f4298 00:30:29.849 [2024-04-24 05:26:07.106658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.849 [2024-04-24 05:26:07.106693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:29.849 [2024-04-24 05:26:07.118204] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) 
with pdu=0x2000190ed920 00:30:30.109 [2024-04-24 05:26:07.119548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.109 [2024-04-24 05:26:07.119599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:30.109 [2024-04-24 05:26:07.129987] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190f9f68 00:30:30.109 [2024-04-24 05:26:07.131204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:8038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.109 [2024-04-24 05:26:07.131233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:30.109 [2024-04-24 05:26:07.143302] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190f81e0 00:30:30.109 [2024-04-24 05:26:07.144657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:8765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.109 [2024-04-24 05:26:07.144686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:30.109 [2024-04-24 05:26:07.158465] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190f81e0 00:30:30.109 [2024-04-24 05:26:07.160391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.109 [2024-04-24 05:26:07.160432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:30.109 [2024-04-24 05:26:07.171179] tcp.c:2047:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xb10a90) with pdu=0x2000190e2c28 00:30:30.109 [2024-04-24 05:26:07.173128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.109 [2024-04-24 05:26:07.173156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:30.109 [2024-04-24 05:26:07.183882] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190f8a50 00:30:30.109 [2024-04-24 05:26:07.185881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.109 [2024-04-24 05:26:07.185910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:30.109 [2024-04-24 05:26:07.196532] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190e23b8 00:30:30.109 [2024-04-24 05:26:07.198445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.109 [2024-04-24 05:26:07.198472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:30.109 [2024-04-24 05:26:07.205699] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190f3e60 00:30:30.109 [2024-04-24 05:26:07.206518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:19895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.109 [2024-04-24 05:26:07.206545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:30.109 [2024-04-24 
05:26:07.220214] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190e73e0 00:30:30.109 [2024-04-24 05:26:07.221749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.109 [2024-04-24 05:26:07.221777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:30.109 [2024-04-24 05:26:07.231318] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fcdd0 00:30:30.109 [2024-04-24 05:26:07.232193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.109 [2024-04-24 05:26:07.232220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:30.109 [2024-04-24 05:26:07.244131] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fac10 00:30:30.109 [2024-04-24 05:26:07.244976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.109 [2024-04-24 05:26:07.245004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:30.109 [2024-04-24 05:26:07.257014] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190ef6a8 00:30:30.109 [2024-04-24 05:26:07.257843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.109 [2024-04-24 05:26:07.257871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 
sqhd:0045 p:0 m:0 dnr:0 00:30:30.109 [2024-04-24 05:26:07.269682] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190ddc00 00:30:30.109 [2024-04-24 05:26:07.270493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.109 [2024-04-24 05:26:07.270520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:30.109 [2024-04-24 05:26:07.282241] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190eee38 00:30:30.109 [2024-04-24 05:26:07.283096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:22734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.109 [2024-04-24 05:26:07.283124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:30.109 [2024-04-24 05:26:07.293883] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190ee5c8 00:30:30.109 [2024-04-24 05:26:07.294635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:14959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.109 [2024-04-24 05:26:07.294678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:30.109 [2024-04-24 05:26:07.307156] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190f2d80 00:30:30.109 [2024-04-24 05:26:07.308160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.109 [2024-04-24 05:26:07.308187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:30.110 [2024-04-24 05:26:07.320984] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190f2d80 00:30:30.110 [2024-04-24 05:26:07.321935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.110 [2024-04-24 05:26:07.321978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:30.110 [2024-04-24 05:26:07.335161] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.110 [2024-04-24 05:26:07.335446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.110 [2024-04-24 05:26:07.335488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.110 [2024-04-24 05:26:07.350241] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.110 [2024-04-24 05:26:07.350544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:12086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.110 [2024-04-24 05:26:07.350570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.110 [2024-04-24 05:26:07.365266] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.110 [2024-04-24 05:26:07.365553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:11692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.110 [2024-04-24 05:26:07.365580] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.369 [2024-04-24 05:26:07.380482] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.369 [2024-04-24 05:26:07.380792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.369 [2024-04-24 05:26:07.380827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.369 [2024-04-24 05:26:07.395367] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.369 [2024-04-24 05:26:07.395648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.369 [2024-04-24 05:26:07.395677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.369 [2024-04-24 05:26:07.410864] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.369 [2024-04-24 05:26:07.411151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:24247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.369 [2024-04-24 05:26:07.411179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.370 [2024-04-24 05:26:07.426092] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.370 [2024-04-24 05:26:07.426378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.370 
[2024-04-24 05:26:07.426420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.370 [2024-04-24 05:26:07.441504] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.370 [2024-04-24 05:26:07.441827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:6720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.370 [2024-04-24 05:26:07.441855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.370 [2024-04-24 05:26:07.456726] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.370 [2024-04-24 05:26:07.457026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.370 [2024-04-24 05:26:07.457053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.370 [2024-04-24 05:26:07.471861] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.370 [2024-04-24 05:26:07.472119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.370 [2024-04-24 05:26:07.472153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.370 [2024-04-24 05:26:07.486550] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.370 [2024-04-24 05:26:07.486847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:14209 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:30:30.370 [2024-04-24 05:26:07.486875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.370 [2024-04-24 05:26:07.501590] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.370 [2024-04-24 05:26:07.501856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:18781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.370 [2024-04-24 05:26:07.501884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.370 [2024-04-24 05:26:07.516801] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.370 [2024-04-24 05:26:07.517088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.370 [2024-04-24 05:26:07.517115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.370 [2024-04-24 05:26:07.532159] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.370 [2024-04-24 05:26:07.532461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:14903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.370 [2024-04-24 05:26:07.532487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.370 [2024-04-24 05:26:07.547415] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.370 [2024-04-24 05:26:07.547703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:98 nsid:1 lba:13303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.370 [2024-04-24 05:26:07.547731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.370 [2024-04-24 05:26:07.562414] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.370 [2024-04-24 05:26:07.562717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.370 [2024-04-24 05:26:07.562743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.370 [2024-04-24 05:26:07.577636] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.370 [2024-04-24 05:26:07.577938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.370 [2024-04-24 05:26:07.577964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.370 [2024-04-24 05:26:07.592650] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.370 [2024-04-24 05:26:07.592918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.370 [2024-04-24 05:26:07.592946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.370 [2024-04-24 05:26:07.607755] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.370 [2024-04-24 05:26:07.608066] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.370 [2024-04-24 05:26:07.608093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.370 [2024-04-24 05:26:07.622940] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.370 [2024-04-24 05:26:07.623226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:16494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.370 [2024-04-24 05:26:07.623253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.370 [2024-04-24 05:26:07.637826] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.370 [2024-04-24 05:26:07.638117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.370 [2024-04-24 05:26:07.638148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.629 [2024-04-24 05:26:07.652570] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.630 [2024-04-24 05:26:07.652871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:18025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.630 [2024-04-24 05:26:07.652900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.630 [2024-04-24 05:26:07.667828] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.630 
[2024-04-24 05:26:07.668099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.630 [2024-04-24 05:26:07.668126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.630 [2024-04-24 05:26:07.682899] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.630 [2024-04-24 05:26:07.683181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:18511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.630 [2024-04-24 05:26:07.683209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.630 [2024-04-24 05:26:07.697888] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.630 [2024-04-24 05:26:07.698185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:18858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.630 [2024-04-24 05:26:07.698212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.630 [2024-04-24 05:26:07.712838] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.630 [2024-04-24 05:26:07.713111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.630 [2024-04-24 05:26:07.713136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.630 [2024-04-24 05:26:07.727938] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.630 [2024-04-24 05:26:07.728196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:3268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.630 [2024-04-24 05:26:07.728223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.630 [2024-04-24 05:26:07.742611] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.630 [2024-04-24 05:26:07.742878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.630 [2024-04-24 05:26:07.742905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.630 [2024-04-24 05:26:07.757652] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.630 [2024-04-24 05:26:07.757944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.630 [2024-04-24 05:26:07.757985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.630 [2024-04-24 05:26:07.772806] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.630 [2024-04-24 05:26:07.773079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.630 [2024-04-24 05:26:07.773105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.630 [2024-04-24 05:26:07.787954] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.630 [2024-04-24 05:26:07.788234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.630 [2024-04-24 05:26:07.788260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.630 [2024-04-24 05:26:07.803079] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.630 [2024-04-24 05:26:07.803379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.630 [2024-04-24 05:26:07.803405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.630 [2024-04-24 05:26:07.818075] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.630 [2024-04-24 05:26:07.818378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.630 [2024-04-24 05:26:07.818405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.630 [2024-04-24 05:26:07.833218] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.630 [2024-04-24 05:26:07.833478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.630 [2024-04-24 05:26:07.833505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:30:30.630 [2024-04-24 05:26:07.848292] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.630 [2024-04-24 05:26:07.848587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:23277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.630 [2024-04-24 05:26:07.848636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.630 [2024-04-24 05:26:07.863330] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.630 [2024-04-24 05:26:07.863602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.630 [2024-04-24 05:26:07.863657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.630 [2024-04-24 05:26:07.878380] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.630 [2024-04-24 05:26:07.878683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.630 [2024-04-24 05:26:07.878710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.630 [2024-04-24 05:26:07.893579] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.630 [2024-04-24 05:26:07.893849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:17881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.630 [2024-04-24 05:26:07.893877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.890 [2024-04-24 05:26:07.908232] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.890 [2024-04-24 05:26:07.908534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:3189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.890 [2024-04-24 05:26:07.908563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.890 [2024-04-24 05:26:07.923034] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.890 [2024-04-24 05:26:07.923303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.890 [2024-04-24 05:26:07.923330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.890 [2024-04-24 05:26:07.938061] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.890 [2024-04-24 05:26:07.938358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.890 [2024-04-24 05:26:07.938386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.890 [2024-04-24 05:26:07.953079] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.890 [2024-04-24 05:26:07.953368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.890 [2024-04-24 05:26:07.953396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.890 [2024-04-24 05:26:07.968036] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.890 [2024-04-24 05:26:07.968342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.890 [2024-04-24 05:26:07.968370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.891 [2024-04-24 05:26:07.982851] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.891 [2024-04-24 05:26:07.983140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.891 [2024-04-24 05:26:07.983168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.891 [2024-04-24 05:26:07.997608] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.891 [2024-04-24 05:26:07.997875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:6562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.891 [2024-04-24 05:26:07.997923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.891 [2024-04-24 05:26:08.012422] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.891 [2024-04-24 05:26:08.012761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.891 [2024-04-24 05:26:08.012788] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.891 [2024-04-24 05:26:08.027485] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.891 [2024-04-24 05:26:08.027780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.891 [2024-04-24 05:26:08.027808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.891 [2024-04-24 05:26:08.042312] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.891 [2024-04-24 05:26:08.042587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.891 [2024-04-24 05:26:08.042614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.891 [2024-04-24 05:26:08.057317] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.891 [2024-04-24 05:26:08.057586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.891 [2024-04-24 05:26:08.057613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.891 [2024-04-24 05:26:08.072231] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.891 [2024-04-24 05:26:08.072518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.891 
[2024-04-24 05:26:08.072544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.891 [2024-04-24 05:26:08.087403] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.891 [2024-04-24 05:26:08.087690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.891 [2024-04-24 05:26:08.087717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.891 [2024-04-24 05:26:08.103181] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.891 [2024-04-24 05:26:08.103486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:18264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.891 [2024-04-24 05:26:08.103512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.891 [2024-04-24 05:26:08.118692] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.891 [2024-04-24 05:26:08.118972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.891 [2024-04-24 05:26:08.118997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.891 [2024-04-24 05:26:08.134234] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.891 [2024-04-24 05:26:08.134557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:14564 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:30:30.891 [2024-04-24 05:26:08.134582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:30.891 [2024-04-24 05:26:08.149920] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:30.891 [2024-04-24 05:26:08.150237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.891 [2024-04-24 05:26:08.150267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:31.149 [2024-04-24 05:26:08.165570] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:31.149 [2024-04-24 05:26:08.165887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.149 [2024-04-24 05:26:08.165930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:31.149 [2024-04-24 05:26:08.181142] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:31.149 [2024-04-24 05:26:08.181461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.149 [2024-04-24 05:26:08.181488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:31.149 [2024-04-24 05:26:08.196854] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:31.149 [2024-04-24 05:26:08.197132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:113 nsid:1 lba:2853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.149 [2024-04-24 05:26:08.197158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:31.149 [2024-04-24 05:26:08.212486] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:31.149 [2024-04-24 05:26:08.212799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:3789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.149 [2024-04-24 05:26:08.212826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:31.149 [2024-04-24 05:26:08.228100] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:31.149 [2024-04-24 05:26:08.228398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.150 [2024-04-24 05:26:08.228423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:31.150 [2024-04-24 05:26:08.243808] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:31.150 [2024-04-24 05:26:08.244141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.150 [2024-04-24 05:26:08.244172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:31.150 [2024-04-24 05:26:08.259168] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:31.150 [2024-04-24 05:26:08.259471] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.150 [2024-04-24 05:26:08.259498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:31.150 [2024-04-24 05:26:08.274728] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:31.150 [2024-04-24 05:26:08.275041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.150 [2024-04-24 05:26:08.275067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:31.150 [2024-04-24 05:26:08.290339] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:31.150 [2024-04-24 05:26:08.290645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:18555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.150 [2024-04-24 05:26:08.290687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:31.150 [2024-04-24 05:26:08.305995] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:31.150 [2024-04-24 05:26:08.306293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:14954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.150 [2024-04-24 05:26:08.306319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:31.150 [2024-04-24 05:26:08.321641] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:31.150 
[2024-04-24 05:26:08.321937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.150 [2024-04-24 05:26:08.321963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:31.150 [2024-04-24 05:26:08.337213] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:31.150 [2024-04-24 05:26:08.337521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.150 [2024-04-24 05:26:08.337551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:31.150 [2024-04-24 05:26:08.352806] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:31.150 [2024-04-24 05:26:08.353096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.150 [2024-04-24 05:26:08.353123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:31.150 [2024-04-24 05:26:08.368327] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:31.150 [2024-04-24 05:26:08.368613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.150 [2024-04-24 05:26:08.368645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:31.150 [2024-04-24 05:26:08.383955] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:31.150 [2024-04-24 05:26:08.384267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.150 [2024-04-24 05:26:08.384298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:31.150 [2024-04-24 05:26:08.399648] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:31.150 [2024-04-24 05:26:08.399934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.150 [2024-04-24 05:26:08.399967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:31.150 [2024-04-24 05:26:08.415404] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:31.150 [2024-04-24 05:26:08.415717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:9362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.150 [2024-04-24 05:26:08.415745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:31.409 [2024-04-24 05:26:08.431070] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:31.409 [2024-04-24 05:26:08.431376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:18121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.409 [2024-04-24 05:26:08.431405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:31.409 [2024-04-24 05:26:08.446790] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:31.409 [2024-04-24 05:26:08.447071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.409 [2024-04-24 05:26:08.447097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:31.409 [2024-04-24 05:26:08.462482] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:31.409 [2024-04-24 05:26:08.462792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:24913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.409 [2024-04-24 05:26:08.462820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:31.409 [2024-04-24 05:26:08.478183] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:31.409 [2024-04-24 05:26:08.478512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.409 [2024-04-24 05:26:08.478537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:31.409 [2024-04-24 05:26:08.493769] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:31.409 [2024-04-24 05:26:08.494068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.409 [2024-04-24 05:26:08.494103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:30:31.409 [2024-04-24 05:26:08.508987] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:31.409 [2024-04-24 05:26:08.509297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:24086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.409 [2024-04-24 05:26:08.509323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:31.409 [2024-04-24 05:26:08.524570] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:31.409 [2024-04-24 05:26:08.524876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.409 [2024-04-24 05:26:08.524903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:31.409 [2024-04-24 05:26:08.540066] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:31.409 [2024-04-24 05:26:08.540376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.409 [2024-04-24 05:26:08.540401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:31.409 [2024-04-24 05:26:08.555564] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:31.409 [2024-04-24 05:26:08.555866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:14325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.409 [2024-04-24 05:26:08.555893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:31.409 [2024-04-24 05:26:08.571257] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:31.409 [2024-04-24 05:26:08.571566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.409 [2024-04-24 05:26:08.571595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:31.409 [2024-04-24 05:26:08.586874] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:31.409 [2024-04-24 05:26:08.587199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.409 [2024-04-24 05:26:08.587230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:31.409 [2024-04-24 05:26:08.602578] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:31.409 [2024-04-24 05:26:08.602889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.409 [2024-04-24 05:26:08.602915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:31.409 [2024-04-24 05:26:08.618245] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:31.409 [2024-04-24 05:26:08.618550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.409 [2024-04-24 05:26:08.618580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:31.409 [2024-04-24 05:26:08.633942] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:31.409 [2024-04-24 05:26:08.634249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.409 [2024-04-24 05:26:08.634279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:31.409 [2024-04-24 05:26:08.649595] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:31.409 [2024-04-24 05:26:08.649902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.409 [2024-04-24 05:26:08.649942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:31.409 [2024-04-24 05:26:08.665211] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:31.409 [2024-04-24 05:26:08.665516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.409 [2024-04-24 05:26:08.665546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:31.667 [2024-04-24 05:26:08.680803] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:31.667 [2024-04-24 05:26:08.681114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:15623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.667 [2024-04-24 05:26:08.681142] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:31.667 [2024-04-24 05:26:08.696414] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10a90) with pdu=0x2000190fbcf0 00:30:31.667 [2024-04-24 05:26:08.696700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.667 [2024-04-24 05:26:08.696728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:31.667 00:30:31.667 Latency(us) 00:30:31.667 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:31.667 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:31.667 nvme0n1 : 2.01 17576.16 68.66 0.00 0.00 7264.81 2985.53 16505.36 00:30:31.667 =================================================================================================================== 00:30:31.667 Total : 17576.16 68.66 0.00 0.00 7264.81 2985.53 16505.36 00:30:31.667 0 00:30:31.667 05:26:08 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:31.667 05:26:08 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:31.667 05:26:08 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:31.667 | .driver_specific 00:30:31.667 | .nvme_error 00:30:31.667 | .status_code 00:30:31.667 | .command_transient_transport_error' 00:30:31.667 05:26:08 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:31.926 05:26:08 -- host/digest.sh@71 -- # (( 138 > 0 )) 00:30:31.926 05:26:08 -- host/digest.sh@73 -- # killprocess 2014299 00:30:31.926 05:26:08 -- common/autotest_common.sh@936 -- # '[' -z 2014299 ']' 00:30:31.926 05:26:08 -- common/autotest_common.sh@940 -- # kill -0 2014299 00:30:31.926 05:26:08 -- 
common/autotest_common.sh@941 -- # uname 00:30:31.926 05:26:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:31.926 05:26:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2014299 00:30:31.926 05:26:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:30:31.926 05:26:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:30:31.926 05:26:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2014299' 00:30:31.926 killing process with pid 2014299 00:30:31.926 05:26:08 -- common/autotest_common.sh@955 -- # kill 2014299 00:30:31.926 Received shutdown signal, test time was about 2.000000 seconds 00:30:31.926 00:30:31.926 Latency(us) 00:30:31.926 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:31.926 =================================================================================================================== 00:30:31.926 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:31.926 05:26:08 -- common/autotest_common.sh@960 -- # wait 2014299 00:30:32.186 05:26:09 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:30:32.186 05:26:09 -- host/digest.sh@54 -- # local rw bs qd 00:30:32.186 05:26:09 -- host/digest.sh@56 -- # rw=randwrite 00:30:32.186 05:26:09 -- host/digest.sh@56 -- # bs=131072 00:30:32.186 05:26:09 -- host/digest.sh@56 -- # qd=16 00:30:32.186 05:26:09 -- host/digest.sh@58 -- # bperfpid=2014710 00:30:32.186 05:26:09 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:30:32.186 05:26:09 -- host/digest.sh@60 -- # waitforlisten 2014710 /var/tmp/bperf.sock 00:30:32.186 05:26:09 -- common/autotest_common.sh@817 -- # '[' -z 2014710 ']' 00:30:32.186 05:26:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:32.186 05:26:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:32.186 05:26:09 
-- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:32.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:32.186 05:26:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:32.186 05:26:09 -- common/autotest_common.sh@10 -- # set +x 00:30:32.186 [2024-04-24 05:26:09.263321] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:30:32.186 [2024-04-24 05:26:09.263411] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2014710 ] 00:30:32.186 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:32.186 Zero copy mechanism will not be used. 00:30:32.186 EAL: No free 2048 kB hugepages reported on node 1 00:30:32.186 [2024-04-24 05:26:09.300951] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
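The `Data digest error` / `COMMAND TRANSIENT TRANSPORT ERROR (00/22)` lines filling this log are the expected outcome of the test: the host attaches with the NVMe/TCP data digest enabled (`--ddgst`) while CRC-32C corruption is injected through the accel error-injection RPC, so every WRITE payload fails the digest compare in `data_crc32_calc_done`. NVMe/TCP digests use CRC-32C (Castagnoli). As a rough illustrative sketch only, not SPDK's accelerated implementation, a bit-by-bit CRC-32C looks like:

```python
# Bit-by-bit CRC-32C (Castagnoli), the digest algorithm NVMe/TCP uses for
# its header (HDGST) and data (DDGST) digests.  Reflected polynomial
# 0x82F63B78, initial value and final XOR both 0xFFFFFFFF.  Illustrative
# only -- production code uses table lookups or the SSE4.2 crc32
# instruction rather than this loop.
def crc32c(data: bytes) -> int:
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # Shift right; XOR in the reflected polynomial on carry-out.
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Standard CRC-32C check value for the ASCII string "123456789".
print(hex(crc32c(b"123456789")))  # 0xe3069283
```

When the injected corruption makes the computed digest disagree with the DDGST field carried in the PDU, the host fails the compare and completes the command with the transient transport error (00/22) repeated throughout this log.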
00:30:32.186 [2024-04-24 05:26:09.328804] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:32.186 [2024-04-24 05:26:09.412493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:32.444 05:26:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:32.444 05:26:09 -- common/autotest_common.sh@850 -- # return 0 00:30:32.444 05:26:09 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:32.444 05:26:09 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:32.702 05:26:09 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:32.702 05:26:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:32.702 05:26:09 -- common/autotest_common.sh@10 -- # set +x 00:30:32.702 05:26:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:32.702 05:26:09 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:32.702 05:26:09 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:32.961 nvme0n1 00:30:32.961 05:26:10 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:30:32.961 05:26:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:32.961 05:26:10 -- common/autotest_common.sh@10 -- # set +x 00:30:32.961 05:26:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:32.961 05:26:10 -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:32.961 05:26:10 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:33.222 
I/O size of 131072 is greater than zero copy threshold (65536). 00:30:33.222 Zero copy mechanism will not be used. 00:30:33.222 Running I/O for 2 seconds... 00:30:33.222 [2024-04-24 05:26:10.307825] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:33.222 [2024-04-24 05:26:10.308218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.222 [2024-04-24 05:26:10.308277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:33.222 [2024-04-24 05:26:10.318550] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:33.222 [2024-04-24 05:26:10.318940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.222 [2024-04-24 05:26:10.318971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:33.222 [2024-04-24 05:26:10.330119] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:33.222 [2024-04-24 05:26:10.330481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.222 [2024-04-24 05:26:10.330510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:33.222 [2024-04-24 05:26:10.342031] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:33.222 [2024-04-24 05:26:10.342389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0
00:30:33.222 [2024-04-24 05:26:10.342430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:33.222 [2024-04-24 05:26:10.355338] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90
00:30:33.222 [2024-04-24 05:26:10.355705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:33.222 [2024-04-24 05:26:10.355735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line pattern (tcp.c:2047:data_crc32_calc_done data digest error, nvme_qpair.c WRITE command print, nvme_qpair.c COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for each subsequent WRITE on sqid:1 cid:15 nsid:1 len:32, with varying lba and cycling sqhd values, timestamps 05:26:10.367 through 05:26:11.268 ...]
00:30:34.262 [2024-04-24 05:26:11.280658] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90
00:30:34.262 [2024-04-24 05:26:11.280981] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.262 [2024-04-24 05:26:11.281010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:34.262 [2024-04-24 05:26:11.291701] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.262 [2024-04-24 05:26:11.292038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.262 [2024-04-24 05:26:11.292067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:34.262 [2024-04-24 05:26:11.303051] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.262 [2024-04-24 05:26:11.303388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.262 [2024-04-24 05:26:11.303417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:34.262 [2024-04-24 05:26:11.314973] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.262 [2024-04-24 05:26:11.315309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.262 [2024-04-24 05:26:11.315336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:34.262 [2024-04-24 05:26:11.326640] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 
00:30:34.262 [2024-04-24 05:26:11.326976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.262 [2024-04-24 05:26:11.327004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:34.262 [2024-04-24 05:26:11.338788] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.262 [2024-04-24 05:26:11.339127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.263 [2024-04-24 05:26:11.339155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:34.263 [2024-04-24 05:26:11.350353] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.263 [2024-04-24 05:26:11.350570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.263 [2024-04-24 05:26:11.350599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:34.263 [2024-04-24 05:26:11.362081] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.263 [2024-04-24 05:26:11.362313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.263 [2024-04-24 05:26:11.362349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:34.263 [2024-04-24 05:26:11.374242] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.263 [2024-04-24 05:26:11.374424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.263 [2024-04-24 05:26:11.374452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:34.263 [2024-04-24 05:26:11.386893] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.263 [2024-04-24 05:26:11.387258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.263 [2024-04-24 05:26:11.387301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:34.263 [2024-04-24 05:26:11.399050] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.263 [2024-04-24 05:26:11.399384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.263 [2024-04-24 05:26:11.399413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:34.263 [2024-04-24 05:26:11.410797] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.263 [2024-04-24 05:26:11.411027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.263 [2024-04-24 05:26:11.411055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:34.263 [2024-04-24 
05:26:11.423153] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.263 [2024-04-24 05:26:11.423486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.263 [2024-04-24 05:26:11.423514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:34.263 [2024-04-24 05:26:11.435265] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.263 [2024-04-24 05:26:11.435638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.263 [2024-04-24 05:26:11.435682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:34.263 [2024-04-24 05:26:11.447281] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.263 [2024-04-24 05:26:11.447653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.263 [2024-04-24 05:26:11.447681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:34.263 [2024-04-24 05:26:11.459308] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.263 [2024-04-24 05:26:11.459680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.263 [2024-04-24 05:26:11.459725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:34.263 [2024-04-24 05:26:11.471216] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.263 [2024-04-24 05:26:11.471435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.263 [2024-04-24 05:26:11.471463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:34.263 [2024-04-24 05:26:11.482650] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.263 [2024-04-24 05:26:11.482862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.263 [2024-04-24 05:26:11.482890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:34.263 [2024-04-24 05:26:11.494363] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.263 [2024-04-24 05:26:11.494704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.263 [2024-04-24 05:26:11.494733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:34.263 [2024-04-24 05:26:11.506010] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.263 [2024-04-24 05:26:11.506200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.263 [2024-04-24 05:26:11.506228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:34.263 [2024-04-24 05:26:11.517126] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.263 [2024-04-24 05:26:11.517613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.263 [2024-04-24 05:26:11.517651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:34.263 [2024-04-24 05:26:11.528464] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.263 [2024-04-24 05:26:11.528925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.263 [2024-04-24 05:26:11.528953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:34.524 [2024-04-24 05:26:11.539153] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.524 [2024-04-24 05:26:11.539579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.524 [2024-04-24 05:26:11.539608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:34.524 [2024-04-24 05:26:11.550520] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.524 [2024-04-24 05:26:11.550904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.524 [2024-04-24 05:26:11.550933] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:34.524 [2024-04-24 05:26:11.561439] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.524 [2024-04-24 05:26:11.561825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.524 [2024-04-24 05:26:11.561854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:34.524 [2024-04-24 05:26:11.571847] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.524 [2024-04-24 05:26:11.572318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.525 [2024-04-24 05:26:11.572346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:34.525 [2024-04-24 05:26:11.582707] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.525 [2024-04-24 05:26:11.583166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.525 [2024-04-24 05:26:11.583194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:34.525 [2024-04-24 05:26:11.593657] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.525 [2024-04-24 05:26:11.594080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:34.525 [2024-04-24 05:26:11.594109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:34.525 [2024-04-24 05:26:11.605016] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.525 [2024-04-24 05:26:11.605483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.525 [2024-04-24 05:26:11.605512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:34.525 [2024-04-24 05:26:11.616781] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.525 [2024-04-24 05:26:11.617203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.525 [2024-04-24 05:26:11.617231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:34.525 [2024-04-24 05:26:11.627303] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.525 [2024-04-24 05:26:11.627780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.525 [2024-04-24 05:26:11.627809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:34.525 [2024-04-24 05:26:11.638252] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.525 [2024-04-24 05:26:11.638676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.525 [2024-04-24 05:26:11.638705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:34.525 [2024-04-24 05:26:11.648979] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.525 [2024-04-24 05:26:11.649398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.525 [2024-04-24 05:26:11.649426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:34.525 [2024-04-24 05:26:11.659075] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.525 [2024-04-24 05:26:11.659452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.525 [2024-04-24 05:26:11.659502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:34.525 [2024-04-24 05:26:11.669639] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.525 [2024-04-24 05:26:11.670002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.525 [2024-04-24 05:26:11.670030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:34.525 [2024-04-24 05:26:11.680728] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.525 [2024-04-24 05:26:11.681104] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.525 [2024-04-24 05:26:11.681148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:34.525 [2024-04-24 05:26:11.691504] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.525 [2024-04-24 05:26:11.691892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.525 [2024-04-24 05:26:11.691921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:34.525 [2024-04-24 05:26:11.702676] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.525 [2024-04-24 05:26:11.703044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.525 [2024-04-24 05:26:11.703072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:34.525 [2024-04-24 05:26:11.713823] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.525 [2024-04-24 05:26:11.714256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.525 [2024-04-24 05:26:11.714298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:34.525 [2024-04-24 05:26:11.725546] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 
00:30:34.525 [2024-04-24 05:26:11.725999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.525 [2024-04-24 05:26:11.726028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:34.525 [2024-04-24 05:26:11.735929] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.525 [2024-04-24 05:26:11.736395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.525 [2024-04-24 05:26:11.736439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:34.525 [2024-04-24 05:26:11.747473] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.525 [2024-04-24 05:26:11.747870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.525 [2024-04-24 05:26:11.747915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:34.525 [2024-04-24 05:26:11.757825] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.525 [2024-04-24 05:26:11.758273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.525 [2024-04-24 05:26:11.758301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:34.525 [2024-04-24 05:26:11.768733] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.525 [2024-04-24 05:26:11.769143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.525 [2024-04-24 05:26:11.769171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:34.525 [2024-04-24 05:26:11.779940] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.525 [2024-04-24 05:26:11.780367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.525 [2024-04-24 05:26:11.780395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:34.525 [2024-04-24 05:26:11.790107] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.525 [2024-04-24 05:26:11.790590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.525 [2024-04-24 05:26:11.790641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:34.787 [2024-04-24 05:26:11.801316] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.787 [2024-04-24 05:26:11.801780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.787 [2024-04-24 05:26:11.801811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:34.787 [2024-04-24 
05:26:11.812983] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.787 [2024-04-24 05:26:11.813417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.787 [2024-04-24 05:26:11.813446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:34.787 [2024-04-24 05:26:11.824471] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.787 [2024-04-24 05:26:11.824900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.787 [2024-04-24 05:26:11.824929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:34.787 [2024-04-24 05:26:11.835093] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.787 [2024-04-24 05:26:11.835557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.787 [2024-04-24 05:26:11.835585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:34.787 [2024-04-24 05:26:11.846147] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.787 [2024-04-24 05:26:11.846606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.787 [2024-04-24 05:26:11.846641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:30:34.787 [2024-04-24 05:26:11.857172] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.787 [2024-04-24 05:26:11.857551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.787 [2024-04-24 05:26:11.857579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:34.787 [2024-04-24 05:26:11.867986] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.787 [2024-04-24 05:26:11.868325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.787 [2024-04-24 05:26:11.868354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:34.787 [2024-04-24 05:26:11.878533] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.787 [2024-04-24 05:26:11.878953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.787 [2024-04-24 05:26:11.878998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:34.787 [2024-04-24 05:26:11.889377] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.787 [2024-04-24 05:26:11.889727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.787 [2024-04-24 05:26:11.889756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:34.787 [2024-04-24 05:26:11.900453] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.787 [2024-04-24 05:26:11.900840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.787 [2024-04-24 05:26:11.900869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:34.787 [2024-04-24 05:26:11.912034] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.787 [2024-04-24 05:26:11.912453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.787 [2024-04-24 05:26:11.912496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:34.787 [2024-04-24 05:26:11.923688] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.787 [2024-04-24 05:26:11.924142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.787 [2024-04-24 05:26:11.924170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:34.787 [2024-04-24 05:26:11.934749] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.787 [2024-04-24 05:26:11.935235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.787 [2024-04-24 05:26:11.935278] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:34.787 [2024-04-24 05:26:11.945171] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.787 [2024-04-24 05:26:11.945725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.787 [2024-04-24 05:26:11.945760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:34.787 [2024-04-24 05:26:11.956377] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.787 [2024-04-24 05:26:11.956802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.787 [2024-04-24 05:26:11.956831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:34.787 [2024-04-24 05:26:11.967847] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.787 [2024-04-24 05:26:11.968271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.787 [2024-04-24 05:26:11.968300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:34.787 [2024-04-24 05:26:11.978894] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.787 [2024-04-24 05:26:11.979359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:34.787 [2024-04-24 05:26:11.979386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:34.787 [2024-04-24 05:26:11.990124] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.787 [2024-04-24 05:26:11.990517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.787 [2024-04-24 05:26:11.990545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:34.787 [2024-04-24 05:26:12.001215] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.787 [2024-04-24 05:26:12.001650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.787 [2024-04-24 05:26:12.001679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:34.787 [2024-04-24 05:26:12.011892] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.787 [2024-04-24 05:26:12.012319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.787 [2024-04-24 05:26:12.012347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:34.788 [2024-04-24 05:26:12.022496] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.788 [2024-04-24 05:26:12.023048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.788 [2024-04-24 05:26:12.023076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:34.788 [2024-04-24 05:26:12.033243] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.788 [2024-04-24 05:26:12.033677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.788 [2024-04-24 05:26:12.033706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:34.788 [2024-04-24 05:26:12.044191] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.788 [2024-04-24 05:26:12.044613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.788 [2024-04-24 05:26:12.044647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:34.788 [2024-04-24 05:26:12.054044] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:34.788 [2024-04-24 05:26:12.054474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.788 [2024-04-24 05:26:12.054502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.049 [2024-04-24 05:26:12.064737] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:35.049 [2024-04-24 05:26:12.065188] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.049 [2024-04-24 05:26:12.065231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.049 [2024-04-24 05:26:12.076013] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:35.049 [2024-04-24 05:26:12.076361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.050 [2024-04-24 05:26:12.076389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.050 [2024-04-24 05:26:12.087006] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:35.050 [2024-04-24 05:26:12.087409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.050 [2024-04-24 05:26:12.087438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.050 [2024-04-24 05:26:12.097985] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:35.050 [2024-04-24 05:26:12.098363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.050 [2024-04-24 05:26:12.098392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.050 [2024-04-24 05:26:12.109249] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 
00:30:35.050 [2024-04-24 05:26:12.109714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.050 [2024-04-24 05:26:12.109743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.050 [2024-04-24 05:26:12.120941] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:35.050 [2024-04-24 05:26:12.121371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.050 [2024-04-24 05:26:12.121399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.050 [2024-04-24 05:26:12.131986] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:35.050 [2024-04-24 05:26:12.132441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.050 [2024-04-24 05:26:12.132469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.050 [2024-04-24 05:26:12.142759] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:35.050 [2024-04-24 05:26:12.143137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.050 [2024-04-24 05:26:12.143166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.050 [2024-04-24 05:26:12.153532] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:35.050 [2024-04-24 05:26:12.154027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.050 [2024-04-24 05:26:12.154055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.050 [2024-04-24 05:26:12.164240] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:35.050 [2024-04-24 05:26:12.164594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.050 [2024-04-24 05:26:12.164623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.050 [2024-04-24 05:26:12.174976] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:35.050 [2024-04-24 05:26:12.175322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.050 [2024-04-24 05:26:12.175351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.050 [2024-04-24 05:26:12.186016] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:35.050 [2024-04-24 05:26:12.186435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.050 [2024-04-24 05:26:12.186464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.050 [2024-04-24 05:26:12.196476] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:35.050 [2024-04-24 05:26:12.196862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.050 [2024-04-24 05:26:12.196891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.050 [2024-04-24 05:26:12.206993] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:35.050 [2024-04-24 05:26:12.207423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.050 [2024-04-24 05:26:12.207465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.050 [2024-04-24 05:26:12.217965] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:35.050 [2024-04-24 05:26:12.218273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.050 [2024-04-24 05:26:12.218301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.050 [2024-04-24 05:26:12.228192] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:35.050 [2024-04-24 05:26:12.228586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.050 [2024-04-24 05:26:12.228621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:30:35.051 [2024-04-24 05:26:12.238375] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:35.051 [2024-04-24 05:26:12.238765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.051 [2024-04-24 05:26:12.238793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.051 [2024-04-24 05:26:12.249486] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:35.051 [2024-04-24 05:26:12.249870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.051 [2024-04-24 05:26:12.249899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.051 [2024-04-24 05:26:12.260907] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:35.051 [2024-04-24 05:26:12.261249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.051 [2024-04-24 05:26:12.261277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.051 [2024-04-24 05:26:12.271247] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:35.051 [2024-04-24 05:26:12.271648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.051 [2024-04-24 05:26:12.271676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.051 [2024-04-24 05:26:12.281875] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:35.051 [2024-04-24 05:26:12.282208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.051 [2024-04-24 05:26:12.282236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.051 [2024-04-24 05:26:12.292779] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb10dd0) with pdu=0x2000190fef90 00:30:35.051 [2024-04-24 05:26:12.293276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.051 [2024-04-24 05:26:12.293305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.051 00:30:35.051 Latency(us) 00:30:35.051 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:35.051 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:30:35.051 nvme0n1 : 2.01 2683.59 335.45 0.00 0.00 5949.39 4344.79 14078.10 00:30:35.051 =================================================================================================================== 00:30:35.051 Total : 2683.59 335.45 0.00 0.00 5949.39 4344.79 14078.10 00:30:35.051 0 00:30:35.051 05:26:12 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:35.051 05:26:12 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:35.051 05:26:12 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:35.051 | .driver_specific 00:30:35.051 | .nvme_error 00:30:35.051 | .status_code 00:30:35.051 | .command_transient_transport_error' 00:30:35.051 05:26:12 -- host/digest.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:35.310 05:26:12 -- host/digest.sh@71 -- # (( 173 > 0 )) 00:30:35.310 05:26:12 -- host/digest.sh@73 -- # killprocess 2014710 00:30:35.310 05:26:12 -- common/autotest_common.sh@936 -- # '[' -z 2014710 ']' 00:30:35.310 05:26:12 -- common/autotest_common.sh@940 -- # kill -0 2014710 00:30:35.310 05:26:12 -- common/autotest_common.sh@941 -- # uname 00:30:35.310 05:26:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:35.310 05:26:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2014710 00:30:35.568 05:26:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:30:35.568 05:26:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:30:35.568 05:26:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2014710' 00:30:35.568 killing process with pid 2014710 00:30:35.568 05:26:12 -- common/autotest_common.sh@955 -- # kill 2014710 00:30:35.568 Received shutdown signal, test time was about 2.000000 seconds 00:30:35.568 00:30:35.568 Latency(us) 00:30:35.568 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:35.568 =================================================================================================================== 00:30:35.568 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:35.568 05:26:12 -- common/autotest_common.sh@960 -- # wait 2014710 00:30:35.568 05:26:12 -- host/digest.sh@116 -- # killprocess 2013346 00:30:35.568 05:26:12 -- common/autotest_common.sh@936 -- # '[' -z 2013346 ']' 00:30:35.568 05:26:12 -- common/autotest_common.sh@940 -- # kill -0 2013346 00:30:35.568 05:26:12 -- common/autotest_common.sh@941 -- # uname 00:30:35.568 05:26:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:35.568 05:26:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2013346 00:30:35.827 05:26:12 -- 
common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:35.827 05:26:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:35.827 05:26:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2013346' 00:30:35.827 killing process with pid 2013346 00:30:35.827 05:26:12 -- common/autotest_common.sh@955 -- # kill 2013346 00:30:35.827 05:26:12 -- common/autotest_common.sh@960 -- # wait 2013346 00:30:35.827 00:30:35.827 real 0m15.381s 00:30:35.827 user 0m30.789s 00:30:35.827 sys 0m3.944s 00:30:35.827 05:26:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:35.827 05:26:13 -- common/autotest_common.sh@10 -- # set +x 00:30:35.827 ************************************ 00:30:35.827 END TEST nvmf_digest_error 00:30:35.827 ************************************ 00:30:36.087 05:26:13 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:30:36.087 05:26:13 -- host/digest.sh@150 -- # nvmftestfini 00:30:36.087 05:26:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:30:36.087 05:26:13 -- nvmf/common.sh@117 -- # sync 00:30:36.087 05:26:13 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:36.087 05:26:13 -- nvmf/common.sh@120 -- # set +e 00:30:36.087 05:26:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:36.087 05:26:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:36.087 rmmod nvme_tcp 00:30:36.087 rmmod nvme_fabrics 00:30:36.087 rmmod nvme_keyring 00:30:36.087 05:26:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:36.087 05:26:13 -- nvmf/common.sh@124 -- # set -e 00:30:36.087 05:26:13 -- nvmf/common.sh@125 -- # return 0 00:30:36.087 05:26:13 -- nvmf/common.sh@478 -- # '[' -n 2013346 ']' 00:30:36.087 05:26:13 -- nvmf/common.sh@479 -- # killprocess 2013346 00:30:36.087 05:26:13 -- common/autotest_common.sh@936 -- # '[' -z 2013346 ']' 00:30:36.087 05:26:13 -- common/autotest_common.sh@940 -- # kill -0 2013346 00:30:36.087 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: 
line 940: kill: (2013346) - No such process 00:30:36.087 05:26:13 -- common/autotest_common.sh@963 -- # echo 'Process with pid 2013346 is not found' 00:30:36.087 Process with pid 2013346 is not found 00:30:36.087 05:26:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:30:36.087 05:26:13 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:30:36.087 05:26:13 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:30:36.087 05:26:13 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:36.087 05:26:13 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:36.087 05:26:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:36.087 05:26:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:36.087 05:26:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:37.990 05:26:15 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:37.990 00:30:37.990 real 0m34.892s 00:30:37.990 user 1m1.769s 00:30:37.990 sys 0m9.389s 00:30:37.990 05:26:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:37.990 05:26:15 -- common/autotest_common.sh@10 -- # set +x 00:30:37.990 ************************************ 00:30:37.990 END TEST nvmf_digest 00:30:37.990 ************************************ 00:30:37.990 05:26:15 -- nvmf/nvmf.sh@108 -- # [[ 0 -eq 1 ]] 00:30:37.990 05:26:15 -- nvmf/nvmf.sh@113 -- # [[ 0 -eq 1 ]] 00:30:37.990 05:26:15 -- nvmf/nvmf.sh@118 -- # [[ phy == phy ]] 00:30:37.990 05:26:15 -- nvmf/nvmf.sh@119 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:37.990 05:26:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:30:37.990 05:26:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:37.990 05:26:15 -- common/autotest_common.sh@10 -- # set +x 00:30:38.250 ************************************ 00:30:38.250 START TEST nvmf_bdevperf 00:30:38.250 ************************************ 00:30:38.250 05:26:15 -- 
common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:38.250 * Looking for test storage... 00:30:38.250 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:38.250 05:26:15 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:38.250 05:26:15 -- nvmf/common.sh@7 -- # uname -s 00:30:38.250 05:26:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:38.250 05:26:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:38.250 05:26:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:38.250 05:26:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:38.250 05:26:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:38.250 05:26:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:38.250 05:26:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:38.250 05:26:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:38.250 05:26:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:38.250 05:26:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:38.250 05:26:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:38.250 05:26:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:38.250 05:26:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:38.250 05:26:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:38.250 05:26:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:38.250 05:26:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:38.250 05:26:15 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:38.250 05:26:15 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:38.250 05:26:15 -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:38.250 05:26:15 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:38.250 05:26:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.250 05:26:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.250 05:26:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.250 05:26:15 -- paths/export.sh@5 -- # export PATH 00:30:38.250 05:26:15 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.250 05:26:15 -- nvmf/common.sh@47 -- # : 0 00:30:38.250 05:26:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:38.250 05:26:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:38.250 05:26:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:38.250 05:26:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:38.250 05:26:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:38.250 05:26:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:38.250 05:26:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:38.250 05:26:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:38.250 05:26:15 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:38.250 05:26:15 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:38.250 05:26:15 -- host/bdevperf.sh@24 -- # nvmftestinit 00:30:38.250 05:26:15 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:30:38.250 05:26:15 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:38.250 05:26:15 -- nvmf/common.sh@437 -- # prepare_net_devs 00:30:38.250 05:26:15 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:30:38.250 05:26:15 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:30:38.250 05:26:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.250 05:26:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:38.250 05:26:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.250 05:26:15 -- 
nvmf/common.sh@403 -- # [[ phy != virt ]] 00:30:38.250 05:26:15 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:30:38.250 05:26:15 -- nvmf/common.sh@285 -- # xtrace_disable 00:30:38.250 05:26:15 -- common/autotest_common.sh@10 -- # set +x 00:30:40.151 05:26:17 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:40.151 05:26:17 -- nvmf/common.sh@291 -- # pci_devs=() 00:30:40.151 05:26:17 -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:40.151 05:26:17 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:40.151 05:26:17 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:40.151 05:26:17 -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:40.151 05:26:17 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:40.151 05:26:17 -- nvmf/common.sh@295 -- # net_devs=() 00:30:40.151 05:26:17 -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:40.151 05:26:17 -- nvmf/common.sh@296 -- # e810=() 00:30:40.151 05:26:17 -- nvmf/common.sh@296 -- # local -ga e810 00:30:40.151 05:26:17 -- nvmf/common.sh@297 -- # x722=() 00:30:40.151 05:26:17 -- nvmf/common.sh@297 -- # local -ga x722 00:30:40.151 05:26:17 -- nvmf/common.sh@298 -- # mlx=() 00:30:40.151 05:26:17 -- nvmf/common.sh@298 -- # local -ga mlx 00:30:40.151 05:26:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:40.151 05:26:17 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:40.151 05:26:17 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:40.151 05:26:17 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:40.151 05:26:17 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:40.151 05:26:17 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:40.151 05:26:17 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:40.151 05:26:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:40.151 05:26:17 -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:40.151 05:26:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:40.151 05:26:17 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:40.151 05:26:17 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:40.151 05:26:17 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:40.151 05:26:17 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:40.151 05:26:17 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:40.151 05:26:17 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:40.151 05:26:17 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:40.151 05:26:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:40.151 05:26:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:40.151 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:40.151 05:26:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:40.151 05:26:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:40.151 05:26:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:40.151 05:26:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:40.151 05:26:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:40.151 05:26:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:40.151 05:26:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:40.151 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:40.151 05:26:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:40.151 05:26:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:40.151 05:26:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:40.151 05:26:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:40.151 05:26:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:40.151 05:26:17 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:40.151 05:26:17 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:40.151 05:26:17 -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:40.151 05:26:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:40.151 05:26:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:40.151 05:26:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:30:40.151 05:26:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:40.151 05:26:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:40.151 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:40.151 05:26:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:30:40.151 05:26:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:40.151 05:26:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:40.151 05:26:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:30:40.151 05:26:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:40.151 05:26:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:40.151 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:40.151 05:26:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:30:40.151 05:26:17 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:30:40.151 05:26:17 -- nvmf/common.sh@403 -- # is_hw=yes 00:30:40.151 05:26:17 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:30:40.151 05:26:17 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:30:40.151 05:26:17 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:30:40.151 05:26:17 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:40.151 05:26:17 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:40.151 05:26:17 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:40.151 05:26:17 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:40.151 05:26:17 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:40.152 05:26:17 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:40.152 05:26:17 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:40.152 05:26:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:40.152 05:26:17 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:40.152 05:26:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:40.152 05:26:17 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:40.152 05:26:17 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:40.152 05:26:17 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:40.152 05:26:17 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:40.152 05:26:17 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:40.152 05:26:17 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:40.152 05:26:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:40.411 05:26:17 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:40.411 05:26:17 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:40.411 05:26:17 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:40.411 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:40.411 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:30:40.411 00:30:40.411 --- 10.0.0.2 ping statistics --- 00:30:40.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:40.411 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:30:40.411 05:26:17 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:40.411 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:40.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:30:40.411 00:30:40.411 --- 10.0.0.1 ping statistics --- 00:30:40.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:40.411 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:30:40.411 05:26:17 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:40.411 05:26:17 -- nvmf/common.sh@411 -- # return 0 00:30:40.411 05:26:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:30:40.411 05:26:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:40.411 05:26:17 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:30:40.411 05:26:17 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:30:40.411 05:26:17 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:40.411 05:26:17 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:30:40.411 05:26:17 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:30:40.411 05:26:17 -- host/bdevperf.sh@25 -- # tgt_init 00:30:40.411 05:26:17 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:40.411 05:26:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:30:40.411 05:26:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:40.411 05:26:17 -- common/autotest_common.sh@10 -- # set +x 00:30:40.411 05:26:17 -- nvmf/common.sh@470 -- # nvmfpid=2017069 00:30:40.411 05:26:17 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:40.411 05:26:17 -- nvmf/common.sh@471 -- # waitforlisten 2017069 00:30:40.411 05:26:17 -- common/autotest_common.sh@817 -- # '[' -z 2017069 ']' 00:30:40.411 05:26:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:40.411 05:26:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:40.411 05:26:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:40.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:40.411 05:26:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:40.411 05:26:17 -- common/autotest_common.sh@10 -- # set +x 00:30:40.411 [2024-04-24 05:26:17.531579] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:30:40.411 [2024-04-24 05:26:17.531693] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:40.411 EAL: No free 2048 kB hugepages reported on node 1 00:30:40.411 [2024-04-24 05:26:17.570066] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:40.411 [2024-04-24 05:26:17.596126] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:40.669 [2024-04-24 05:26:17.682051] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:40.669 [2024-04-24 05:26:17.682131] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:40.669 [2024-04-24 05:26:17.682151] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:40.669 [2024-04-24 05:26:17.682163] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:40.669 [2024-04-24 05:26:17.682190] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:40.669 [2024-04-24 05:26:17.682350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:40.669 [2024-04-24 05:26:17.682414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:40.669 [2024-04-24 05:26:17.682416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:40.669 05:26:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:40.669 05:26:17 -- common/autotest_common.sh@850 -- # return 0 00:30:40.669 05:26:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:30:40.669 05:26:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:40.669 05:26:17 -- common/autotest_common.sh@10 -- # set +x 00:30:40.669 05:26:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:40.669 05:26:17 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:40.669 05:26:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:40.669 05:26:17 -- common/autotest_common.sh@10 -- # set +x 00:30:40.669 [2024-04-24 05:26:17.816355] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:40.669 05:26:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:40.669 05:26:17 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:40.669 05:26:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:40.669 05:26:17 -- common/autotest_common.sh@10 -- # set +x 00:30:40.669 Malloc0 00:30:40.669 05:26:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:40.669 05:26:17 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:40.669 05:26:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:40.669 05:26:17 -- common/autotest_common.sh@10 -- # set +x 00:30:40.669 05:26:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:40.669 05:26:17 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:40.669 05:26:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:40.669 05:26:17 -- common/autotest_common.sh@10 -- # set +x 00:30:40.669 05:26:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:40.669 05:26:17 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:40.669 05:26:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:40.669 05:26:17 -- common/autotest_common.sh@10 -- # set +x 00:30:40.669 [2024-04-24 05:26:17.877225] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:40.669 05:26:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:40.669 05:26:17 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:30:40.669 05:26:17 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:30:40.669 05:26:17 -- nvmf/common.sh@521 -- # config=() 00:30:40.669 05:26:17 -- nvmf/common.sh@521 -- # local subsystem config 00:30:40.669 05:26:17 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:30:40.669 05:26:17 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:30:40.669 { 00:30:40.669 "params": { 00:30:40.669 "name": "Nvme$subsystem", 00:30:40.669 "trtype": "$TEST_TRANSPORT", 00:30:40.669 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:40.669 "adrfam": "ipv4", 00:30:40.669 "trsvcid": "$NVMF_PORT", 00:30:40.669 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:40.669 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:40.669 "hdgst": ${hdgst:-false}, 00:30:40.669 "ddgst": ${ddgst:-false} 00:30:40.669 }, 00:30:40.669 "method": "bdev_nvme_attach_controller" 00:30:40.669 } 00:30:40.669 EOF 00:30:40.669 )") 00:30:40.669 05:26:17 -- nvmf/common.sh@543 -- # cat 00:30:40.669 05:26:17 -- nvmf/common.sh@545 -- # jq . 
00:30:40.669 05:26:17 -- nvmf/common.sh@546 -- # IFS=, 00:30:40.669 05:26:17 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:30:40.669 "params": { 00:30:40.669 "name": "Nvme1", 00:30:40.669 "trtype": "tcp", 00:30:40.669 "traddr": "10.0.0.2", 00:30:40.669 "adrfam": "ipv4", 00:30:40.669 "trsvcid": "4420", 00:30:40.669 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:40.669 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:40.669 "hdgst": false, 00:30:40.669 "ddgst": false 00:30:40.669 }, 00:30:40.669 "method": "bdev_nvme_attach_controller" 00:30:40.669 }' 00:30:40.669 [2024-04-24 05:26:17.920156] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:30:40.669 [2024-04-24 05:26:17.920244] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2017216 ] 00:30:40.926 EAL: No free 2048 kB hugepages reported on node 1 00:30:40.926 [2024-04-24 05:26:17.953472] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:40.926 [2024-04-24 05:26:17.981869] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:40.926 [2024-04-24 05:26:18.065870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:41.183 Running I/O for 1 seconds... 
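The printf output above shows the fully expanded controller config that gen_nvmf_target_json pipes to bdevperf over /dev/fd/62. A minimal stand-alone sketch of that expansion (variable values taken from the JSON printed in the log; the real helper lives in nvmf/common.sh and loops over multiple subsystems):

```shell
#!/bin/sh
# Sketch: reproduce the single-subsystem JSON that gen_nvmf_target_json
# expands in the log above. Values are the ones the log actually printed.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
subsystem=1
hdgst=false
ddgst=false

# Capture the heredoc expansion, as nvmf/common.sh does with config+=("$(cat ...)").
json=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": $hdgst,
    "ddgst": $ddgst
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
printf '%s\n' "$json"
```

The harness pretty-prints this through `jq .` and hands it to bdevperf as a process-substitution file descriptor, which is why the trace shows `--json /dev/fd/62` rather than a file on disk.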
00:30:42.118
00:30:42.118 Latency(us)
00:30:42.118 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:42.118 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:42.118 Verification LBA range: start 0x0 length 0x4000
00:30:42.118 Nvme1n1 : 1.01 8358.21 32.65 0.00 0.00 15257.61 3301.07 14660.65
00:30:42.118 ===================================================================================================================
00:30:42.118 Total : 8358.21 32.65 0.00 0.00 15257.61 3301.07 14660.65
00:30:42.392 05:26:19 -- host/bdevperf.sh@30 -- # bdevperfpid=2017353 00:30:42.392 05:26:19 -- host/bdevperf.sh@32 -- # sleep 3 00:30:42.392 05:26:19 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:30:42.392 05:26:19 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:30:42.392 05:26:19 -- nvmf/common.sh@521 -- # config=() 00:30:42.392 05:26:19 -- nvmf/common.sh@521 -- # local subsystem config 00:30:42.392 05:26:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:30:42.392 05:26:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:30:42.392 { 00:30:42.392 "params": { 00:30:42.392 "name": "Nvme$subsystem", 00:30:42.392 "trtype": "$TEST_TRANSPORT", 00:30:42.392 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:42.392 "adrfam": "ipv4", 00:30:42.392 "trsvcid": "$NVMF_PORT", 00:30:42.392 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:42.392 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:42.392 "hdgst": ${hdgst:-false}, 00:30:42.392 "ddgst": ${ddgst:-false} 00:30:42.392 }, 00:30:42.392 "method": "bdev_nvme_attach_controller" 00:30:42.392 } 00:30:42.392 EOF 00:30:42.392 )") 00:30:42.392 05:26:19 -- nvmf/common.sh@543 -- # cat 00:30:42.392 05:26:19 -- nvmf/common.sh@545 -- # jq .
00:30:42.392 05:26:19 -- nvmf/common.sh@546 -- # IFS=, 00:30:42.392 05:26:19 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:30:42.392 "params": { 00:30:42.392 "name": "Nvme1", 00:30:42.392 "trtype": "tcp", 00:30:42.392 "traddr": "10.0.0.2", 00:30:42.392 "adrfam": "ipv4", 00:30:42.392 "trsvcid": "4420", 00:30:42.392 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:42.392 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:42.392 "hdgst": false, 00:30:42.392 "ddgst": false 00:30:42.392 }, 00:30:42.392 "method": "bdev_nvme_attach_controller" 00:30:42.392 }' 00:30:42.392 [2024-04-24 05:26:19.640527] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:30:42.392 [2024-04-24 05:26:19.640619] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2017353 ] 00:30:42.665 EAL: No free 2048 kB hugepages reported on node 1 00:30:42.665 [2024-04-24 05:26:19.682508] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:42.665 [2024-04-24 05:26:19.711036] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:42.665 [2024-04-24 05:26:19.794782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:42.922 Running I/O for 15 seconds... 
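At this point host/bdevperf.sh@33 hard-kills the nvmf target (PID 2017069) with SIGKILL while bdevperf still has I/O in flight, which is what produces the stream of ABORTED - SQ DELETION completions below. A self-contained sketch of that hard-kill step (a placeholder `sleep` process stands in for the real nvmf_tgt) shows the 128+signal exit status a harness sees after `kill -9`:

```shell
#!/bin/sh
# Sketch only: 'sleep 30' stands in for nvmf_tgt; the real test kills
# the target mid-run and expects bdevperf to handle the aborted I/O.
sleep 30 &
tgt_pid=$!

# Hard-kill, as host/bdevperf.sh does with `kill -9 $nvmfpid`.
kill -9 "$tgt_pid"

# Reap the child; a process killed by signal N exits with status 128+N,
# so SIGKILL (9) yields 137.
wait "$tgt_pid"
status=$?
echo "target exited with status $status"
```

Because the kill is uncoordinated, every command the initiator had queued against the target's submission queues completes with an abort status, which is exactly the pattern repeated in the qpair log entries that follow.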
00:30:45.459 05:26:22 -- host/bdevperf.sh@33 -- # kill -9 2017069 00:30:45.459 05:26:22 -- host/bdevperf.sh@35 -- # sleep 3 00:30:45.459 [2024-04-24 05:26:22.611837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:42368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.459 [2024-04-24 05:26:22.611889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.459 [2024-04-24 05:26:22.611933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:42376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.459 [2024-04-24 05:26:22.611963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.459 [2024-04-24 05:26:22.611984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:42384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.459 [2024-04-24 05:26:22.612001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.459 [2024-04-24 05:26:22.612019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:42392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.459 [2024-04-24 05:26:22.612043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.459 [2024-04-24 05:26:22.612061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:42400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.459 [2024-04-24 05:26:22.612077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.459 [2024-04-24 05:26:22.612094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 
nsid:1 lba:42408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.459 [2024-04-24 05:26:22.612112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.459 [2024-04-24 05:26:22.612129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:42416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.459 [2024-04-24 05:26:22.612145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.459 [2024-04-24 05:26:22.612162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:42424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.459 [2024-04-24 05:26:22.612181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.459 [2024-04-24 05:26:22.612199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:42432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.459 [2024-04-24 05:26:22.612216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.459 [2024-04-24 05:26:22.612234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:42440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.459 [2024-04-24 05:26:22.612249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.459 [2024-04-24 05:26:22.612267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:42448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.460 [2024-04-24 05:26:22.612283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:45.460 [2024-04-24 05:26:22.612301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:42456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.460 [2024-04-24 05:26:22.612317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.612334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:42464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.460 [2024-04-24 05:26:22.612349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.612368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:42472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.460 [2024-04-24 05:26:22.612384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.612402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:42480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.460 [2024-04-24 05:26:22.612418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.612436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:42488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.460 [2024-04-24 05:26:22.612453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.612474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:42936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.460 [2024-04-24 05:26:22.612490] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.612508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:42944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.460 [2024-04-24 05:26:22.612523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.612540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:42952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.460 [2024-04-24 05:26:22.612556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.612573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:42960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.460 [2024-04-24 05:26:22.612588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.612605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:42968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.460 [2024-04-24 05:26:22.612620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.612650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:42976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.460 [2024-04-24 05:26:22.612687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.612702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 
lba:42984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.460 [2024-04-24 05:26:22.612716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.612731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:42992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.460 [2024-04-24 05:26:22.612745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.612760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:43000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.460 [2024-04-24 05:26:22.612773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.612789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:43008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.460 [2024-04-24 05:26:22.612803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.612818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:43016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.460 [2024-04-24 05:26:22.612832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.612847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:43024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.460 [2024-04-24 05:26:22.612861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 
[2024-04-24 05:26:22.612876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:43032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.460 [2024-04-24 05:26:22.612893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.612909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:43040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.460 [2024-04-24 05:26:22.612947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.612965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:43048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.460 [2024-04-24 05:26:22.612981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.612998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:43056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.460 [2024-04-24 05:26:22.613013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.613030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:43064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.460 [2024-04-24 05:26:22.613045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.613063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:43072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.460 [2024-04-24 05:26:22.613078] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.613095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:43080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.460 [2024-04-24 05:26:22.613111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.613128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:43088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.460 [2024-04-24 05:26:22.613143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.613160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:43096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.460 [2024-04-24 05:26:22.613175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.613192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.460 [2024-04-24 05:26:22.613207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.613224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:43112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.460 [2024-04-24 05:26:22.613239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.613256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 
lba:43120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.460 [2024-04-24 05:26:22.613272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.613288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:43128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.460 [2024-04-24 05:26:22.613303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.613320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:42496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.460 [2024-04-24 05:26:22.613340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.613358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:42504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.460 [2024-04-24 05:26:22.613373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.613390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:42512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.460 [2024-04-24 05:26:22.613405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.613422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:42520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.460 [2024-04-24 05:26:22.613437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 
[2024-04-24 05:26:22.613454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:42528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.460 [2024-04-24 05:26:22.613469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.613485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:42536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.460 [2024-04-24 05:26:22.613501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.613517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:42544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.460 [2024-04-24 05:26:22.613533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.613549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:42552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.460 [2024-04-24 05:26:22.613564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.613582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:43136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.460 [2024-04-24 05:26:22.613598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.613615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:43144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.460 [2024-04-24 05:26:22.613636] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.613655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:43152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.460 [2024-04-24 05:26:22.613686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.613703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:43160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.460 [2024-04-24 05:26:22.613716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.613731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.460 [2024-04-24 05:26:22.613745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.613764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:43176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.460 [2024-04-24 05:26:22.613778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.613794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:43184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.460 [2024-04-24 05:26:22.613807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.613822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 
lba:43192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.460 [2024-04-24 05:26:22.613836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.613851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:43200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.460 [2024-04-24 05:26:22.613865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.613880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:43208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.460 [2024-04-24 05:26:22.613894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.613909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:43216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.460 [2024-04-24 05:26:22.613937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.613951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:43224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.460 [2024-04-24 05:26:22.613963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.613976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:43232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.460 [2024-04-24 05:26:22.614005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 
05:26:22.614023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:43240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.460 [2024-04-24 05:26:22.614038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.614055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:43248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.460 [2024-04-24 05:26:22.614070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.614087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:42560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.460 [2024-04-24 05:26:22.614102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.614120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:42568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.460 [2024-04-24 05:26:22.614136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.614152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:42576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.460 [2024-04-24 05:26:22.614171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.614189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:42584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.460 [2024-04-24 05:26:22.614204] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.614222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:42592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.460 [2024-04-24 05:26:22.614236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.614254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:42600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.460 [2024-04-24 05:26:22.614270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.614287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:42608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.460 [2024-04-24 05:26:22.614302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.614318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:43256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.460 [2024-04-24 05:26:22.614333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.614349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:43264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.460 [2024-04-24 05:26:22.614365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.614381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:43272 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:30:45.460 [2024-04-24 05:26:22.614396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.614413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:43280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.460 [2024-04-24 05:26:22.614428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.460 [2024-04-24 05:26:22.614444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:43288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.461 [2024-04-24 05:26:22.614459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.614476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:43296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.461 [2024-04-24 05:26:22.614491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.614508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:43304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.461 [2024-04-24 05:26:22.614522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.614539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:43312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.461 [2024-04-24 05:26:22.614554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.614574] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:43320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.461 [2024-04-24 05:26:22.614590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.614607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:43328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.461 [2024-04-24 05:26:22.614622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.614647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:43336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.461 [2024-04-24 05:26:22.614678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.614695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:43344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.461 [2024-04-24 05:26:22.614709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.614724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:43352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.461 [2024-04-24 05:26:22.614738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.614753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:43360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.461 [2024-04-24 05:26:22.614767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.614782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:43368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.461 [2024-04-24 05:26:22.614796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.614811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:43376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.461 [2024-04-24 05:26:22.614824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.614840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:42616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.461 [2024-04-24 05:26:22.614853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.614868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:42624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.461 [2024-04-24 05:26:22.614882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.614898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:42632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.461 [2024-04-24 05:26:22.614928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.614946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:42640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.461 
[2024-04-24 05:26:22.614960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.614977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:42648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.461 [2024-04-24 05:26:22.614992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.615012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:42656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.461 [2024-04-24 05:26:22.615028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.615045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:42664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.461 [2024-04-24 05:26:22.615060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.615077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:42672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.461 [2024-04-24 05:26:22.615093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.615109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:43384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.461 [2024-04-24 05:26:22.615124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.615141] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:42680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.461 [2024-04-24 05:26:22.615156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.615173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:42688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.461 [2024-04-24 05:26:22.615189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.615205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:42696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.461 [2024-04-24 05:26:22.615220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.615237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:42704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.461 [2024-04-24 05:26:22.615252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.615269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:42712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.461 [2024-04-24 05:26:22.615284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.615301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:42720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.461 [2024-04-24 05:26:22.615316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.615332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:42728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.461 [2024-04-24 05:26:22.615347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.615364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:42736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.461 [2024-04-24 05:26:22.615378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.615395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:42744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.461 [2024-04-24 05:26:22.615414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.615431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:42752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.461 [2024-04-24 05:26:22.615447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.615463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:42760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.461 [2024-04-24 05:26:22.615479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.615496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:42768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.461 
[2024-04-24 05:26:22.615512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.615529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:42776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.461 [2024-04-24 05:26:22.615544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.615560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:42784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.461 [2024-04-24 05:26:22.615576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.615593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:42792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.461 [2024-04-24 05:26:22.615608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.615624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:42800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.461 [2024-04-24 05:26:22.615649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.615682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:42808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.461 [2024-04-24 05:26:22.615696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.615712] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:42816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.461 [2024-04-24 05:26:22.615725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.615740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:42824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.461 [2024-04-24 05:26:22.615754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.615769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:42832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.461 [2024-04-24 05:26:22.615783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.615798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:42840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.461 [2024-04-24 05:26:22.615811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.615829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:42848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.461 [2024-04-24 05:26:22.615844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.615858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:42856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.461 [2024-04-24 05:26:22.615873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.615888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:42864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.461 [2024-04-24 05:26:22.615902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.615935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:42872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.461 [2024-04-24 05:26:22.615950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.615966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:42880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.461 [2024-04-24 05:26:22.615981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.616009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:42888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.461 [2024-04-24 05:26:22.616025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.616041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:42896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.461 [2024-04-24 05:26:22.616056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.616073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:42904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:45.461 [2024-04-24 05:26:22.616088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.616105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:42912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.461 [2024-04-24 05:26:22.616120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.616137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:42920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.461 [2024-04-24 05:26:22.616153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.616168] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c37cc0 is same with the state(5) to be set 00:30:45.461 [2024-04-24 05:26:22.616188] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:45.461 [2024-04-24 05:26:22.616200] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:45.461 [2024-04-24 05:26:22.616213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42928 len:8 PRP1 0x0 PRP2 0x0 00:30:45.461 [2024-04-24 05:26:22.616230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.461 [2024-04-24 05:26:22.616306] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c37cc0 was disconnected and freed. reset controller. 
00:30:45.461 [2024-04-24 05:26:22.620138] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.461 [2024-04-24 05:26:22.620218] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:45.461 [2024-04-24 05:26:22.620879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.461 [2024-04-24 05:26:22.621093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.461 [2024-04-24 05:26:22.621121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:45.461 [2024-04-24 05:26:22.621139] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:45.461 [2024-04-24 05:26:22.621378] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:45.461 [2024-04-24 05:26:22.621621] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.461 [2024-04-24 05:26:22.621654] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.461 [2024-04-24 05:26:22.621698] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.461 [2024-04-24 05:26:22.625287] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.461 [2024-04-24 05:26:22.634318] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.461 [2024-04-24 05:26:22.634792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.461 [2024-04-24 05:26:22.634987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.461 [2024-04-24 05:26:22.635017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:45.461 [2024-04-24 05:26:22.635035] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:45.461 [2024-04-24 05:26:22.635273] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:45.461 [2024-04-24 05:26:22.635515] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.461 [2024-04-24 05:26:22.635540] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.461 [2024-04-24 05:26:22.635557] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.461 [2024-04-24 05:26:22.639110] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.461 [2024-04-24 05:26:22.648139] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.461 [2024-04-24 05:26:22.648569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.461 [2024-04-24 05:26:22.648774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.461 [2024-04-24 05:26:22.648805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:45.461 [2024-04-24 05:26:22.648824] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:45.461 [2024-04-24 05:26:22.649061] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:45.461 [2024-04-24 05:26:22.649304] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.462 [2024-04-24 05:26:22.649329] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.462 [2024-04-24 05:26:22.649345] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.462 [2024-04-24 05:26:22.652910] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.462 [2024-04-24 05:26:22.662160] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.462 [2024-04-24 05:26:22.662566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.462 [2024-04-24 05:26:22.662773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.462 [2024-04-24 05:26:22.662804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:45.462 [2024-04-24 05:26:22.662823] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:45.462 [2024-04-24 05:26:22.663061] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:45.462 [2024-04-24 05:26:22.663303] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.462 [2024-04-24 05:26:22.663328] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.462 [2024-04-24 05:26:22.663344] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.462 [2024-04-24 05:26:22.666906] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.462 [2024-04-24 05:26:22.676141] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.462 [2024-04-24 05:26:22.676569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.462 [2024-04-24 05:26:22.676718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.462 [2024-04-24 05:26:22.676748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:45.462 [2024-04-24 05:26:22.676766] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:45.462 [2024-04-24 05:26:22.677002] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:45.462 [2024-04-24 05:26:22.677243] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.462 [2024-04-24 05:26:22.677268] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.462 [2024-04-24 05:26:22.677284] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.462 [2024-04-24 05:26:22.680848] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.462 [2024-04-24 05:26:22.690090] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.462 [2024-04-24 05:26:22.690527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.462 [2024-04-24 05:26:22.690741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.462 [2024-04-24 05:26:22.690767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:45.462 [2024-04-24 05:26:22.690784] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:45.462 [2024-04-24 05:26:22.691021] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:45.462 [2024-04-24 05:26:22.691275] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.462 [2024-04-24 05:26:22.691300] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.462 [2024-04-24 05:26:22.691316] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.462 [2024-04-24 05:26:22.694878] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.462 [2024-04-24 05:26:22.703909] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.462 [2024-04-24 05:26:22.704347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.462 [2024-04-24 05:26:22.704519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.462 [2024-04-24 05:26:22.704547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:45.462 [2024-04-24 05:26:22.704565] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:45.462 [2024-04-24 05:26:22.704816] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:45.462 [2024-04-24 05:26:22.705057] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.462 [2024-04-24 05:26:22.705081] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.462 [2024-04-24 05:26:22.705097] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.462 [2024-04-24 05:26:22.708658] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.462 [2024-04-24 05:26:22.717893] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.462 [2024-04-24 05:26:22.718327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.462 [2024-04-24 05:26:22.718495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.462 [2024-04-24 05:26:22.718523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:45.462 [2024-04-24 05:26:22.718541] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:45.462 [2024-04-24 05:26:22.718791] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:45.462 [2024-04-24 05:26:22.719033] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.462 [2024-04-24 05:26:22.719057] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.462 [2024-04-24 05:26:22.719073] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.462 [2024-04-24 05:26:22.722638] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.723 [2024-04-24 05:26:22.731884] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.723 [2024-04-24 05:26:22.732456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.723 [2024-04-24 05:26:22.732727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.723 [2024-04-24 05:26:22.732756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:45.723 [2024-04-24 05:26:22.732776] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:45.723 [2024-04-24 05:26:22.733014] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:45.723 [2024-04-24 05:26:22.733255] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.723 [2024-04-24 05:26:22.733280] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.723 [2024-04-24 05:26:22.733296] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.723 [2024-04-24 05:26:22.736866] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.723 [2024-04-24 05:26:22.745891] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.723 [2024-04-24 05:26:22.746280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.723 [2024-04-24 05:26:22.746501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.723 [2024-04-24 05:26:22.746542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:45.723 [2024-04-24 05:26:22.746558] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:45.723 [2024-04-24 05:26:22.746842] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:45.723 [2024-04-24 05:26:22.747087] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.723 [2024-04-24 05:26:22.747112] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.723 [2024-04-24 05:26:22.747128] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.723 [2024-04-24 05:26:22.750691] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.723 [2024-04-24 05:26:22.759729] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.723 [2024-04-24 05:26:22.760141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.723 [2024-04-24 05:26:22.760335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.723 [2024-04-24 05:26:22.760364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:45.723 [2024-04-24 05:26:22.760385] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:45.723 [2024-04-24 05:26:22.760622] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:45.723 [2024-04-24 05:26:22.760875] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.724 [2024-04-24 05:26:22.760898] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.724 [2024-04-24 05:26:22.760914] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.724 [2024-04-24 05:26:22.764457] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.724 [2024-04-24 05:26:22.773684] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.724 [2024-04-24 05:26:22.774095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.724 [2024-04-24 05:26:22.774247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.724 [2024-04-24 05:26:22.774275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:45.724 [2024-04-24 05:26:22.774293] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:45.724 [2024-04-24 05:26:22.774531] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:45.724 [2024-04-24 05:26:22.774789] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.724 [2024-04-24 05:26:22.774812] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.724 [2024-04-24 05:26:22.774826] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.724 [2024-04-24 05:26:22.778421] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.724 [2024-04-24 05:26:22.787534] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.724 [2024-04-24 05:26:22.787957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.724 [2024-04-24 05:26:22.788201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.724 [2024-04-24 05:26:22.788248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:45.724 [2024-04-24 05:26:22.788273] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:45.724 [2024-04-24 05:26:22.788512] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:45.724 [2024-04-24 05:26:22.788774] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.724 [2024-04-24 05:26:22.788797] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.724 [2024-04-24 05:26:22.788811] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.724 [2024-04-24 05:26:22.792402] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.724 [2024-04-24 05:26:22.801516] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.724 [2024-04-24 05:26:22.801944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.724 [2024-04-24 05:26:22.802118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.724 [2024-04-24 05:26:22.802146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:45.724 [2024-04-24 05:26:22.802164] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:45.724 [2024-04-24 05:26:22.802401] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:45.724 [2024-04-24 05:26:22.802656] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.724 [2024-04-24 05:26:22.802691] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.724 [2024-04-24 05:26:22.802707] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.724 [2024-04-24 05:26:22.806267] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.724 [2024-04-24 05:26:22.815508] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.724 [2024-04-24 05:26:22.815899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.724 [2024-04-24 05:26:22.816173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.724 [2024-04-24 05:26:22.816223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:45.724 [2024-04-24 05:26:22.816242] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:45.724 [2024-04-24 05:26:22.816481] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:45.724 [2024-04-24 05:26:22.816735] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.724 [2024-04-24 05:26:22.816760] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.724 [2024-04-24 05:26:22.816777] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.724 [2024-04-24 05:26:22.820329] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.724 [2024-04-24 05:26:22.829356] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.724 [2024-04-24 05:26:22.829948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.724 [2024-04-24 05:26:22.830172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.724 [2024-04-24 05:26:22.830199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:45.724 [2024-04-24 05:26:22.830218] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:45.724 [2024-04-24 05:26:22.830462] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:45.724 [2024-04-24 05:26:22.830718] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.724 [2024-04-24 05:26:22.830744] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.724 [2024-04-24 05:26:22.830760] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.724 [2024-04-24 05:26:22.834315] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.724 [2024-04-24 05:26:22.843341] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.724 [2024-04-24 05:26:22.843769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.724 [2024-04-24 05:26:22.843957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.724 [2024-04-24 05:26:22.843982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:45.724 [2024-04-24 05:26:22.843998] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:45.724 [2024-04-24 05:26:22.844259] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:45.724 [2024-04-24 05:26:22.844502] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.724 [2024-04-24 05:26:22.844527] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.724 [2024-04-24 05:26:22.844544] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.724 [2024-04-24 05:26:22.848110] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.724 [2024-04-24 05:26:22.857348] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.724 [2024-04-24 05:26:22.857785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.724 [2024-04-24 05:26:22.857959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.724 [2024-04-24 05:26:22.857988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:45.724 [2024-04-24 05:26:22.858006] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:45.724 [2024-04-24 05:26:22.858244] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:45.724 [2024-04-24 05:26:22.858487] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.724 [2024-04-24 05:26:22.858512] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.724 [2024-04-24 05:26:22.858528] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.724 [2024-04-24 05:26:22.862104] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.724 [2024-04-24 05:26:22.871104] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.724 [2024-04-24 05:26:22.871507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.724 [2024-04-24 05:26:22.871654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.724 [2024-04-24 05:26:22.871682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:45.724 [2024-04-24 05:26:22.871699] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:45.724 [2024-04-24 05:26:22.871913] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:45.724 [2024-04-24 05:26:22.872151] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.724 [2024-04-24 05:26:22.872174] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.724 [2024-04-24 05:26:22.872188] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.724 [2024-04-24 05:26:22.875625] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.724 [2024-04-24 05:26:22.884994] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.724 [2024-04-24 05:26:22.885430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.724 [2024-04-24 05:26:22.885601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.724 [2024-04-24 05:26:22.885641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:45.724 [2024-04-24 05:26:22.885662] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:45.724 [2024-04-24 05:26:22.885900] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:45.724 [2024-04-24 05:26:22.886141] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.724 [2024-04-24 05:26:22.886165] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.724 [2024-04-24 05:26:22.886180] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.724 [2024-04-24 05:26:22.889745] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.724 [2024-04-24 05:26:22.898982] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.724 [2024-04-24 05:26:22.899384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.725 [2024-04-24 05:26:22.899553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.725 [2024-04-24 05:26:22.899581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:45.725 [2024-04-24 05:26:22.899599] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:45.725 [2024-04-24 05:26:22.899848] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:45.725 [2024-04-24 05:26:22.900091] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.725 [2024-04-24 05:26:22.900115] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.725 [2024-04-24 05:26:22.900130] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.725 [2024-04-24 05:26:22.903695] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.725 [2024-04-24 05:26:22.912926] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.725 [2024-04-24 05:26:22.913484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.725 [2024-04-24 05:26:22.913679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.725 [2024-04-24 05:26:22.913708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:45.725 [2024-04-24 05:26:22.913725] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:45.725 [2024-04-24 05:26:22.913962] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:45.725 [2024-04-24 05:26:22.914203] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.725 [2024-04-24 05:26:22.914233] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.725 [2024-04-24 05:26:22.914250] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.725 [2024-04-24 05:26:22.917817] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.725 [2024-04-24 05:26:22.926850] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.725 [2024-04-24 05:26:22.927254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.725 [2024-04-24 05:26:22.927545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.725 [2024-04-24 05:26:22.927595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:45.725 [2024-04-24 05:26:22.927614] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:45.725 [2024-04-24 05:26:22.927861] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:45.725 [2024-04-24 05:26:22.928104] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.725 [2024-04-24 05:26:22.928129] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.725 [2024-04-24 05:26:22.928146] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.725 [2024-04-24 05:26:22.931708] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.725 [2024-04-24 05:26:22.940744] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.725 [2024-04-24 05:26:22.941180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.725 [2024-04-24 05:26:22.941424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.725 [2024-04-24 05:26:22.941489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:45.725 [2024-04-24 05:26:22.941508] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:45.725 [2024-04-24 05:26:22.941760] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:45.725 [2024-04-24 05:26:22.942002] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.725 [2024-04-24 05:26:22.942026] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.725 [2024-04-24 05:26:22.942043] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.725 [2024-04-24 05:26:22.945599] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.725 [2024-04-24 05:26:22.954633] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.725 [2024-04-24 05:26:22.955058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.725 [2024-04-24 05:26:22.955257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.725 [2024-04-24 05:26:22.955285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:45.725 [2024-04-24 05:26:22.955303] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:45.725 [2024-04-24 05:26:22.955541] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:45.725 [2024-04-24 05:26:22.955796] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.725 [2024-04-24 05:26:22.955821] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.725 [2024-04-24 05:26:22.955843] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.725 [2024-04-24 05:26:22.959398] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.725 [2024-04-24 05:26:22.968635] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.725 [2024-04-24 05:26:22.969074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.725 [2024-04-24 05:26:22.969267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.725 [2024-04-24 05:26:22.969333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:45.725 [2024-04-24 05:26:22.969352] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:45.725 [2024-04-24 05:26:22.969590] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:45.725 [2024-04-24 05:26:22.969845] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.725 [2024-04-24 05:26:22.969871] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.725 [2024-04-24 05:26:22.969888] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.725 [2024-04-24 05:26:22.973443] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.725 [2024-04-24 05:26:22.982477] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:45.725 [2024-04-24 05:26:22.982894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.725 [2024-04-24 05:26:22.983093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.725 [2024-04-24 05:26:22.983118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:45.725 [2024-04-24 05:26:22.983135] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:45.725 [2024-04-24 05:26:22.983400] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:45.725 [2024-04-24 05:26:22.983656] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:45.725 [2024-04-24 05:26:22.983682] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:45.725 [2024-04-24 05:26:22.983699] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:45.725 [2024-04-24 05:26:22.987252] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:45.985 [2024-04-24 05:26:22.996497] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:45.985 [2024-04-24 05:26:22.996915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.985 [2024-04-24 05:26:22.997089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.985 [2024-04-24 05:26:22.997116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:45.985 [2024-04-24 05:26:22.997134] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:45.985 [2024-04-24 05:26:22.997371] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:45.985 [2024-04-24 05:26:22.997612] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:45.985 [2024-04-24 05:26:22.997650] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:45.985 [2024-04-24 05:26:22.997667] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:45.985 [2024-04-24 05:26:23.001229] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:45.985 [2024-04-24 05:26:23.010487] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:45.985 [2024-04-24 05:26:23.010929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.985 [2024-04-24 05:26:23.011163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.985 [2024-04-24 05:26:23.011190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:45.985 [2024-04-24 05:26:23.011224] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:45.985 [2024-04-24 05:26:23.011462] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:45.985 [2024-04-24 05:26:23.011715] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:45.985 [2024-04-24 05:26:23.011740] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:45.985 [2024-04-24 05:26:23.011756] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:45.985 [2024-04-24 05:26:23.015312] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:45.985 [2024-04-24 05:26:23.024338] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:45.985 [2024-04-24 05:26:23.024772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.985 [2024-04-24 05:26:23.024943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.985 [2024-04-24 05:26:23.024972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:45.985 [2024-04-24 05:26:23.024990] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:45.985 [2024-04-24 05:26:23.025227] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:45.985 [2024-04-24 05:26:23.025469] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:45.985 [2024-04-24 05:26:23.025493] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:45.985 [2024-04-24 05:26:23.025509] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:45.985 [2024-04-24 05:26:23.029074] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:45.985 [2024-04-24 05:26:23.038312] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:45.985 [2024-04-24 05:26:23.038738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.985 [2024-04-24 05:26:23.038888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.985 [2024-04-24 05:26:23.038917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:45.985 [2024-04-24 05:26:23.038935] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:45.985 [2024-04-24 05:26:23.039173] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:45.985 [2024-04-24 05:26:23.039413] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:45.985 [2024-04-24 05:26:23.039438] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:45.985 [2024-04-24 05:26:23.039454] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:45.985 [2024-04-24 05:26:23.043014] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:45.985 [2024-04-24 05:26:23.052255] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:45.985 [2024-04-24 05:26:23.052790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.985 [2024-04-24 05:26:23.053109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.985 [2024-04-24 05:26:23.053169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:45.985 [2024-04-24 05:26:23.053187] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:45.985 [2024-04-24 05:26:23.053425] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:45.985 [2024-04-24 05:26:23.053686] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:45.985 [2024-04-24 05:26:23.053712] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:45.985 [2024-04-24 05:26:23.053735] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:45.985 [2024-04-24 05:26:23.057298] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:45.985 [2024-04-24 05:26:23.066112] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:45.985 [2024-04-24 05:26:23.066536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.985 [2024-04-24 05:26:23.066712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.985 [2024-04-24 05:26:23.066744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:45.985 [2024-04-24 05:26:23.066762] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:45.985 [2024-04-24 05:26:23.067000] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:45.985 [2024-04-24 05:26:23.067244] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:45.985 [2024-04-24 05:26:23.067269] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:45.985 [2024-04-24 05:26:23.067286] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:45.985 [2024-04-24 05:26:23.070856] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:45.985 [2024-04-24 05:26:23.080088] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:45.985 [2024-04-24 05:26:23.080491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.985 [2024-04-24 05:26:23.080687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.985 [2024-04-24 05:26:23.080717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:45.985 [2024-04-24 05:26:23.080736] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:45.985 [2024-04-24 05:26:23.080975] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:45.985 [2024-04-24 05:26:23.081217] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:45.985 [2024-04-24 05:26:23.081241] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:45.985 [2024-04-24 05:26:23.081257] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:45.985 [2024-04-24 05:26:23.084818] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:45.985 [2024-04-24 05:26:23.094055] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:45.985 [2024-04-24 05:26:23.094501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.985 [2024-04-24 05:26:23.094669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.985 [2024-04-24 05:26:23.094700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:45.985 [2024-04-24 05:26:23.094719] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:45.985 [2024-04-24 05:26:23.094958] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:45.985 [2024-04-24 05:26:23.095199] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:45.985 [2024-04-24 05:26:23.095222] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:45.985 [2024-04-24 05:26:23.095238] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:45.985 [2024-04-24 05:26:23.098800] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:45.985 [2024-04-24 05:26:23.108038] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:45.985 [2024-04-24 05:26:23.108441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.985 [2024-04-24 05:26:23.108649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.985 [2024-04-24 05:26:23.108676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:45.985 [2024-04-24 05:26:23.108693] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:45.985 [2024-04-24 05:26:23.108954] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:45.986 [2024-04-24 05:26:23.109196] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:45.986 [2024-04-24 05:26:23.109220] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:45.986 [2024-04-24 05:26:23.109235] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:45.986 [2024-04-24 05:26:23.112804] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:45.986 [2024-04-24 05:26:23.121955] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:45.986 [2024-04-24 05:26:23.122384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.986 [2024-04-24 05:26:23.122518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.986 [2024-04-24 05:26:23.122546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:45.986 [2024-04-24 05:26:23.122563] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:45.986 [2024-04-24 05:26:23.122812] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:45.986 [2024-04-24 05:26:23.123053] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:45.986 [2024-04-24 05:26:23.123078] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:45.986 [2024-04-24 05:26:23.123094] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:45.986 [2024-04-24 05:26:23.126660] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:45.986 [2024-04-24 05:26:23.135909] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:45.986 [2024-04-24 05:26:23.136343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.986 [2024-04-24 05:26:23.136525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.986 [2024-04-24 05:26:23.136555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:45.986 [2024-04-24 05:26:23.136573] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:45.986 [2024-04-24 05:26:23.136827] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:45.986 [2024-04-24 05:26:23.137071] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:45.986 [2024-04-24 05:26:23.137096] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:45.986 [2024-04-24 05:26:23.137113] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:45.986 [2024-04-24 05:26:23.140690] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:45.986 [2024-04-24 05:26:23.149739] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:45.986 [2024-04-24 05:26:23.150178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.986 [2024-04-24 05:26:23.150477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.986 [2024-04-24 05:26:23.150539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:45.986 [2024-04-24 05:26:23.150557] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:45.986 [2024-04-24 05:26:23.150808] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:45.986 [2024-04-24 05:26:23.151050] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:45.986 [2024-04-24 05:26:23.151075] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:45.986 [2024-04-24 05:26:23.151091] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:45.986 [2024-04-24 05:26:23.154663] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:45.986 [2024-04-24 05:26:23.163695] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:45.986 [2024-04-24 05:26:23.164187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.986 [2024-04-24 05:26:23.164407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.986 [2024-04-24 05:26:23.164441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:45.986 [2024-04-24 05:26:23.164457] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:45.986 [2024-04-24 05:26:23.164722] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:45.986 [2024-04-24 05:26:23.164964] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:45.986 [2024-04-24 05:26:23.164987] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:45.986 [2024-04-24 05:26:23.165003] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:45.986 [2024-04-24 05:26:23.168558] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:45.986 [2024-04-24 05:26:23.177592] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:45.986 [2024-04-24 05:26:23.178070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.986 [2024-04-24 05:26:23.178233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.986 [2024-04-24 05:26:23.178275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:45.986 [2024-04-24 05:26:23.178298] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:45.986 [2024-04-24 05:26:23.178537] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:45.986 [2024-04-24 05:26:23.178789] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:45.986 [2024-04-24 05:26:23.178814] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:45.986 [2024-04-24 05:26:23.178829] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:45.986 [2024-04-24 05:26:23.182386] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:45.986 [2024-04-24 05:26:23.191427] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:45.986 [2024-04-24 05:26:23.191836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.986 [2024-04-24 05:26:23.192149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.986 [2024-04-24 05:26:23.192208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:45.986 [2024-04-24 05:26:23.192226] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:45.986 [2024-04-24 05:26:23.192464] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:45.986 [2024-04-24 05:26:23.192731] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:45.986 [2024-04-24 05:26:23.192756] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:45.986 [2024-04-24 05:26:23.192771] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:45.986 [2024-04-24 05:26:23.196329] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:45.986 [2024-04-24 05:26:23.205372] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:45.986 [2024-04-24 05:26:23.205781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.986 [2024-04-24 05:26:23.206049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.986 [2024-04-24 05:26:23.206101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:45.986 [2024-04-24 05:26:23.206119] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:45.986 [2024-04-24 05:26:23.206357] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:45.986 [2024-04-24 05:26:23.206599] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:45.986 [2024-04-24 05:26:23.206623] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:45.986 [2024-04-24 05:26:23.206650] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:45.986 [2024-04-24 05:26:23.210222] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:45.986 [2024-04-24 05:26:23.219261] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:45.986 [2024-04-24 05:26:23.219688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.986 [2024-04-24 05:26:23.219882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.986 [2024-04-24 05:26:23.219911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:45.986 [2024-04-24 05:26:23.219929] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:45.986 [2024-04-24 05:26:23.220173] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:45.986 [2024-04-24 05:26:23.220415] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:45.986 [2024-04-24 05:26:23.220438] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:45.986 [2024-04-24 05:26:23.220454] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:45.986 [2024-04-24 05:26:23.224026] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:45.986 [2024-04-24 05:26:23.233288] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:45.986 [2024-04-24 05:26:23.233713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.986 [2024-04-24 05:26:23.233881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.986 [2024-04-24 05:26:23.233910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:45.986 [2024-04-24 05:26:23.233928] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:45.986 [2024-04-24 05:26:23.234166] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:45.986 [2024-04-24 05:26:23.234407] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:45.986 [2024-04-24 05:26:23.234431] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:45.986 [2024-04-24 05:26:23.234447] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:45.986 [2024-04-24 05:26:23.238007] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:45.986 [2024-04-24 05:26:23.247244] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:45.987 [2024-04-24 05:26:23.247742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.987 [2024-04-24 05:26:23.247884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.987 [2024-04-24 05:26:23.247914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:45.987 [2024-04-24 05:26:23.247932] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:45.987 [2024-04-24 05:26:23.248170] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:45.987 [2024-04-24 05:26:23.248411] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:45.987 [2024-04-24 05:26:23.248435] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:45.987 [2024-04-24 05:26:23.248451] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:45.987 [2024-04-24 05:26:23.252012] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:46.246 [2024-04-24 05:26:23.261258] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:46.246 [2024-04-24 05:26:23.261685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.246 [2024-04-24 05:26:23.261946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.246 [2024-04-24 05:26:23.261972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:46.246 [2024-04-24 05:26:23.261989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:46.246 [2024-04-24 05:26:23.262227] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:46.246 [2024-04-24 05:26:23.262481] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:46.246 [2024-04-24 05:26:23.262505] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:46.246 [2024-04-24 05:26:23.262521] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:46.246 [2024-04-24 05:26:23.266081] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:46.246 [2024-04-24 05:26:23.275109] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:46.246 [2024-04-24 05:26:23.275504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.246 [2024-04-24 05:26:23.275670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.246 [2024-04-24 05:26:23.275700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:46.246 [2024-04-24 05:26:23.275719] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:46.246 [2024-04-24 05:26:23.275956] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:46.246 [2024-04-24 05:26:23.276197] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:46.246 [2024-04-24 05:26:23.276221] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:46.246 [2024-04-24 05:26:23.276236] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:46.246 [2024-04-24 05:26:23.279799] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:46.246 [2024-04-24 05:26:23.289040] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:46.246 [2024-04-24 05:26:23.289477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.246 [2024-04-24 05:26:23.289645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.246 [2024-04-24 05:26:23.289675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:46.246 [2024-04-24 05:26:23.289694] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:46.246 [2024-04-24 05:26:23.289930] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:46.246 [2024-04-24 05:26:23.290172] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:46.246 [2024-04-24 05:26:23.290196] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:46.246 [2024-04-24 05:26:23.290212] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:46.246 [2024-04-24 05:26:23.293773] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:46.246 [2024-04-24 05:26:23.303020] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.246 [2024-04-24 05:26:23.303455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.246 [2024-04-24 05:26:23.303636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.246 [2024-04-24 05:26:23.303666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:46.246 [2024-04-24 05:26:23.303685] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:46.246 [2024-04-24 05:26:23.303922] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:46.246 [2024-04-24 05:26:23.304164] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.246 [2024-04-24 05:26:23.304193] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.246 [2024-04-24 05:26:23.304210] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.246 [2024-04-24 05:26:23.307778] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.246 [2024-04-24 05:26:23.317027] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.246 [2024-04-24 05:26:23.317460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.246 [2024-04-24 05:26:23.317639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.246 [2024-04-24 05:26:23.317667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:46.246 [2024-04-24 05:26:23.317685] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:46.246 [2024-04-24 05:26:23.317922] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:46.246 [2024-04-24 05:26:23.318164] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.246 [2024-04-24 05:26:23.318189] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.246 [2024-04-24 05:26:23.318205] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.246 [2024-04-24 05:26:23.321773] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.246 [2024-04-24 05:26:23.331020] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.246 [2024-04-24 05:26:23.331432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.246 [2024-04-24 05:26:23.331621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.246 [2024-04-24 05:26:23.331659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:46.246 [2024-04-24 05:26:23.331677] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:46.246 [2024-04-24 05:26:23.331915] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:46.246 [2024-04-24 05:26:23.332156] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.246 [2024-04-24 05:26:23.332180] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.246 [2024-04-24 05:26:23.332196] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.246 [2024-04-24 05:26:23.335765] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.246 [2024-04-24 05:26:23.345010] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.246 [2024-04-24 05:26:23.345536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.246 [2024-04-24 05:26:23.345739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.246 [2024-04-24 05:26:23.345768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:46.246 [2024-04-24 05:26:23.345794] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:46.246 [2024-04-24 05:26:23.346031] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:46.246 [2024-04-24 05:26:23.346275] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.247 [2024-04-24 05:26:23.346298] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.247 [2024-04-24 05:26:23.346319] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.247 [2024-04-24 05:26:23.349884] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.247 [2024-04-24 05:26:23.358933] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.247 [2024-04-24 05:26:23.359495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.247 [2024-04-24 05:26:23.359688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.247 [2024-04-24 05:26:23.359717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:46.247 [2024-04-24 05:26:23.359744] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:46.247 [2024-04-24 05:26:23.359982] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:46.247 [2024-04-24 05:26:23.360222] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.247 [2024-04-24 05:26:23.360246] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.247 [2024-04-24 05:26:23.360262] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.247 [2024-04-24 05:26:23.363832] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.247 [2024-04-24 05:26:23.372772] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.247 [2024-04-24 05:26:23.373298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.247 [2024-04-24 05:26:23.373651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.247 [2024-04-24 05:26:23.373714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:46.247 [2024-04-24 05:26:23.373733] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:46.247 [2024-04-24 05:26:23.373970] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:46.247 [2024-04-24 05:26:23.374212] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.247 [2024-04-24 05:26:23.374237] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.247 [2024-04-24 05:26:23.374253] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.247 [2024-04-24 05:26:23.377818] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.247 [2024-04-24 05:26:23.386641] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.247 [2024-04-24 05:26:23.387067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.247 [2024-04-24 05:26:23.387343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.247 [2024-04-24 05:26:23.387373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:46.247 [2024-04-24 05:26:23.387391] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:46.247 [2024-04-24 05:26:23.387642] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:46.247 [2024-04-24 05:26:23.387885] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.247 [2024-04-24 05:26:23.387910] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.247 [2024-04-24 05:26:23.387926] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.247 [2024-04-24 05:26:23.391488] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.247 [2024-04-24 05:26:23.400519] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.247 [2024-04-24 05:26:23.400955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.247 [2024-04-24 05:26:23.401151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.247 [2024-04-24 05:26:23.401179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:46.247 [2024-04-24 05:26:23.401197] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:46.247 [2024-04-24 05:26:23.401436] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:46.247 [2024-04-24 05:26:23.401692] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.247 [2024-04-24 05:26:23.401718] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.247 [2024-04-24 05:26:23.401735] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.247 [2024-04-24 05:26:23.405286] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.247 [2024-04-24 05:26:23.414519] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.247 [2024-04-24 05:26:23.414953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.247 [2024-04-24 05:26:23.415118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.247 [2024-04-24 05:26:23.415147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:46.247 [2024-04-24 05:26:23.415164] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:46.247 [2024-04-24 05:26:23.415401] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:46.247 [2024-04-24 05:26:23.415654] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.247 [2024-04-24 05:26:23.415680] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.247 [2024-04-24 05:26:23.415696] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.247 [2024-04-24 05:26:23.419252] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.247 [2024-04-24 05:26:23.428491] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.247 [2024-04-24 05:26:23.428949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.247 [2024-04-24 05:26:23.429121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.247 [2024-04-24 05:26:23.429151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:46.247 [2024-04-24 05:26:23.429169] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:46.247 [2024-04-24 05:26:23.429406] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:46.247 [2024-04-24 05:26:23.429660] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.247 [2024-04-24 05:26:23.429685] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.247 [2024-04-24 05:26:23.429700] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.247 [2024-04-24 05:26:23.433256] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.247 [2024-04-24 05:26:23.442492] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.247 [2024-04-24 05:26:23.442911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.247 [2024-04-24 05:26:23.443244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.247 [2024-04-24 05:26:23.443296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:46.247 [2024-04-24 05:26:23.443315] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:46.247 [2024-04-24 05:26:23.443553] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:46.247 [2024-04-24 05:26:23.443808] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.247 [2024-04-24 05:26:23.443834] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.247 [2024-04-24 05:26:23.443850] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.247 [2024-04-24 05:26:23.447406] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.247 [2024-04-24 05:26:23.456437] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.247 [2024-04-24 05:26:23.456858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.247 [2024-04-24 05:26:23.457027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.247 [2024-04-24 05:26:23.457057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:46.247 [2024-04-24 05:26:23.457075] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:46.247 [2024-04-24 05:26:23.457313] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:46.247 [2024-04-24 05:26:23.457554] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.247 [2024-04-24 05:26:23.457579] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.247 [2024-04-24 05:26:23.457595] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.247 [2024-04-24 05:26:23.461159] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.247 [2024-04-24 05:26:23.470394] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.247 [2024-04-24 05:26:23.470897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.247 [2024-04-24 05:26:23.471049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.247 [2024-04-24 05:26:23.471078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:46.247 [2024-04-24 05:26:23.471096] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:46.247 [2024-04-24 05:26:23.471334] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:46.247 [2024-04-24 05:26:23.471574] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.247 [2024-04-24 05:26:23.471598] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.247 [2024-04-24 05:26:23.471614] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.247 [2024-04-24 05:26:23.475184] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.247 [2024-04-24 05:26:23.484209] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.247 [2024-04-24 05:26:23.484644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.247 [2024-04-24 05:26:23.484803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.247 [2024-04-24 05:26:23.484832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:46.247 [2024-04-24 05:26:23.484849] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:46.247 [2024-04-24 05:26:23.485087] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:46.247 [2024-04-24 05:26:23.485327] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.247 [2024-04-24 05:26:23.485352] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.247 [2024-04-24 05:26:23.485369] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.247 [2024-04-24 05:26:23.488937] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.247 [2024-04-24 05:26:23.498173] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.247 [2024-04-24 05:26:23.498577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.247 [2024-04-24 05:26:23.498749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.247 [2024-04-24 05:26:23.498778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:46.247 [2024-04-24 05:26:23.498796] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:46.247 [2024-04-24 05:26:23.499035] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:46.247 [2024-04-24 05:26:23.499276] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.247 [2024-04-24 05:26:23.499300] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.247 [2024-04-24 05:26:23.499316] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.247 [2024-04-24 05:26:23.502882] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.247 [2024-04-24 05:26:23.512129] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.247 [2024-04-24 05:26:23.512555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.247 [2024-04-24 05:26:23.512733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.247 [2024-04-24 05:26:23.512763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:46.247 [2024-04-24 05:26:23.512782] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:46.247 [2024-04-24 05:26:23.513019] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:46.247 [2024-04-24 05:26:23.513260] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.247 [2024-04-24 05:26:23.513285] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.247 [2024-04-24 05:26:23.513300] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.506 [2024-04-24 05:26:23.516870] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.506 [2024-04-24 05:26:23.526114] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.506 [2024-04-24 05:26:23.526523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.506 [2024-04-24 05:26:23.526728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.506 [2024-04-24 05:26:23.526758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:46.506 [2024-04-24 05:26:23.526775] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:46.506 [2024-04-24 05:26:23.527012] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:46.506 [2024-04-24 05:26:23.527254] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.506 [2024-04-24 05:26:23.527279] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.506 [2024-04-24 05:26:23.527296] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.506 [2024-04-24 05:26:23.530864] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.506 [2024-04-24 05:26:23.540102] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.506 [2024-04-24 05:26:23.540530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.506 [2024-04-24 05:26:23.540669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.506 [2024-04-24 05:26:23.540698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:46.506 [2024-04-24 05:26:23.540716] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:46.506 [2024-04-24 05:26:23.540954] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:46.506 [2024-04-24 05:26:23.541195] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.506 [2024-04-24 05:26:23.541219] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.506 [2024-04-24 05:26:23.541236] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.506 [2024-04-24 05:26:23.544801] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.506 [2024-04-24 05:26:23.554034] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.506 [2024-04-24 05:26:23.554459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.506 [2024-04-24 05:26:23.554654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.506 [2024-04-24 05:26:23.554683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:46.506 [2024-04-24 05:26:23.554701] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:46.506 [2024-04-24 05:26:23.554938] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:46.506 [2024-04-24 05:26:23.555178] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.506 [2024-04-24 05:26:23.555203] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.506 [2024-04-24 05:26:23.555220] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.506 [2024-04-24 05:26:23.558793] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.506 [2024-04-24 05:26:23.567867] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.506 [2024-04-24 05:26:23.568295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.506 [2024-04-24 05:26:23.568492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.506 [2024-04-24 05:26:23.568521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:46.506 [2024-04-24 05:26:23.568545] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:46.506 [2024-04-24 05:26:23.568797] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:46.506 [2024-04-24 05:26:23.569040] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.506 [2024-04-24 05:26:23.569064] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.506 [2024-04-24 05:26:23.569080] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.506 [2024-04-24 05:26:23.572647] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.506 [2024-04-24 05:26:23.581693] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.506 [2024-04-24 05:26:23.582120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.506 [2024-04-24 05:26:23.582413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.506 [2024-04-24 05:26:23.582463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:46.506 [2024-04-24 05:26:23.582481] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:46.506 [2024-04-24 05:26:23.582733] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:46.506 [2024-04-24 05:26:23.582975] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.506 [2024-04-24 05:26:23.582998] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.506 [2024-04-24 05:26:23.583014] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.506 [2024-04-24 05:26:23.586585] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.506 [2024-04-24 05:26:23.595613] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.506 [2024-04-24 05:26:23.596002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.506 [2024-04-24 05:26:23.596230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.506 [2024-04-24 05:26:23.596277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:46.506 [2024-04-24 05:26:23.596296] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:46.506 [2024-04-24 05:26:23.596533] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:46.507 [2024-04-24 05:26:23.596783] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.507 [2024-04-24 05:26:23.596807] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.507 [2024-04-24 05:26:23.596823] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.507 [2024-04-24 05:26:23.600373] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.507 [2024-04-24 05:26:23.609603] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.507 [2024-04-24 05:26:23.610050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.507 [2024-04-24 05:26:23.610266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.507 [2024-04-24 05:26:23.610314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:46.507 [2024-04-24 05:26:23.610332] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:46.507 [2024-04-24 05:26:23.610574] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:46.507 [2024-04-24 05:26:23.610826] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.507 [2024-04-24 05:26:23.610851] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.507 [2024-04-24 05:26:23.610866] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.507 [2024-04-24 05:26:23.614414] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.507 [2024-04-24 05:26:23.623438] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.507 [2024-04-24 05:26:23.623846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.507 [2024-04-24 05:26:23.624075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.507 [2024-04-24 05:26:23.624122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:46.507 [2024-04-24 05:26:23.624140] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:46.507 [2024-04-24 05:26:23.624378] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:46.507 [2024-04-24 05:26:23.624621] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.507 [2024-04-24 05:26:23.624656] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.507 [2024-04-24 05:26:23.624678] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.507 [2024-04-24 05:26:23.628247] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.507 [2024-04-24 05:26:23.637293] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.507 [2024-04-24 05:26:23.637719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.507 [2024-04-24 05:26:23.637894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.507 [2024-04-24 05:26:23.637923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:46.507 [2024-04-24 05:26:23.637942] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:46.507 [2024-04-24 05:26:23.638180] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:46.507 [2024-04-24 05:26:23.638423] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.507 [2024-04-24 05:26:23.638448] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.507 [2024-04-24 05:26:23.638464] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.507 [2024-04-24 05:26:23.642028] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.507 [2024-04-24 05:26:23.651376] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.507 [2024-04-24 05:26:23.651794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.507 [2024-04-24 05:26:23.651932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.507 [2024-04-24 05:26:23.651959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:46.507 [2024-04-24 05:26:23.651977] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:46.507 [2024-04-24 05:26:23.652221] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:46.507 [2024-04-24 05:26:23.652461] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.507 [2024-04-24 05:26:23.652487] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.507 [2024-04-24 05:26:23.652503] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.507 [2024-04-24 05:26:23.656074] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.507 [2024-04-24 05:26:23.665301] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.507 [2024-04-24 05:26:23.665736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.507 [2024-04-24 05:26:23.665903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.507 [2024-04-24 05:26:23.665930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:46.507 [2024-04-24 05:26:23.665948] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:46.507 [2024-04-24 05:26:23.666185] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:46.507 [2024-04-24 05:26:23.666428] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.507 [2024-04-24 05:26:23.666453] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.507 [2024-04-24 05:26:23.666470] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.507 [2024-04-24 05:26:23.670032] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.507 [2024-04-24 05:26:23.679260] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.507 [2024-04-24 05:26:23.679697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.507 [2024-04-24 05:26:23.679862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.507 [2024-04-24 05:26:23.679890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:46.507 [2024-04-24 05:26:23.679908] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:46.507 [2024-04-24 05:26:23.680145] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:46.507 [2024-04-24 05:26:23.680387] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.507 [2024-04-24 05:26:23.680412] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.507 [2024-04-24 05:26:23.680429] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.507 [2024-04-24 05:26:23.683993] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.507 [2024-04-24 05:26:23.693216] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.507 [2024-04-24 05:26:23.693643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.507 [2024-04-24 05:26:23.693840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.507 [2024-04-24 05:26:23.693867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:46.507 [2024-04-24 05:26:23.693884] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:46.507 [2024-04-24 05:26:23.694122] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:46.507 [2024-04-24 05:26:23.694371] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.507 [2024-04-24 05:26:23.694396] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.507 [2024-04-24 05:26:23.694413] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.507 [2024-04-24 05:26:23.697974] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.507 [2024-04-24 05:26:23.707203] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.507 [2024-04-24 05:26:23.707633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.507 [2024-04-24 05:26:23.707805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.507 [2024-04-24 05:26:23.707833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:46.507 [2024-04-24 05:26:23.707850] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:46.507 [2024-04-24 05:26:23.708088] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:46.507 [2024-04-24 05:26:23.708330] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.507 [2024-04-24 05:26:23.708353] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.507 [2024-04-24 05:26:23.708369] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.507 [2024-04-24 05:26:23.711932] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.507 [2024-04-24 05:26:23.721156] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.507 [2024-04-24 05:26:23.721679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.507 [2024-04-24 05:26:23.721853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.507 [2024-04-24 05:26:23.721880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:46.507 [2024-04-24 05:26:23.721898] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:46.507 [2024-04-24 05:26:23.722134] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:46.507 [2024-04-24 05:26:23.722377] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.507 [2024-04-24 05:26:23.722402] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.507 [2024-04-24 05:26:23.722418] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.507 [2024-04-24 05:26:23.725982] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.508 [2024-04-24 05:26:23.735005] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.508 [2024-04-24 05:26:23.735405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.508 [2024-04-24 05:26:23.735568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.508 [2024-04-24 05:26:23.735596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:46.508 [2024-04-24 05:26:23.735613] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:46.508 [2024-04-24 05:26:23.735860] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:46.508 [2024-04-24 05:26:23.736101] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.508 [2024-04-24 05:26:23.736125] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.508 [2024-04-24 05:26:23.736148] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.508 [2024-04-24 05:26:23.739710] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.508 [2024-04-24 05:26:23.748937] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.508 [2024-04-24 05:26:23.749369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.508 [2024-04-24 05:26:23.749514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.508 [2024-04-24 05:26:23.749542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:46.508 [2024-04-24 05:26:23.749559] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:46.508 [2024-04-24 05:26:23.749807] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:46.508 [2024-04-24 05:26:23.750049] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.508 [2024-04-24 05:26:23.750073] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.508 [2024-04-24 05:26:23.750089] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.508 [2024-04-24 05:26:23.753649] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.508 [2024-04-24 05:26:23.762879] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.508 [2024-04-24 05:26:23.763282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.508 [2024-04-24 05:26:23.763480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.508 [2024-04-24 05:26:23.763508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:46.508 [2024-04-24 05:26:23.763525] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:46.508 [2024-04-24 05:26:23.763775] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:46.508 [2024-04-24 05:26:23.764016] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.508 [2024-04-24 05:26:23.764041] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.508 [2024-04-24 05:26:23.764057] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.508 [2024-04-24 05:26:23.767611] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.766 [2024-04-24 05:26:23.776862] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.766 [2024-04-24 05:26:23.777275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.766 [2024-04-24 05:26:23.777458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.766 [2024-04-24 05:26:23.777488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:46.766 [2024-04-24 05:26:23.777506] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:46.766 [2024-04-24 05:26:23.777764] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:46.766 [2024-04-24 05:26:23.778005] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.766 [2024-04-24 05:26:23.778029] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.766 [2024-04-24 05:26:23.778050] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.766 [2024-04-24 05:26:23.781614] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.766 [2024-04-24 05:26:23.790868] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.766 [2024-04-24 05:26:23.791305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.766 [2024-04-24 05:26:23.791474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.766 [2024-04-24 05:26:23.791503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:46.766 [2024-04-24 05:26:23.791521] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:46.766 [2024-04-24 05:26:23.791769] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:46.766 [2024-04-24 05:26:23.792011] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.766 [2024-04-24 05:26:23.792035] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.766 [2024-04-24 05:26:23.792051] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.766 [2024-04-24 05:26:23.795610] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.766 [2024-04-24 05:26:23.804854] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.766 [2024-04-24 05:26:23.805261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.766 [2024-04-24 05:26:23.805434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.766 [2024-04-24 05:26:23.805462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:46.766 [2024-04-24 05:26:23.805479] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:46.766 [2024-04-24 05:26:23.805730] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:46.766 [2024-04-24 05:26:23.805972] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.766 [2024-04-24 05:26:23.805997] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.766 [2024-04-24 05:26:23.806013] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.766 [2024-04-24 05:26:23.809566] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.766 [2024-04-24 05:26:23.818815] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.766 [2024-04-24 05:26:23.819216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.766 [2024-04-24 05:26:23.819387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.766 [2024-04-24 05:26:23.819416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:46.766 [2024-04-24 05:26:23.819433] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:46.766 [2024-04-24 05:26:23.819829] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:46.766 [2024-04-24 05:26:23.820074] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.766 [2024-04-24 05:26:23.820099] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.766 [2024-04-24 05:26:23.820115] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.766 [2024-04-24 05:26:23.823678] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.766 [2024-04-24 05:26:23.832709] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.766 [2024-04-24 05:26:23.833136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.766 [2024-04-24 05:26:23.833317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.766 [2024-04-24 05:26:23.833345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:46.766 [2024-04-24 05:26:23.833362] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:46.766 [2024-04-24 05:26:23.833600] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:46.766 [2024-04-24 05:26:23.833849] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.766 [2024-04-24 05:26:23.833874] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.766 [2024-04-24 05:26:23.833890] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.766 [2024-04-24 05:26:23.837447] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.766 [2024-04-24 05:26:23.846685] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.766 [2024-04-24 05:26:23.847069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.766 [2024-04-24 05:26:23.847244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.766 [2024-04-24 05:26:23.847272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:46.766 [2024-04-24 05:26:23.847290] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:46.766 [2024-04-24 05:26:23.847528] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:46.766 [2024-04-24 05:26:23.847783] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.766 [2024-04-24 05:26:23.847809] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.766 [2024-04-24 05:26:23.847825] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.766 [2024-04-24 05:26:23.851379] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.766 [2024-04-24 05:26:23.860620] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.766 [2024-04-24 05:26:23.861053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.766 [2024-04-24 05:26:23.861198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.767 [2024-04-24 05:26:23.861227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:46.767 [2024-04-24 05:26:23.861245] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:46.767 [2024-04-24 05:26:23.861482] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:46.767 [2024-04-24 05:26:23.861733] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.767 [2024-04-24 05:26:23.861758] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.767 [2024-04-24 05:26:23.861774] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.767 [2024-04-24 05:26:23.865327] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.767 [2024-04-24 05:26:23.874562] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.767 [2024-04-24 05:26:23.875015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.767 [2024-04-24 05:26:23.875188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.767 [2024-04-24 05:26:23.875217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:46.767 [2024-04-24 05:26:23.875235] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:46.767 [2024-04-24 05:26:23.875473] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:46.767 [2024-04-24 05:26:23.875727] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.767 [2024-04-24 05:26:23.875753] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.767 [2024-04-24 05:26:23.875770] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.767 [2024-04-24 05:26:23.879322] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.767 [2024-04-24 05:26:23.888549] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.767 [2024-04-24 05:26:23.888983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.767 [2024-04-24 05:26:23.889151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.767 [2024-04-24 05:26:23.889181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:46.767 [2024-04-24 05:26:23.889199] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:46.767 [2024-04-24 05:26:23.889437] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:46.767 [2024-04-24 05:26:23.889690] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.767 [2024-04-24 05:26:23.889715] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.767 [2024-04-24 05:26:23.889731] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.767 [2024-04-24 05:26:23.893286] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.767 [2024-04-24 05:26:23.902521] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.767 [2024-04-24 05:26:23.902951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.767 [2024-04-24 05:26:23.903113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.767 [2024-04-24 05:26:23.903143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:46.767 [2024-04-24 05:26:23.903161] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:46.767 [2024-04-24 05:26:23.903399] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:46.767 [2024-04-24 05:26:23.903652] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.767 [2024-04-24 05:26:23.903678] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.767 [2024-04-24 05:26:23.903694] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.767 [2024-04-24 05:26:23.907245] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.767 [2024-04-24 05:26:23.916480] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:46.767 [2024-04-24 05:26:23.916887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.767 [2024-04-24 05:26:23.917064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.767 [2024-04-24 05:26:23.917094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:46.767 [2024-04-24 05:26:23.917112] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:46.767 [2024-04-24 05:26:23.917350] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:46.767 [2024-04-24 05:26:23.917592] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:46.767 [2024-04-24 05:26:23.917617] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:46.767 [2024-04-24 05:26:23.917644] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:46.767 [2024-04-24 05:26:23.921201] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:46.767 [2024-04-24 05:26:23.930430] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:46.767 [2024-04-24 05:26:23.930813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.767 [2024-04-24 05:26:23.930979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.767 [2024-04-24 05:26:23.931008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:46.767 [2024-04-24 05:26:23.931025] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:46.767 [2024-04-24 05:26:23.931263] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:46.767 [2024-04-24 05:26:23.931505] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:46.767 [2024-04-24 05:26:23.931529] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:46.767 [2024-04-24 05:26:23.931546] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:46.767 [2024-04-24 05:26:23.935109] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:46.767 [2024-04-24 05:26:23.944340] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:46.767 [2024-04-24 05:26:23.944777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.767 [2024-04-24 05:26:23.944926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.767 [2024-04-24 05:26:23.944955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:46.767 [2024-04-24 05:26:23.944973] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:46.767 [2024-04-24 05:26:23.945211] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:46.767 [2024-04-24 05:26:23.945452] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:46.767 [2024-04-24 05:26:23.945477] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:46.767 [2024-04-24 05:26:23.945493] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:46.767 [2024-04-24 05:26:23.949056] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:46.767 [2024-04-24 05:26:23.958290] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:46.767 [2024-04-24 05:26:23.958701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.767 [2024-04-24 05:26:23.958883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.767 [2024-04-24 05:26:23.958911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:46.767 [2024-04-24 05:26:23.958934] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:46.767 [2024-04-24 05:26:23.959173] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:46.767 [2024-04-24 05:26:23.959415] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:46.767 [2024-04-24 05:26:23.959440] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:46.767 [2024-04-24 05:26:23.959456] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:46.767 [2024-04-24 05:26:23.963019] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:46.767 [2024-04-24 05:26:23.972248] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:46.767 [2024-04-24 05:26:23.972675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.767 [2024-04-24 05:26:23.972846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.767 [2024-04-24 05:26:23.972874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:46.767 [2024-04-24 05:26:23.972891] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:46.767 [2024-04-24 05:26:23.973129] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:46.767 [2024-04-24 05:26:23.973371] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:46.767 [2024-04-24 05:26:23.973396] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:46.767 [2024-04-24 05:26:23.973412] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:46.767 [2024-04-24 05:26:23.976977] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:46.767 [2024-04-24 05:26:23.986203] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:46.767 [2024-04-24 05:26:23.986638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.767 [2024-04-24 05:26:23.986837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.767 [2024-04-24 05:26:23.986865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:46.767 [2024-04-24 05:26:23.986884] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:46.767 [2024-04-24 05:26:23.987122] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:46.767 [2024-04-24 05:26:23.987364] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:46.767 [2024-04-24 05:26:23.987388] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:46.767 [2024-04-24 05:26:23.987404] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:46.767 [2024-04-24 05:26:23.990968] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:46.767 [2024-04-24 05:26:24.000197] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:46.767 [2024-04-24 05:26:24.000596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.767 [2024-04-24 05:26:24.000783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.767 [2024-04-24 05:26:24.000812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:46.767 [2024-04-24 05:26:24.000835] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:46.767 [2024-04-24 05:26:24.001075] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:46.767 [2024-04-24 05:26:24.001318] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:46.767 [2024-04-24 05:26:24.001342] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:46.767 [2024-04-24 05:26:24.001358] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:46.767 [2024-04-24 05:26:24.004919] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:46.767 [2024-04-24 05:26:24.014150] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:46.767 [2024-04-24 05:26:24.014586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.767 [2024-04-24 05:26:24.014767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.767 [2024-04-24 05:26:24.014796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:46.767 [2024-04-24 05:26:24.014814] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:46.767 [2024-04-24 05:26:24.015051] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:46.767 [2024-04-24 05:26:24.015292] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:46.767 [2024-04-24 05:26:24.015316] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:46.767 [2024-04-24 05:26:24.015333] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:46.767 [2024-04-24 05:26:24.018893] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:46.767 [2024-04-24 05:26:24.028129] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:46.767 [2024-04-24 05:26:24.028540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.767 [2024-04-24 05:26:24.028690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.767 [2024-04-24 05:26:24.028721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:46.767 [2024-04-24 05:26:24.028740] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:46.767 [2024-04-24 05:26:24.028978] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:46.767 [2024-04-24 05:26:24.029219] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:46.767 [2024-04-24 05:26:24.029244] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:46.767 [2024-04-24 05:26:24.029260] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:46.767 [2024-04-24 05:26:24.032824] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:47.026 [2024-04-24 05:26:24.042061] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:47.026 [2024-04-24 05:26:24.042490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.026 [2024-04-24 05:26:24.042687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.026 [2024-04-24 05:26:24.042718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:47.026 [2024-04-24 05:26:24.042736] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:47.026 [2024-04-24 05:26:24.042980] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:47.026 [2024-04-24 05:26:24.043223] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:47.026 [2024-04-24 05:26:24.043247] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:47.026 [2024-04-24 05:26:24.043263] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:47.026 [2024-04-24 05:26:24.046824] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:47.026 [2024-04-24 05:26:24.056056] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:47.026 [2024-04-24 05:26:24.056482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.026 [2024-04-24 05:26:24.056676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.026 [2024-04-24 05:26:24.056707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:47.026 [2024-04-24 05:26:24.056726] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:47.026 [2024-04-24 05:26:24.056963] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:47.026 [2024-04-24 05:26:24.057206] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:47.026 [2024-04-24 05:26:24.057231] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:47.026 [2024-04-24 05:26:24.057248] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:47.026 [2024-04-24 05:26:24.060811] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:47.026 [2024-04-24 05:26:24.070046] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:47.026 [2024-04-24 05:26:24.070472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.026 [2024-04-24 05:26:24.070623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.026 [2024-04-24 05:26:24.070662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:47.026 [2024-04-24 05:26:24.070681] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:47.026 [2024-04-24 05:26:24.070920] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:47.026 [2024-04-24 05:26:24.071163] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:47.026 [2024-04-24 05:26:24.071188] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:47.026 [2024-04-24 05:26:24.071204] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:47.026 [2024-04-24 05:26:24.074768] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:47.026 [2024-04-24 05:26:24.083998] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:47.026 [2024-04-24 05:26:24.084435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.026 [2024-04-24 05:26:24.084580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.026 [2024-04-24 05:26:24.084609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:47.026 [2024-04-24 05:26:24.084636] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:47.026 [2024-04-24 05:26:24.084876] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:47.026 [2024-04-24 05:26:24.085124] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:47.026 [2024-04-24 05:26:24.085150] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:47.026 [2024-04-24 05:26:24.085166] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:47.026 [2024-04-24 05:26:24.088730] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:47.026 [2024-04-24 05:26:24.097966] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:47.026 [2024-04-24 05:26:24.098394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.026 [2024-04-24 05:26:24.098567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.026 [2024-04-24 05:26:24.098597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:47.026 [2024-04-24 05:26:24.098615] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:47.026 [2024-04-24 05:26:24.098863] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:47.026 [2024-04-24 05:26:24.099107] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:47.026 [2024-04-24 05:26:24.099132] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:47.026 [2024-04-24 05:26:24.099147] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:47.026 [2024-04-24 05:26:24.102707] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:47.026 [2024-04-24 05:26:24.111935] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:47.026 [2024-04-24 05:26:24.112341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.026 [2024-04-24 05:26:24.112537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.026 [2024-04-24 05:26:24.112564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:47.026 [2024-04-24 05:26:24.112582] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:47.026 [2024-04-24 05:26:24.112831] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:47.026 [2024-04-24 05:26:24.113072] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:47.026 [2024-04-24 05:26:24.113096] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:47.026 [2024-04-24 05:26:24.113111] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:47.026 [2024-04-24 05:26:24.116671] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:47.026 [2024-04-24 05:26:24.125904] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:47.026 [2024-04-24 05:26:24.126309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.026 [2024-04-24 05:26:24.126482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.026 [2024-04-24 05:26:24.126513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:47.026 [2024-04-24 05:26:24.126532] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:47.026 [2024-04-24 05:26:24.126780] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:47.026 [2024-04-24 05:26:24.127024] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:47.026 [2024-04-24 05:26:24.127053] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:47.026 [2024-04-24 05:26:24.127071] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:47.026 [2024-04-24 05:26:24.130624] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:47.026 [2024-04-24 05:26:24.139860] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:47.026 [2024-04-24 05:26:24.140261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.026 [2024-04-24 05:26:24.140456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.026 [2024-04-24 05:26:24.140484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:47.026 [2024-04-24 05:26:24.140502] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:47.026 [2024-04-24 05:26:24.140780] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:47.026 [2024-04-24 05:26:24.141024] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:47.026 [2024-04-24 05:26:24.141048] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:47.026 [2024-04-24 05:26:24.141065] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:47.026 [2024-04-24 05:26:24.144620] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:47.026 [2024-04-24 05:26:24.153856] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:47.026 [2024-04-24 05:26:24.154289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.026 [2024-04-24 05:26:24.154482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.026 [2024-04-24 05:26:24.154512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:47.026 [2024-04-24 05:26:24.154530] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:47.026 [2024-04-24 05:26:24.154777] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:47.026 [2024-04-24 05:26:24.155019] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:47.026 [2024-04-24 05:26:24.155043] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:47.026 [2024-04-24 05:26:24.155058] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:47.026 [2024-04-24 05:26:24.158617] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:47.026 [2024-04-24 05:26:24.167857] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:47.026 [2024-04-24 05:26:24.168283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.026 [2024-04-24 05:26:24.168419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.026 [2024-04-24 05:26:24.168449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:47.026 [2024-04-24 05:26:24.168468] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:47.026 [2024-04-24 05:26:24.168715] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:47.026 [2024-04-24 05:26:24.168956] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:47.026 [2024-04-24 05:26:24.168979] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:47.026 [2024-04-24 05:26:24.169000] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:47.026 [2024-04-24 05:26:24.172558] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:47.026 [2024-04-24 05:26:24.181791] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:47.026 [2024-04-24 05:26:24.182219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.026 [2024-04-24 05:26:24.182414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.026 [2024-04-24 05:26:24.182442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:47.026 [2024-04-24 05:26:24.182460] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:47.026 [2024-04-24 05:26:24.182710] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:47.026 [2024-04-24 05:26:24.182952] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:47.026 [2024-04-24 05:26:24.182976] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:47.026 [2024-04-24 05:26:24.182992] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:47.026 [2024-04-24 05:26:24.186542] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:47.027 [2024-04-24 05:26:24.195780] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:47.027 [2024-04-24 05:26:24.196219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.027 [2024-04-24 05:26:24.196411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.027 [2024-04-24 05:26:24.196439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:47.027 [2024-04-24 05:26:24.196458] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:47.027 [2024-04-24 05:26:24.196705] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:47.027 [2024-04-24 05:26:24.196947] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:47.027 [2024-04-24 05:26:24.196971] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:47.027 [2024-04-24 05:26:24.196988] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:47.027 [2024-04-24 05:26:24.200540] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:47.027 [2024-04-24 05:26:24.209789] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:47.027 [2024-04-24 05:26:24.210189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.027 [2024-04-24 05:26:24.210356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.027 [2024-04-24 05:26:24.210386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:47.027 [2024-04-24 05:26:24.210403] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:47.027 [2024-04-24 05:26:24.210654] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:47.027 [2024-04-24 05:26:24.210895] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:47.027 [2024-04-24 05:26:24.210919] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:47.027 [2024-04-24 05:26:24.210936] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:47.027 [2024-04-24 05:26:24.214507] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:47.027 [2024-04-24 05:26:24.223753] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:47.027 [2024-04-24 05:26:24.224180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.027 [2024-04-24 05:26:24.224352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.027 [2024-04-24 05:26:24.224379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:47.027 [2024-04-24 05:26:24.224396] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:47.027 [2024-04-24 05:26:24.224644] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:47.027 [2024-04-24 05:26:24.224896] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:47.027 [2024-04-24 05:26:24.224920] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:47.027 [2024-04-24 05:26:24.224937] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:47.027 [2024-04-24 05:26:24.228490] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:47.027 [2024-04-24 05:26:24.237738] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:47.027 [2024-04-24 05:26:24.238161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.027 [2024-04-24 05:26:24.238329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.027 [2024-04-24 05:26:24.238358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:47.027 [2024-04-24 05:26:24.238376] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:47.027 [2024-04-24 05:26:24.238613] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:47.027 [2024-04-24 05:26:24.238864] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:47.027 [2024-04-24 05:26:24.238888] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:47.027 [2024-04-24 05:26:24.238903] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:47.027 [2024-04-24 05:26:24.242455] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:47.027 [2024-04-24 05:26:24.251695] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:47.027 [2024-04-24 05:26:24.252116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.027 [2024-04-24 05:26:24.252285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.027 [2024-04-24 05:26:24.252314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:47.027 [2024-04-24 05:26:24.252332] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:47.027 [2024-04-24 05:26:24.252568] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:47.027 [2024-04-24 05:26:24.252818] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:47.027 [2024-04-24 05:26:24.252842] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:47.027 [2024-04-24 05:26:24.252859] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:47.027 [2024-04-24 05:26:24.256417] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:47.027 [2024-04-24 05:26:24.265656] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:47.027 [2024-04-24 05:26:24.266085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.027 [2024-04-24 05:26:24.266279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.027 [2024-04-24 05:26:24.266307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:47.027 [2024-04-24 05:26:24.266325] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:47.027 [2024-04-24 05:26:24.266561] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:47.027 [2024-04-24 05:26:24.266811] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:47.027 [2024-04-24 05:26:24.266835] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:47.027 [2024-04-24 05:26:24.266852] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:47.027 [2024-04-24 05:26:24.270402] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:47.027 [2024-04-24 05:26:24.279641] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:47.027 [2024-04-24 05:26:24.280072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.027 [2024-04-24 05:26:24.280248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.027 [2024-04-24 05:26:24.280276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:47.027 [2024-04-24 05:26:24.280293] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:47.027 [2024-04-24 05:26:24.280530] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:47.027 [2024-04-24 05:26:24.280780] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:47.027 [2024-04-24 05:26:24.280805] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:47.027 [2024-04-24 05:26:24.280821] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:47.027 [2024-04-24 05:26:24.284375] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:47.027 [2024-04-24 05:26:24.293652] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.027 [2024-04-24 05:26:24.294063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.027 [2024-04-24 05:26:24.294263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.027 [2024-04-24 05:26:24.294291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:47.027 [2024-04-24 05:26:24.294309] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:47.027 [2024-04-24 05:26:24.294546] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:47.027 [2024-04-24 05:26:24.294798] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.027 [2024-04-24 05:26:24.294823] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.027 [2024-04-24 05:26:24.294839] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.286 [2024-04-24 05:26:24.298393] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.286 [2024-04-24 05:26:24.307634] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.286 [2024-04-24 05:26:24.308058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.286 [2024-04-24 05:26:24.308237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.286 [2024-04-24 05:26:24.308266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:47.286 [2024-04-24 05:26:24.308283] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:47.286 [2024-04-24 05:26:24.308520] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:47.286 [2024-04-24 05:26:24.308773] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.286 [2024-04-24 05:26:24.308798] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.286 [2024-04-24 05:26:24.308814] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.286 [2024-04-24 05:26:24.312365] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.286 [2024-04-24 05:26:24.321614] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.286 [2024-04-24 05:26:24.322047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.286 [2024-04-24 05:26:24.322207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.286 [2024-04-24 05:26:24.322235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:47.286 [2024-04-24 05:26:24.322253] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:47.286 [2024-04-24 05:26:24.322489] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:47.286 [2024-04-24 05:26:24.322742] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.286 [2024-04-24 05:26:24.322767] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.286 [2024-04-24 05:26:24.322783] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.286 [2024-04-24 05:26:24.326337] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.286 [2024-04-24 05:26:24.335571] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.286 [2024-04-24 05:26:24.336003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.286 [2024-04-24 05:26:24.336183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.286 [2024-04-24 05:26:24.336213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:47.286 [2024-04-24 05:26:24.336232] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:47.286 [2024-04-24 05:26:24.336470] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:47.286 [2024-04-24 05:26:24.336724] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.286 [2024-04-24 05:26:24.336748] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.286 [2024-04-24 05:26:24.336765] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.286 [2024-04-24 05:26:24.340321] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.286 [2024-04-24 05:26:24.349555] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.286 [2024-04-24 05:26:24.349975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.286 [2024-04-24 05:26:24.350147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.286 [2024-04-24 05:26:24.350182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:47.286 [2024-04-24 05:26:24.350201] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:47.286 [2024-04-24 05:26:24.350439] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:47.287 [2024-04-24 05:26:24.350693] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.287 [2024-04-24 05:26:24.350717] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.287 [2024-04-24 05:26:24.350732] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.287 [2024-04-24 05:26:24.354288] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.287 [2024-04-24 05:26:24.363541] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.287 [2024-04-24 05:26:24.363973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.287 [2024-04-24 05:26:24.364141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.287 [2024-04-24 05:26:24.364170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:47.287 [2024-04-24 05:26:24.364189] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:47.287 [2024-04-24 05:26:24.364426] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:47.287 [2024-04-24 05:26:24.364678] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.287 [2024-04-24 05:26:24.364703] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.287 [2024-04-24 05:26:24.364718] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.287 [2024-04-24 05:26:24.368275] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.287 [2024-04-24 05:26:24.377543] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.287 [2024-04-24 05:26:24.377964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.287 [2024-04-24 05:26:24.378138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.287 [2024-04-24 05:26:24.378167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:47.287 [2024-04-24 05:26:24.378185] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:47.287 [2024-04-24 05:26:24.378422] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:47.287 [2024-04-24 05:26:24.378674] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.287 [2024-04-24 05:26:24.378698] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.287 [2024-04-24 05:26:24.378714] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.287 [2024-04-24 05:26:24.382271] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.287 [2024-04-24 05:26:24.391511] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.287 [2024-04-24 05:26:24.391899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.287 [2024-04-24 05:26:24.392072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.287 [2024-04-24 05:26:24.392101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:47.287 [2024-04-24 05:26:24.392123] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:47.287 [2024-04-24 05:26:24.392361] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:47.287 [2024-04-24 05:26:24.392602] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.287 [2024-04-24 05:26:24.392625] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.287 [2024-04-24 05:26:24.392653] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.287 [2024-04-24 05:26:24.396261] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.287 [2024-04-24 05:26:24.405526] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.287 [2024-04-24 05:26:24.405960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.287 [2024-04-24 05:26:24.406133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.287 [2024-04-24 05:26:24.406162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:47.287 [2024-04-24 05:26:24.406180] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:47.287 [2024-04-24 05:26:24.406418] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:47.287 [2024-04-24 05:26:24.406670] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.287 [2024-04-24 05:26:24.406695] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.287 [2024-04-24 05:26:24.406710] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.287 [2024-04-24 05:26:24.410268] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.287 [2024-04-24 05:26:24.419511] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.287 [2024-04-24 05:26:24.419945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.287 [2024-04-24 05:26:24.420139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.287 [2024-04-24 05:26:24.420168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:47.287 [2024-04-24 05:26:24.420186] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:47.287 [2024-04-24 05:26:24.420422] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:47.287 [2024-04-24 05:26:24.420674] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.287 [2024-04-24 05:26:24.420699] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.287 [2024-04-24 05:26:24.420714] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.287 [2024-04-24 05:26:24.424273] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.287 [2024-04-24 05:26:24.433522] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.287 [2024-04-24 05:26:24.433935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.287 [2024-04-24 05:26:24.434103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.287 [2024-04-24 05:26:24.434131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:47.287 [2024-04-24 05:26:24.434149] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:47.287 [2024-04-24 05:26:24.434391] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:47.287 [2024-04-24 05:26:24.434644] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.287 [2024-04-24 05:26:24.434669] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.287 [2024-04-24 05:26:24.434685] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.287 [2024-04-24 05:26:24.438238] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.287 [2024-04-24 05:26:24.447479] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.287 [2024-04-24 05:26:24.447904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.287 [2024-04-24 05:26:24.448084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.287 [2024-04-24 05:26:24.448113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:47.287 [2024-04-24 05:26:24.448131] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:47.287 [2024-04-24 05:26:24.448368] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:47.287 [2024-04-24 05:26:24.448610] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.287 [2024-04-24 05:26:24.448645] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.287 [2024-04-24 05:26:24.448663] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.287 [2024-04-24 05:26:24.452219] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.287 [2024-04-24 05:26:24.461471] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.287 [2024-04-24 05:26:24.461907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.287 [2024-04-24 05:26:24.462072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.287 [2024-04-24 05:26:24.462101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:47.287 [2024-04-24 05:26:24.462119] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:47.287 [2024-04-24 05:26:24.462357] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:47.287 [2024-04-24 05:26:24.462598] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.287 [2024-04-24 05:26:24.462621] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.287 [2024-04-24 05:26:24.462648] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.287 [2024-04-24 05:26:24.466204] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.287 [2024-04-24 05:26:24.475440] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.287 [2024-04-24 05:26:24.475886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.287 [2024-04-24 05:26:24.476027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.287 [2024-04-24 05:26:24.476056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:47.287 [2024-04-24 05:26:24.476074] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:47.287 [2024-04-24 05:26:24.476310] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:47.287 [2024-04-24 05:26:24.476557] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.287 [2024-04-24 05:26:24.476581] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.287 [2024-04-24 05:26:24.476597] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.287 [2024-04-24 05:26:24.480163] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.288 [2024-04-24 05:26:24.489393] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.288 [2024-04-24 05:26:24.489809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.288 [2024-04-24 05:26:24.490004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.288 [2024-04-24 05:26:24.490033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:47.288 [2024-04-24 05:26:24.490050] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:47.288 [2024-04-24 05:26:24.490288] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:47.288 [2024-04-24 05:26:24.490530] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.288 [2024-04-24 05:26:24.490555] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.288 [2024-04-24 05:26:24.490572] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.288 [2024-04-24 05:26:24.494134] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.288 [2024-04-24 05:26:24.503386] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.288 [2024-04-24 05:26:24.503804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.288 [2024-04-24 05:26:24.504002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.288 [2024-04-24 05:26:24.504030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:47.288 [2024-04-24 05:26:24.504048] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:47.288 [2024-04-24 05:26:24.504285] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:47.288 [2024-04-24 05:26:24.504527] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.288 [2024-04-24 05:26:24.504552] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.288 [2024-04-24 05:26:24.504568] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.288 [2024-04-24 05:26:24.508130] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.288 [2024-04-24 05:26:24.517364] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.288 [2024-04-24 05:26:24.517783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.288 [2024-04-24 05:26:24.517954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.288 [2024-04-24 05:26:24.517982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:47.288 [2024-04-24 05:26:24.517999] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:47.288 [2024-04-24 05:26:24.518236] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:47.288 [2024-04-24 05:26:24.518479] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.288 [2024-04-24 05:26:24.518509] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.288 [2024-04-24 05:26:24.518526] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.288 [2024-04-24 05:26:24.522088] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.288 [2024-04-24 05:26:24.531332] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.288 [2024-04-24 05:26:24.531778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.288 [2024-04-24 05:26:24.531928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.288 [2024-04-24 05:26:24.531956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:47.288 [2024-04-24 05:26:24.531974] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:47.288 [2024-04-24 05:26:24.532211] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:47.288 [2024-04-24 05:26:24.532452] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.288 [2024-04-24 05:26:24.532476] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.288 [2024-04-24 05:26:24.532493] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.288 [2024-04-24 05:26:24.536060] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.288 [2024-04-24 05:26:24.545293] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.288 [2024-04-24 05:26:24.545699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.288 [2024-04-24 05:26:24.545875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.288 [2024-04-24 05:26:24.545902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:47.288 [2024-04-24 05:26:24.545920] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:47.288 [2024-04-24 05:26:24.546158] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:47.288 [2024-04-24 05:26:24.546400] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.288 [2024-04-24 05:26:24.546424] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.288 [2024-04-24 05:26:24.546440] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.288 [2024-04-24 05:26:24.550005] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.547 [2024-04-24 05:26:24.559243] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.547 [2024-04-24 05:26:24.559678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.547 [2024-04-24 05:26:24.559827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.547 [2024-04-24 05:26:24.559856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:47.547 [2024-04-24 05:26:24.559874] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:47.547 [2024-04-24 05:26:24.560113] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:47.547 [2024-04-24 05:26:24.560355] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.547 [2024-04-24 05:26:24.560380] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.547 [2024-04-24 05:26:24.560405] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.547 [2024-04-24 05:26:24.563971] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.547 [2024-04-24 05:26:24.573199] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.547 [2024-04-24 05:26:24.573602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.547 [2024-04-24 05:26:24.573759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.547 [2024-04-24 05:26:24.573790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:47.547 [2024-04-24 05:26:24.573808] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:47.547 [2024-04-24 05:26:24.574046] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:47.547 [2024-04-24 05:26:24.574289] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.547 [2024-04-24 05:26:24.574313] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.547 [2024-04-24 05:26:24.574330] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.547 [2024-04-24 05:26:24.577893] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.547 [2024-04-24 05:26:24.587126] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.547 [2024-04-24 05:26:24.587554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.547 [2024-04-24 05:26:24.587759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.547 [2024-04-24 05:26:24.587788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:47.547 [2024-04-24 05:26:24.587806] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:47.547 [2024-04-24 05:26:24.588044] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:47.547 [2024-04-24 05:26:24.588287] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.547 [2024-04-24 05:26:24.588311] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.547 [2024-04-24 05:26:24.588328] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.547 [2024-04-24 05:26:24.591890] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.547 [2024-04-24 05:26:24.601127] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.547 [2024-04-24 05:26:24.601519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.547 [2024-04-24 05:26:24.601730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.547 [2024-04-24 05:26:24.601759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:47.547 [2024-04-24 05:26:24.601777] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:47.547 [2024-04-24 05:26:24.602016] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:47.547 [2024-04-24 05:26:24.602259] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.547 [2024-04-24 05:26:24.602283] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.547 [2024-04-24 05:26:24.602299] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.547 [2024-04-24 05:26:24.605871] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.547 [2024-04-24 05:26:24.615104] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.547 [2024-04-24 05:26:24.615530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.547 [2024-04-24 05:26:24.615711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.547 [2024-04-24 05:26:24.615740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:47.547 [2024-04-24 05:26:24.615758] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:47.547 [2024-04-24 05:26:24.615995] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:47.547 [2024-04-24 05:26:24.616236] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.547 [2024-04-24 05:26:24.616261] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.547 [2024-04-24 05:26:24.616278] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.547 [2024-04-24 05:26:24.619839] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.547 [2024-04-24 05:26:24.629069] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.547 [2024-04-24 05:26:24.629500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.547 [2024-04-24 05:26:24.629676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.547 [2024-04-24 05:26:24.629706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:47.547 [2024-04-24 05:26:24.629724] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:47.547 [2024-04-24 05:26:24.629962] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:47.547 [2024-04-24 05:26:24.630204] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.547 [2024-04-24 05:26:24.630230] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.547 [2024-04-24 05:26:24.630246] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.547 [2024-04-24 05:26:24.633811] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.547 [2024-04-24 05:26:24.643043] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.547 [2024-04-24 05:26:24.643468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.547 [2024-04-24 05:26:24.643644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.548 [2024-04-24 05:26:24.643673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:47.548 [2024-04-24 05:26:24.643690] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:47.548 [2024-04-24 05:26:24.643926] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:47.548 [2024-04-24 05:26:24.644180] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.548 [2024-04-24 05:26:24.644205] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.548 [2024-04-24 05:26:24.644222] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.548 [2024-04-24 05:26:24.647786] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.548 [2024-04-24 05:26:24.657024] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.548 [2024-04-24 05:26:24.657434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.548 [2024-04-24 05:26:24.657634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.548 [2024-04-24 05:26:24.657663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:47.548 [2024-04-24 05:26:24.657681] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:47.548 [2024-04-24 05:26:24.657918] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:47.548 [2024-04-24 05:26:24.658161] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.548 [2024-04-24 05:26:24.658186] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.548 [2024-04-24 05:26:24.658202] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.548 [2024-04-24 05:26:24.661765] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.548 [2024-04-24 05:26:24.670995] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.548 [2024-04-24 05:26:24.671416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.548 [2024-04-24 05:26:24.671560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.548 [2024-04-24 05:26:24.671587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:47.548 [2024-04-24 05:26:24.671604] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:47.548 [2024-04-24 05:26:24.671852] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:47.548 [2024-04-24 05:26:24.672093] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.548 [2024-04-24 05:26:24.672118] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.548 [2024-04-24 05:26:24.672135] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.548 [2024-04-24 05:26:24.675948] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.548 [2024-04-24 05:26:24.684968] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.548 [2024-04-24 05:26:24.685368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.548 [2024-04-24 05:26:24.685565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.548 [2024-04-24 05:26:24.685592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:47.548 [2024-04-24 05:26:24.685610] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:47.548 [2024-04-24 05:26:24.685857] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:47.548 [2024-04-24 05:26:24.686098] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.548 [2024-04-24 05:26:24.686123] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.548 [2024-04-24 05:26:24.686139] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.548 [2024-04-24 05:26:24.689701] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.548 [2024-04-24 05:26:24.698928] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.548 [2024-04-24 05:26:24.699342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.548 [2024-04-24 05:26:24.699545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.548 [2024-04-24 05:26:24.699573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:47.548 [2024-04-24 05:26:24.699591] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:47.548 [2024-04-24 05:26:24.699839] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:47.548 [2024-04-24 05:26:24.700080] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.548 [2024-04-24 05:26:24.700104] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.548 [2024-04-24 05:26:24.700121] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.548 [2024-04-24 05:26:24.703681] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.548 [2024-04-24 05:26:24.712917] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.548 [2024-04-24 05:26:24.713347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.548 [2024-04-24 05:26:24.713539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.548 [2024-04-24 05:26:24.713567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:47.548 [2024-04-24 05:26:24.713585] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:47.548 [2024-04-24 05:26:24.713833] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:47.548 [2024-04-24 05:26:24.714075] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.548 [2024-04-24 05:26:24.714100] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.548 [2024-04-24 05:26:24.714116] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.548 [2024-04-24 05:26:24.717676] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.548 [2024-04-24 05:26:24.726904] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.548 [2024-04-24 05:26:24.727331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.548 [2024-04-24 05:26:24.727499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.548 [2024-04-24 05:26:24.727526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:47.548 [2024-04-24 05:26:24.727543] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:47.548 [2024-04-24 05:26:24.727792] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:47.548 [2024-04-24 05:26:24.728034] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.548 [2024-04-24 05:26:24.728059] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.548 [2024-04-24 05:26:24.728075] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.548 [2024-04-24 05:26:24.731633] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.548 [2024-04-24 05:26:24.740869] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.548 [2024-04-24 05:26:24.741432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.548 [2024-04-24 05:26:24.741663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.548 [2024-04-24 05:26:24.741698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:47.548 [2024-04-24 05:26:24.741717] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:47.548 [2024-04-24 05:26:24.741955] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:47.548 [2024-04-24 05:26:24.742196] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.548 [2024-04-24 05:26:24.742220] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.548 [2024-04-24 05:26:24.742236] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.548 [2024-04-24 05:26:24.745797] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.548 [2024-04-24 05:26:24.754815] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.548 [2024-04-24 05:26:24.755360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.548 [2024-04-24 05:26:24.755602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.548 [2024-04-24 05:26:24.755642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:47.548 [2024-04-24 05:26:24.755663] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:47.548 [2024-04-24 05:26:24.755901] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:47.548 [2024-04-24 05:26:24.756143] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.548 [2024-04-24 05:26:24.756168] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.548 [2024-04-24 05:26:24.756183] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.548 [2024-04-24 05:26:24.759744] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.548 [2024-04-24 05:26:24.768766] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.548 [2024-04-24 05:26:24.769193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.548 [2024-04-24 05:26:24.769473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.548 [2024-04-24 05:26:24.769525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:47.548 [2024-04-24 05:26:24.769544] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:47.548 [2024-04-24 05:26:24.769793] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:47.548 [2024-04-24 05:26:24.770036] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.548 [2024-04-24 05:26:24.770060] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.548 [2024-04-24 05:26:24.770076] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.548 [2024-04-24 05:26:24.773633] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.548 [2024-04-24 05:26:24.782686] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.548 [2024-04-24 05:26:24.783095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.548 [2024-04-24 05:26:24.783375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.548 [2024-04-24 05:26:24.783428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:47.548 [2024-04-24 05:26:24.783452] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:47.548 [2024-04-24 05:26:24.783701] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:47.548 [2024-04-24 05:26:24.783942] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.548 [2024-04-24 05:26:24.783966] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.548 [2024-04-24 05:26:24.783981] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.548 [2024-04-24 05:26:24.787533] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.548 [2024-04-24 05:26:24.796560] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.548 [2024-04-24 05:26:24.796991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.548 [2024-04-24 05:26:24.797188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.548 [2024-04-24 05:26:24.797252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:47.548 [2024-04-24 05:26:24.797271] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:47.548 [2024-04-24 05:26:24.797508] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:47.548 [2024-04-24 05:26:24.797760] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.548 [2024-04-24 05:26:24.797784] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.548 [2024-04-24 05:26:24.797800] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.548 [2024-04-24 05:26:24.801357] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.548 [2024-04-24 05:26:24.810387] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.548 [2024-04-24 05:26:24.810847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.548 [2024-04-24 05:26:24.811042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.548 [2024-04-24 05:26:24.811071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:47.548 [2024-04-24 05:26:24.811090] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:47.548 [2024-04-24 05:26:24.811327] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:47.548 [2024-04-24 05:26:24.811569] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.548 [2024-04-24 05:26:24.811593] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.548 [2024-04-24 05:26:24.811609] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.548 [2024-04-24 05:26:24.815167] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.807 [2024-04-24 05:26:24.824397] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.807 [2024-04-24 05:26:24.824847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.807 [2024-04-24 05:26:24.825007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.807 [2024-04-24 05:26:24.825035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:47.807 [2024-04-24 05:26:24.825053] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:47.807 [2024-04-24 05:26:24.825295] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:47.807 [2024-04-24 05:26:24.825537] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.807 [2024-04-24 05:26:24.825561] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.807 [2024-04-24 05:26:24.825577] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.807 [2024-04-24 05:26:24.829140] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.807 [2024-04-24 05:26:24.838370] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.807 [2024-04-24 05:26:24.838782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.807 [2024-04-24 05:26:24.839045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.807 [2024-04-24 05:26:24.839108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:47.807 [2024-04-24 05:26:24.839126] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:47.807 [2024-04-24 05:26:24.839363] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:47.807 [2024-04-24 05:26:24.839604] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.807 [2024-04-24 05:26:24.839637] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.807 [2024-04-24 05:26:24.839655] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.807 [2024-04-24 05:26:24.843219] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.807 [2024-04-24 05:26:24.852237] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.807 [2024-04-24 05:26:24.852661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.807 [2024-04-24 05:26:24.852869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.807 [2024-04-24 05:26:24.852899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:47.807 [2024-04-24 05:26:24.852918] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:47.807 [2024-04-24 05:26:24.853156] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:47.807 [2024-04-24 05:26:24.853399] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.807 [2024-04-24 05:26:24.853423] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.807 [2024-04-24 05:26:24.853439] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.807 [2024-04-24 05:26:24.857015] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.807 [2024-04-24 05:26:24.866263] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.807 [2024-04-24 05:26:24.866691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.807 [2024-04-24 05:26:24.866867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.807 [2024-04-24 05:26:24.866895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:47.807 [2024-04-24 05:26:24.866913] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:47.807 [2024-04-24 05:26:24.867150] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:47.807 [2024-04-24 05:26:24.867398] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.807 [2024-04-24 05:26:24.867423] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.807 [2024-04-24 05:26:24.867439] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.807 [2024-04-24 05:26:24.871002] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.807 [2024-04-24 05:26:24.880241] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:47.807 [2024-04-24 05:26:24.880782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.807 [2024-04-24 05:26:24.881120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.807 [2024-04-24 05:26:24.881178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:47.808 [2024-04-24 05:26:24.881196] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:47.808 [2024-04-24 05:26:24.881433] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:47.808 [2024-04-24 05:26:24.881685] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:47.808 [2024-04-24 05:26:24.881710] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:47.808 [2024-04-24 05:26:24.881726] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:47.808 [2024-04-24 05:26:24.885279] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:47.808 [2024-04-24 05:26:24.894099] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:47.808 [2024-04-24 05:26:24.894523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.808 [2024-04-24 05:26:24.894727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.808 [2024-04-24 05:26:24.894756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:47.808 [2024-04-24 05:26:24.894774] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:47.808 [2024-04-24 05:26:24.895012] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:47.808 [2024-04-24 05:26:24.895253] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:47.808 [2024-04-24 05:26:24.895277] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:47.808 [2024-04-24 05:26:24.895293] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:47.808 [2024-04-24 05:26:24.898858] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:47.808 [2024-04-24 05:26:24.908113] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:47.808 [2024-04-24 05:26:24.908676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.808 [2024-04-24 05:26:24.908868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.808 [2024-04-24 05:26:24.908897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:47.808 [2024-04-24 05:26:24.908915] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:47.808 [2024-04-24 05:26:24.909153] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:47.808 [2024-04-24 05:26:24.909396] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:47.808 [2024-04-24 05:26:24.909426] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:47.808 [2024-04-24 05:26:24.909443] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:47.808 [2024-04-24 05:26:24.913016] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:47.808 [2024-04-24 05:26:24.922043] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:47.808 [2024-04-24 05:26:24.922478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.808 [2024-04-24 05:26:24.922728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.808 [2024-04-24 05:26:24.922759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:47.808 [2024-04-24 05:26:24.922778] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:47.808 [2024-04-24 05:26:24.923017] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:47.808 [2024-04-24 05:26:24.923259] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:47.808 [2024-04-24 05:26:24.923284] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:47.808 [2024-04-24 05:26:24.923300] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:47.808 [2024-04-24 05:26:24.926865] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:47.808 [2024-04-24 05:26:24.935894] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:47.808 [2024-04-24 05:26:24.936331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.808 [2024-04-24 05:26:24.936501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.808 [2024-04-24 05:26:24.936529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:47.808 [2024-04-24 05:26:24.936547] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:47.808 [2024-04-24 05:26:24.936798] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:47.808 [2024-04-24 05:26:24.937040] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:47.808 [2024-04-24 05:26:24.937064] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:47.808 [2024-04-24 05:26:24.937080] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:47.808 [2024-04-24 05:26:24.940641] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:47.808 [2024-04-24 05:26:24.949877] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:47.808 [2024-04-24 05:26:24.950301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.808 [2024-04-24 05:26:24.950496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.808 [2024-04-24 05:26:24.950526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:47.808 [2024-04-24 05:26:24.950544] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:47.808 [2024-04-24 05:26:24.950794] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:47.808 [2024-04-24 05:26:24.951035] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:47.808 [2024-04-24 05:26:24.951059] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:47.808 [2024-04-24 05:26:24.951081] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:47.808 [2024-04-24 05:26:24.954650] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:47.808 [2024-04-24 05:26:24.963891] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:47.808 [2024-04-24 05:26:24.964290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.808 [2024-04-24 05:26:24.964483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.808 [2024-04-24 05:26:24.964511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:47.808 [2024-04-24 05:26:24.964529] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:47.808 [2024-04-24 05:26:24.964780] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:47.808 [2024-04-24 05:26:24.965022] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:47.808 [2024-04-24 05:26:24.965046] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:47.808 [2024-04-24 05:26:24.965062] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:47.808 [2024-04-24 05:26:24.968616] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:47.808 [2024-04-24 05:26:24.977852] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:47.808 [2024-04-24 05:26:24.978349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.808 [2024-04-24 05:26:24.978548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.808 [2024-04-24 05:26:24.978576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:47.808 [2024-04-24 05:26:24.978593] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:47.808 [2024-04-24 05:26:24.978843] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:47.808 [2024-04-24 05:26:24.979087] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:47.808 [2024-04-24 05:26:24.979112] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:47.808 [2024-04-24 05:26:24.979129] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:47.808 [2024-04-24 05:26:24.982690] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:47.808 [2024-04-24 05:26:24.991724] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:47.808 [2024-04-24 05:26:24.992278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.808 [2024-04-24 05:26:24.992577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.808 [2024-04-24 05:26:24.992606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:47.808 [2024-04-24 05:26:24.992625] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:47.808 [2024-04-24 05:26:24.992877] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:47.808 [2024-04-24 05:26:24.993119] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:47.808 [2024-04-24 05:26:24.993143] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:47.808 [2024-04-24 05:26:24.993160] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:47.808 [2024-04-24 05:26:24.996728] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:47.808 [2024-04-24 05:26:25.005533] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:47.808 [2024-04-24 05:26:25.006063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.808 [2024-04-24 05:26:25.006408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.808 [2024-04-24 05:26:25.006458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:47.808 [2024-04-24 05:26:25.006476] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:47.808 [2024-04-24 05:26:25.006727] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:47.808 [2024-04-24 05:26:25.006968] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:47.808 [2024-04-24 05:26:25.006993] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:47.808 [2024-04-24 05:26:25.007009] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:47.808 [2024-04-24 05:26:25.010563] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:47.808 [2024-04-24 05:26:25.019383] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:47.808 [2024-04-24 05:26:25.019804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.808 [2024-04-24 05:26:25.020012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.808 [2024-04-24 05:26:25.020042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:47.808 [2024-04-24 05:26:25.020060] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:47.808 [2024-04-24 05:26:25.020298] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:47.808 [2024-04-24 05:26:25.020540] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:47.808 [2024-04-24 05:26:25.020566] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:47.808 [2024-04-24 05:26:25.020581] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:47.808 [2024-04-24 05:26:25.024149] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:47.808 [2024-04-24 05:26:25.033385] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:47.808 [2024-04-24 05:26:25.033826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.808 [2024-04-24 05:26:25.034020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.808 [2024-04-24 05:26:25.034050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:47.808 [2024-04-24 05:26:25.034068] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:47.808 [2024-04-24 05:26:25.034306] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:47.808 [2024-04-24 05:26:25.034550] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:47.808 [2024-04-24 05:26:25.034574] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:47.808 [2024-04-24 05:26:25.034590] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:47.808 [2024-04-24 05:26:25.038185] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:47.808 [2024-04-24 05:26:25.047212] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:47.808 [2024-04-24 05:26:25.047618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.808 [2024-04-24 05:26:25.047769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.808 [2024-04-24 05:26:25.047797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:47.808 [2024-04-24 05:26:25.047815] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:47.808 [2024-04-24 05:26:25.048052] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:47.808 [2024-04-24 05:26:25.048293] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:47.808 [2024-04-24 05:26:25.048317] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:47.808 [2024-04-24 05:26:25.048333] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:47.808 [2024-04-24 05:26:25.051900] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:47.808 [2024-04-24 05:26:25.061141] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:47.808 [2024-04-24 05:26:25.061566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.808 [2024-04-24 05:26:25.061738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.808 [2024-04-24 05:26:25.061768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:47.808 [2024-04-24 05:26:25.061786] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:47.808 [2024-04-24 05:26:25.062024] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:47.808 [2024-04-24 05:26:25.062267] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:47.808 [2024-04-24 05:26:25.062291] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:47.808 [2024-04-24 05:26:25.062307] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:47.808 [2024-04-24 05:26:25.065871] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:47.808 [2024-04-24 05:26:25.075108] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:47.808 [2024-04-24 05:26:25.075572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.808 [2024-04-24 05:26:25.075750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.808 [2024-04-24 05:26:25.075779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:47.808 [2024-04-24 05:26:25.075797] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:47.808 [2024-04-24 05:26:25.076035] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:47.808 [2024-04-24 05:26:25.076277] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:47.808 [2024-04-24 05:26:25.076301] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:47.808 [2024-04-24 05:26:25.076316] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.067 [2024-04-24 05:26:25.079880] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.067 [2024-04-24 05:26:25.089120] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.067 [2024-04-24 05:26:25.089517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.067 [2024-04-24 05:26:25.089780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.067 [2024-04-24 05:26:25.089841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:48.067 [2024-04-24 05:26:25.089860] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:48.067 [2024-04-24 05:26:25.090098] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:48.067 [2024-04-24 05:26:25.090341] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.067 [2024-04-24 05:26:25.090366] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.067 [2024-04-24 05:26:25.090382] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.067 [2024-04-24 05:26:25.093946] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.067 [2024-04-24 05:26:25.102979] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.067 [2024-04-24 05:26:25.103412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.067 [2024-04-24 05:26:25.103577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.067 [2024-04-24 05:26:25.103605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:48.067 [2024-04-24 05:26:25.103622] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:48.067 [2024-04-24 05:26:25.103872] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:48.067 [2024-04-24 05:26:25.104112] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.067 [2024-04-24 05:26:25.104137] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.067 [2024-04-24 05:26:25.104153] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.067 [2024-04-24 05:26:25.107712] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.067 [2024-04-24 05:26:25.116943] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.067 [2024-04-24 05:26:25.117354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.067 [2024-04-24 05:26:25.117525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.067 [2024-04-24 05:26:25.117553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:48.068 [2024-04-24 05:26:25.117570] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:48.068 [2024-04-24 05:26:25.117822] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:48.068 [2024-04-24 05:26:25.118063] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.068 [2024-04-24 05:26:25.118087] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.068 [2024-04-24 05:26:25.118103] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.068 [2024-04-24 05:26:25.121667] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.068 [2024-04-24 05:26:25.130904] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.068 [2024-04-24 05:26:25.131327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.068 [2024-04-24 05:26:25.131495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.068 [2024-04-24 05:26:25.131528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:48.068 [2024-04-24 05:26:25.131546] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:48.068 [2024-04-24 05:26:25.131797] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:48.068 [2024-04-24 05:26:25.132039] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.068 [2024-04-24 05:26:25.132063] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.068 [2024-04-24 05:26:25.132079] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.068 [2024-04-24 05:26:25.135640] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.068 [2024-04-24 05:26:25.144876] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.068 [2024-04-24 05:26:25.145439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.068 [2024-04-24 05:26:25.145650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.068 [2024-04-24 05:26:25.145680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:48.068 [2024-04-24 05:26:25.145697] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:48.068 [2024-04-24 05:26:25.145936] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:48.068 [2024-04-24 05:26:25.146177] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.068 [2024-04-24 05:26:25.146201] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.068 [2024-04-24 05:26:25.146217] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.068 [2024-04-24 05:26:25.149783] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.068 [2024-04-24 05:26:25.158822] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.068 [2024-04-24 05:26:25.159326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.068 [2024-04-24 05:26:25.159568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.068 [2024-04-24 05:26:25.159615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:48.068 [2024-04-24 05:26:25.159645] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:48.068 [2024-04-24 05:26:25.159886] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:48.068 [2024-04-24 05:26:25.160126] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.068 [2024-04-24 05:26:25.160151] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.068 [2024-04-24 05:26:25.160167] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.068 [2024-04-24 05:26:25.163729] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.068 [2024-04-24 05:26:25.172767] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.068 [2024-04-24 05:26:25.173195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.068 [2024-04-24 05:26:25.173409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.068 [2024-04-24 05:26:25.173439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:48.068 [2024-04-24 05:26:25.173463] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:48.068 [2024-04-24 05:26:25.173714] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:48.068 [2024-04-24 05:26:25.173956] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.068 [2024-04-24 05:26:25.173980] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.068 [2024-04-24 05:26:25.173996] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.068 [2024-04-24 05:26:25.177551] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.068 [2024-04-24 05:26:25.186581] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.068 [2024-04-24 05:26:25.187087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.068 [2024-04-24 05:26:25.187349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.068 [2024-04-24 05:26:25.187379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:48.068 [2024-04-24 05:26:25.187397] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:48.068 [2024-04-24 05:26:25.187646] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:48.068 [2024-04-24 05:26:25.187890] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.068 [2024-04-24 05:26:25.187914] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.068 [2024-04-24 05:26:25.187929] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.068 [2024-04-24 05:26:25.191486] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.068 [2024-04-24 05:26:25.200516] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.068 [2024-04-24 05:26:25.200930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.068 [2024-04-24 05:26:25.201126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.068 [2024-04-24 05:26:25.201174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:48.068 [2024-04-24 05:26:25.201193] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:48.068 [2024-04-24 05:26:25.201430] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:48.068 [2024-04-24 05:26:25.201687] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.068 [2024-04-24 05:26:25.201712] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.068 [2024-04-24 05:26:25.201729] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.068 [2024-04-24 05:26:25.205282] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.068 [2024-04-24 05:26:25.214535] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.068 [2024-04-24 05:26:25.214970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.068 [2024-04-24 05:26:25.215164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.068 [2024-04-24 05:26:25.215192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:48.068 [2024-04-24 05:26:25.215209] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:48.068 [2024-04-24 05:26:25.215454] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:48.068 [2024-04-24 05:26:25.215711] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.068 [2024-04-24 05:26:25.215736] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.068 [2024-04-24 05:26:25.215751] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.068 [2024-04-24 05:26:25.219306] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.068 [2024-04-24 05:26:25.228541] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.068 [2024-04-24 05:26:25.228967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.068 [2024-04-24 05:26:25.229165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.068 [2024-04-24 05:26:25.229193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:48.068 [2024-04-24 05:26:25.229212] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:48.068 [2024-04-24 05:26:25.229450] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:48.068 [2024-04-24 05:26:25.229705] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.068 [2024-04-24 05:26:25.229730] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.069 [2024-04-24 05:26:25.229745] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.069 [2024-04-24 05:26:25.233301] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.069 [2024-04-24 05:26:25.242535] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.069 [2024-04-24 05:26:25.242957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.069 [2024-04-24 05:26:25.243183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.069 [2024-04-24 05:26:25.243231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:48.069 [2024-04-24 05:26:25.243250] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:48.069 [2024-04-24 05:26:25.243488] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:48.069 [2024-04-24 05:26:25.243742] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.069 [2024-04-24 05:26:25.243767] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.069 [2024-04-24 05:26:25.243782] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.069 [2024-04-24 05:26:25.247339] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.069 [2024-04-24 05:26:25.256374] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.069 [2024-04-24 05:26:25.256822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.069 [2024-04-24 05:26:25.256996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.069 [2024-04-24 05:26:25.257024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:48.069 [2024-04-24 05:26:25.257042] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:48.069 [2024-04-24 05:26:25.257280] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:48.069 [2024-04-24 05:26:25.257528] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.069 [2024-04-24 05:26:25.257553] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.069 [2024-04-24 05:26:25.257569] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.069 [2024-04-24 05:26:25.261137] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.069 [2024-04-24 05:26:25.270390] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.069 [2024-04-24 05:26:25.270830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.069 [2024-04-24 05:26:25.271026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.069 [2024-04-24 05:26:25.271080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:48.069 [2024-04-24 05:26:25.271100] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:48.069 [2024-04-24 05:26:25.271338] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:48.069 [2024-04-24 05:26:25.271581] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.069 [2024-04-24 05:26:25.271605] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.069 [2024-04-24 05:26:25.271622] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.069 [2024-04-24 05:26:25.275196] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.069 [2024-04-24 05:26:25.284232] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.069 [2024-04-24 05:26:25.284670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.069 [2024-04-24 05:26:25.284872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.069 [2024-04-24 05:26:25.284900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:48.069 [2024-04-24 05:26:25.284918] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:48.069 [2024-04-24 05:26:25.285157] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:48.069 [2024-04-24 05:26:25.285400] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.069 [2024-04-24 05:26:25.285424] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.069 [2024-04-24 05:26:25.285440] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.069 [2024-04-24 05:26:25.289008] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.069 [2024-04-24 05:26:25.298242] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.069 [2024-04-24 05:26:25.298668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.069 [2024-04-24 05:26:25.298915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.069 [2024-04-24 05:26:25.298944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:48.069 [2024-04-24 05:26:25.298962] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:48.069 [2024-04-24 05:26:25.299201] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:48.069 [2024-04-24 05:26:25.299442] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.069 [2024-04-24 05:26:25.299472] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.069 [2024-04-24 05:26:25.299488] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.069 [2024-04-24 05:26:25.303058] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.069 [2024-04-24 05:26:25.312099] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.069 [2024-04-24 05:26:25.312526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.069 [2024-04-24 05:26:25.312725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.069 [2024-04-24 05:26:25.312756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:48.069 [2024-04-24 05:26:25.312776] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:48.069 [2024-04-24 05:26:25.313014] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:48.069 [2024-04-24 05:26:25.313257] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.069 [2024-04-24 05:26:25.313280] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.069 [2024-04-24 05:26:25.313296] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.069 [2024-04-24 05:26:25.316858] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.069 [2024-04-24 05:26:25.326089] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.069 [2024-04-24 05:26:25.326659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.069 [2024-04-24 05:26:25.326884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.069 [2024-04-24 05:26:25.326913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:48.069 [2024-04-24 05:26:25.326931] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:48.069 [2024-04-24 05:26:25.327168] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:48.069 [2024-04-24 05:26:25.327409] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.069 [2024-04-24 05:26:25.327433] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.069 [2024-04-24 05:26:25.327449] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.069 [2024-04-24 05:26:25.331007] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.329 [2024-04-24 05:26:25.340030] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.329 [2024-04-24 05:26:25.340459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.329 [2024-04-24 05:26:25.340653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.329 [2024-04-24 05:26:25.340683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:48.329 [2024-04-24 05:26:25.340702] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:48.329 [2024-04-24 05:26:25.340939] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:48.329 [2024-04-24 05:26:25.341180] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.329 [2024-04-24 05:26:25.341204] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.329 [2024-04-24 05:26:25.341226] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.329 [2024-04-24 05:26:25.344789] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.329 [2024-04-24 05:26:25.354033] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.329 [2024-04-24 05:26:25.354521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.329 [2024-04-24 05:26:25.354721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.329 [2024-04-24 05:26:25.354750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:48.329 [2024-04-24 05:26:25.354768] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:48.329 [2024-04-24 05:26:25.355005] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:48.329 [2024-04-24 05:26:25.355247] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.329 [2024-04-24 05:26:25.355270] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.329 [2024-04-24 05:26:25.355286] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.329 [2024-04-24 05:26:25.358851] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.329 [2024-04-24 05:26:25.367910] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.329 [2024-04-24 05:26:25.368389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.329 [2024-04-24 05:26:25.368529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.329 [2024-04-24 05:26:25.368557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:48.329 [2024-04-24 05:26:25.368575] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:48.329 [2024-04-24 05:26:25.368823] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:48.329 [2024-04-24 05:26:25.369064] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.329 [2024-04-24 05:26:25.369087] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.329 [2024-04-24 05:26:25.369103] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.329 [2024-04-24 05:26:25.372665] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.329 [2024-04-24 05:26:25.381920] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.329 [2024-04-24 05:26:25.382377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.329 [2024-04-24 05:26:25.382549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.329 [2024-04-24 05:26:25.382577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:48.329 [2024-04-24 05:26:25.382596] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:48.329 [2024-04-24 05:26:25.382843] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:48.329 [2024-04-24 05:26:25.383085] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.329 [2024-04-24 05:26:25.383109] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.329 [2024-04-24 05:26:25.383126] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.329 [2024-04-24 05:26:25.386695] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.329 [2024-04-24 05:26:25.395936] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.329 [2024-04-24 05:26:25.396360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.329 [2024-04-24 05:26:25.396528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.329 [2024-04-24 05:26:25.396557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:48.329 [2024-04-24 05:26:25.396575] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:48.329 [2024-04-24 05:26:25.396823] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:48.329 [2024-04-24 05:26:25.397065] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.329 [2024-04-24 05:26:25.397088] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.329 [2024-04-24 05:26:25.397104] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.329 [2024-04-24 05:26:25.400661] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.329 [2024-04-24 05:26:25.409889] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.329 [2024-04-24 05:26:25.410323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.329 [2024-04-24 05:26:25.410515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.329 [2024-04-24 05:26:25.410544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:48.329 [2024-04-24 05:26:25.410562] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:48.329 [2024-04-24 05:26:25.410808] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:48.329 [2024-04-24 05:26:25.411050] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.329 [2024-04-24 05:26:25.411074] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.329 [2024-04-24 05:26:25.411090] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.329 [2024-04-24 05:26:25.414649] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.329 [2024-04-24 05:26:25.423900] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.329 [2024-04-24 05:26:25.424308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.329 [2024-04-24 05:26:25.424549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.329 [2024-04-24 05:26:25.424599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:48.329 [2024-04-24 05:26:25.424618] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:48.329 [2024-04-24 05:26:25.424865] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:48.329 [2024-04-24 05:26:25.425106] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.329 [2024-04-24 05:26:25.425131] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.329 [2024-04-24 05:26:25.425147] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.329 [2024-04-24 05:26:25.428712] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.329 [2024-04-24 05:26:25.437749] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.329 [2024-04-24 05:26:25.438177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.329 [2024-04-24 05:26:25.438344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.329 [2024-04-24 05:26:25.438371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:48.330 [2024-04-24 05:26:25.438389] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:48.330 [2024-04-24 05:26:25.438625] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:48.330 [2024-04-24 05:26:25.438878] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.330 [2024-04-24 05:26:25.438902] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.330 [2024-04-24 05:26:25.438918] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.330 [2024-04-24 05:26:25.442468] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.330 [2024-04-24 05:26:25.451729] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.330 [2024-04-24 05:26:25.452152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.330 [2024-04-24 05:26:25.452344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.330 [2024-04-24 05:26:25.452392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:48.330 [2024-04-24 05:26:25.452411] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:48.330 [2024-04-24 05:26:25.452666] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:48.330 [2024-04-24 05:26:25.452908] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.330 [2024-04-24 05:26:25.452932] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.330 [2024-04-24 05:26:25.452949] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.330 [2024-04-24 05:26:25.456503] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.330 [2024-04-24 05:26:25.465750] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.330 [2024-04-24 05:26:25.466186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.330 [2024-04-24 05:26:25.466431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.330 [2024-04-24 05:26:25.466460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:48.330 [2024-04-24 05:26:25.466478] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:48.330 [2024-04-24 05:26:25.466725] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:48.330 [2024-04-24 05:26:25.466967] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.330 [2024-04-24 05:26:25.466991] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.330 [2024-04-24 05:26:25.467008] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.330 [2024-04-24 05:26:25.470560] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.330 [2024-04-24 05:26:25.479591] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.330 [2024-04-24 05:26:25.480020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.330 [2024-04-24 05:26:25.480222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.330 [2024-04-24 05:26:25.480251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:48.330 [2024-04-24 05:26:25.480269] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:48.330 [2024-04-24 05:26:25.480507] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:48.330 [2024-04-24 05:26:25.480764] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.330 [2024-04-24 05:26:25.480789] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.330 [2024-04-24 05:26:25.480806] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.330 [2024-04-24 05:26:25.484363] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.330 [2024-04-24 05:26:25.493614] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.330 [2024-04-24 05:26:25.494021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.330 [2024-04-24 05:26:25.494198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.330 [2024-04-24 05:26:25.494226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:48.330 [2024-04-24 05:26:25.494244] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:48.330 [2024-04-24 05:26:25.494481] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:48.330 [2024-04-24 05:26:25.494734] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.330 [2024-04-24 05:26:25.494758] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.330 [2024-04-24 05:26:25.494774] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.330 [2024-04-24 05:26:25.498325] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.330 [2024-04-24 05:26:25.507567] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.330 [2024-04-24 05:26:25.507978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.330 [2024-04-24 05:26:25.508225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.330 [2024-04-24 05:26:25.508254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:48.330 [2024-04-24 05:26:25.508272] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:48.330 [2024-04-24 05:26:25.508511] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:48.330 [2024-04-24 05:26:25.508763] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.330 [2024-04-24 05:26:25.508788] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.330 [2024-04-24 05:26:25.508805] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.330 [2024-04-24 05:26:25.512360] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.330 [2024-04-24 05:26:25.521389] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.330 [2024-04-24 05:26:25.521823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.330 [2024-04-24 05:26:25.522027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.330 [2024-04-24 05:26:25.522085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:48.330 [2024-04-24 05:26:25.522104] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:48.330 [2024-04-24 05:26:25.522342] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:48.330 [2024-04-24 05:26:25.522583] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.330 [2024-04-24 05:26:25.522606] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.330 [2024-04-24 05:26:25.522622] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.330 [2024-04-24 05:26:25.526206] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.330 [2024-04-24 05:26:25.535236] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.330 [2024-04-24 05:26:25.535671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.330 [2024-04-24 05:26:25.535816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.330 [2024-04-24 05:26:25.535845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:48.330 [2024-04-24 05:26:25.535863] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:48.330 [2024-04-24 05:26:25.536100] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:48.330 [2024-04-24 05:26:25.536342] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.330 [2024-04-24 05:26:25.536365] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.330 [2024-04-24 05:26:25.536381] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.330 [2024-04-24 05:26:25.539945] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.330 [2024-04-24 05:26:25.549189] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.330 [2024-04-24 05:26:25.549592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.330 [2024-04-24 05:26:25.549774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.330 [2024-04-24 05:26:25.549803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:48.330 [2024-04-24 05:26:25.549822] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:48.330 [2024-04-24 05:26:25.550059] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:48.330 [2024-04-24 05:26:25.550301] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.330 [2024-04-24 05:26:25.550325] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.330 [2024-04-24 05:26:25.550340] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.331 [2024-04-24 05:26:25.553905] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.331 [2024-04-24 05:26:25.563152] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.331 [2024-04-24 05:26:25.563577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.331 [2024-04-24 05:26:25.563745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.331 [2024-04-24 05:26:25.563774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:48.331 [2024-04-24 05:26:25.563798] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:48.331 [2024-04-24 05:26:25.564035] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:48.331 [2024-04-24 05:26:25.564281] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.331 [2024-04-24 05:26:25.564306] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.331 [2024-04-24 05:26:25.564322] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.331 [2024-04-24 05:26:25.567883] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.331 [2024-04-24 05:26:25.577117] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.331 [2024-04-24 05:26:25.577548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.331 [2024-04-24 05:26:25.577716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.331 [2024-04-24 05:26:25.577747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:48.331 [2024-04-24 05:26:25.577765] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:48.331 [2024-04-24 05:26:25.578004] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:48.331 [2024-04-24 05:26:25.578246] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.331 [2024-04-24 05:26:25.578271] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.331 [2024-04-24 05:26:25.578287] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.331 [2024-04-24 05:26:25.581849] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.331 [2024-04-24 05:26:25.591087] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.331 [2024-04-24 05:26:25.591515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.331 [2024-04-24 05:26:25.591700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.331 [2024-04-24 05:26:25.591731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:48.331 [2024-04-24 05:26:25.591749] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:48.331 [2024-04-24 05:26:25.591987] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:48.331 [2024-04-24 05:26:25.592229] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.331 [2024-04-24 05:26:25.592253] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.331 [2024-04-24 05:26:25.592268] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.331 [2024-04-24 05:26:25.595836] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.591 [2024-04-24 05:26:25.605080] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.591 [2024-04-24 05:26:25.605479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2017069 Killed "${NVMF_APP[@]}" "$@"
00:30:48.591 [2024-04-24 05:26:25.605745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.591 [2024-04-24 05:26:25.605774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:48.592 [2024-04-24 05:26:25.605798] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:48.592 [2024-04-24 05:26:25.606036] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:48.592 05:26:25 -- host/bdevperf.sh@36 -- # tgt_init
00:30:48.592 05:26:25 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:30:48.592 [2024-04-24 05:26:25.606278] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.592 [2024-04-24 05:26:25.606302] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.592 [2024-04-24 05:26:25.606318] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.592 05:26:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:30:48.592 05:26:25 -- common/autotest_common.sh@710 -- # xtrace_disable
00:30:48.592 05:26:25 -- common/autotest_common.sh@10 -- # set +x
00:30:48.592 [2024-04-24 05:26:25.609881] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.592 05:26:25 -- nvmf/common.sh@470 -- # nvmfpid=2018117
00:30:48.592 05:26:25 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:30:48.592 05:26:25 -- nvmf/common.sh@471 -- # waitforlisten 2018117
00:30:48.592 05:26:25 -- common/autotest_common.sh@817 -- # '[' -z 2018117 ']'
00:30:48.592 05:26:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:48.592 05:26:25 -- common/autotest_common.sh@822 -- # local max_retries=100
00:30:48.592 05:26:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:48.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:48.592 05:26:25 -- common/autotest_common.sh@826 -- # xtrace_disable
00:30:48.592 05:26:25 -- common/autotest_common.sh@10 -- # set +x
00:30:48.592 [2024-04-24 05:26:25.618915] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.592 [2024-04-24 05:26:25.620115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.592 [2024-04-24 05:26:25.620341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.592 [2024-04-24 05:26:25.620372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:48.592 [2024-04-24 05:26:25.620392] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:48.592 [2024-04-24 05:26:25.620642] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:48.592 [2024-04-24 05:26:25.620885] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.592 [2024-04-24 05:26:25.620909] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.592 [2024-04-24 05:26:25.620925] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.592 [2024-04-24 05:26:25.624479] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.592 [2024-04-24 05:26:25.632884] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.592 [2024-04-24 05:26:25.633291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.592 [2024-04-24 05:26:25.633469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.592 [2024-04-24 05:26:25.633499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:48.592 [2024-04-24 05:26:25.633517] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:48.592 [2024-04-24 05:26:25.633766] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:48.592 [2024-04-24 05:26:25.634013] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.592 [2024-04-24 05:26:25.634038] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.592 [2024-04-24 05:26:25.634054] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.592 [2024-04-24 05:26:25.637609] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.592 [2024-04-24 05:26:25.646873] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.592 [2024-04-24 05:26:25.647285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.592 [2024-04-24 05:26:25.647487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.592 [2024-04-24 05:26:25.647516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:48.592 [2024-04-24 05:26:25.647535] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:48.592 [2024-04-24 05:26:25.647782] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:48.592 [2024-04-24 05:26:25.648025] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.592 [2024-04-24 05:26:25.648049] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.592 [2024-04-24 05:26:25.648065] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.592 [2024-04-24 05:26:25.651634] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.592 [2024-04-24 05:26:25.658028] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization...
00:30:48.592 [2024-04-24 05:26:25.658096] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:48.592 [2024-04-24 05:26:25.660874] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.592 [2024-04-24 05:26:25.661318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.592 [2024-04-24 05:26:25.661486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.592 [2024-04-24 05:26:25.661514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:48.592 [2024-04-24 05:26:25.661533] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:48.592 [2024-04-24 05:26:25.661780] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:48.592 [2024-04-24 05:26:25.662021] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.592 [2024-04-24 05:26:25.662045] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.592 [2024-04-24 05:26:25.662061] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.592 [2024-04-24 05:26:25.665614] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.592 [2024-04-24 05:26:25.674862] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.592 [2024-04-24 05:26:25.675277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.592 [2024-04-24 05:26:25.675453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.592 [2024-04-24 05:26:25.675481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:48.592 [2024-04-24 05:26:25.675499] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:48.592 [2024-04-24 05:26:25.675753] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:48.592 [2024-04-24 05:26:25.675994] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.592 [2024-04-24 05:26:25.676017] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.592 [2024-04-24 05:26:25.676033] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.592 [2024-04-24 05:26:25.679584] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.592 [2024-04-24 05:26:25.688819] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.592 [2024-04-24 05:26:25.689217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.592 [2024-04-24 05:26:25.689392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.592 [2024-04-24 05:26:25.689421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:48.592 [2024-04-24 05:26:25.689439] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:48.592 [2024-04-24 05:26:25.689686] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:48.592 [2024-04-24 05:26:25.689928] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.592 [2024-04-24 05:26:25.689952] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.592 [2024-04-24 05:26:25.689967] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.592 [2024-04-24 05:26:25.693518] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.592 EAL: No free 2048 kB hugepages reported on node 1
00:30:48.592 [2024-04-24 05:26:25.702765] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.592 [2024-04-24 05:26:25.703169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.592 [2024-04-24 05:26:25.703365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.592 [2024-04-24 05:26:25.703393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:48.592 [2024-04-24 05:26:25.703411] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:48.592 [2024-04-24 05:26:25.703844] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:48.592 [2024-04-24 05:26:25.703899] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:30:48.592 [2024-04-24 05:26:25.704087] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.592 [2024-04-24 05:26:25.704110] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.592 [2024-04-24 05:26:25.704125] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.592 [2024-04-24 05:26:25.707684] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.592 [2024-04-24 05:26:25.716715] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.592 [2024-04-24 05:26:25.717153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.592 [2024-04-24 05:26:25.717331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.592 [2024-04-24 05:26:25.717362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:48.593 [2024-04-24 05:26:25.717380] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:48.593 [2024-04-24 05:26:25.717624] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:48.593 [2024-04-24 05:26:25.717876] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.593 [2024-04-24 05:26:25.717900] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.593 [2024-04-24 05:26:25.717916] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.593 [2024-04-24 05:26:25.721466] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.593 [2024-04-24 05:26:25.730715] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.593 [2024-04-24 05:26:25.731144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.593 [2024-04-24 05:26:25.731310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.593 [2024-04-24 05:26:25.731338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:48.593 [2024-04-24 05:26:25.731356] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:48.593 [2024-04-24 05:26:25.731593] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:48.593 [2024-04-24 05:26:25.731845] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.593 [2024-04-24 05:26:25.731870] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.593 [2024-04-24 05:26:25.731886] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.593 [2024-04-24 05:26:25.734411] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3
00:30:48.593 [2024-04-24 05:26:25.735439] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.593 [2024-04-24 05:26:25.744709] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.593 [2024-04-24 05:26:25.745276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.593 [2024-04-24 05:26:25.745472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.593 [2024-04-24 05:26:25.745501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:48.593 [2024-04-24 05:26:25.745523] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:48.593 [2024-04-24 05:26:25.745780] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:48.593 [2024-04-24 05:26:25.746028] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.593 [2024-04-24 05:26:25.746052] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.593 [2024-04-24 05:26:25.746072] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.593 [2024-04-24 05:26:25.749636] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.593 [2024-04-24 05:26:25.758684] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.593 [2024-04-24 05:26:25.759158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.593 [2024-04-24 05:26:25.759339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.593 [2024-04-24 05:26:25.759369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:48.593 [2024-04-24 05:26:25.759389] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:48.593 [2024-04-24 05:26:25.759651] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:48.593 [2024-04-24 05:26:25.759895] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.593 [2024-04-24 05:26:25.759919] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.593 [2024-04-24 05:26:25.759938] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.593 [2024-04-24 05:26:25.763489] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.593 [2024-04-24 05:26:25.772513] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.593 [2024-04-24 05:26:25.772962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.593 [2024-04-24 05:26:25.773115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.593 [2024-04-24 05:26:25.773144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:48.593 [2024-04-24 05:26:25.773163] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:48.593 [2024-04-24 05:26:25.773400] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:48.593 [2024-04-24 05:26:25.773652] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.593 [2024-04-24 05:26:25.773676] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.593 [2024-04-24 05:26:25.773693] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.593 [2024-04-24 05:26:25.777248] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.593 [2024-04-24 05:26:25.786484] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.593 [2024-04-24 05:26:25.786975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.593 [2024-04-24 05:26:25.787162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.593 [2024-04-24 05:26:25.787193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:48.593 [2024-04-24 05:26:25.787215] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:48.593 [2024-04-24 05:26:25.787461] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:48.593 [2024-04-24 05:26:25.787718] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.593 [2024-04-24 05:26:25.787743] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.593 [2024-04-24 05:26:25.787762] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.593 [2024-04-24 05:26:25.791321] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.593 [2024-04-24 05:26:25.800370] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.593 [2024-04-24 05:26:25.800894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.593 [2024-04-24 05:26:25.801083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.593 [2024-04-24 05:26:25.801112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:48.593 [2024-04-24 05:26:25.801133] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:48.593 [2024-04-24 05:26:25.801379] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:48.593 [2024-04-24 05:26:25.801642] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.593 [2024-04-24 05:26:25.801666] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.593 [2024-04-24 05:26:25.801685] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.593 [2024-04-24 05:26:25.805382] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.593 [2024-04-24 05:26:25.814212] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.593 [2024-04-24 05:26:25.814675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.593 [2024-04-24 05:26:25.814854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.593 [2024-04-24 05:26:25.814885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:48.593 [2024-04-24 05:26:25.814904] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:48.593 [2024-04-24 05:26:25.815145] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:48.593 [2024-04-24 05:26:25.815387] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.593 [2024-04-24 05:26:25.815411] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.593 [2024-04-24 05:26:25.815428] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.593 [2024-04-24 05:26:25.818988] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.593 [2024-04-24 05:26:25.828232] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.593 [2024-04-24 05:26:25.828624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.593 [2024-04-24 05:26:25.828766] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:48.593 [2024-04-24 05:26:25.828803] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:48.593 [2024-04-24 05:26:25.828810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.593 [2024-04-24 05:26:25.828821] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:48.593 [2024-04-24 05:26:25.828835] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:48.593 [2024-04-24 05:26:25.828838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420
00:30:48.593 [2024-04-24 05:26:25.828847] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:48.593 [2024-04-24 05:26:25.828856] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set
00:30:48.593 [2024-04-24 05:26:25.828935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:30:48.593 [2024-04-24 05:26:25.828991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:30:48.593 [2024-04-24 05:26:25.828995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:30:48.593 [2024-04-24 05:26:25.829096] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor
00:30:48.593 [2024-04-24 05:26:25.829337] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.593 [2024-04-24 05:26:25.829361] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.593 [2024-04-24 05:26:25.829378] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.593 [2024-04-24 05:26:25.832938] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.593 [2024-04-24 05:26:25.842207] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.594 [2024-04-24 05:26:25.842775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.594 [2024-04-24 05:26:25.842968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.594 [2024-04-24 05:26:25.842998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:48.594 [2024-04-24 05:26:25.843020] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:48.594 [2024-04-24 05:26:25.843269] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:48.594 [2024-04-24 05:26:25.843517] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.594 [2024-04-24 05:26:25.843542] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.594 [2024-04-24 05:26:25.843562] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.594 [2024-04-24 05:26:25.847127] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.594 [2024-04-24 05:26:25.856175] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.594 [2024-04-24 05:26:25.856706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.594 [2024-04-24 05:26:25.856903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.594 [2024-04-24 05:26:25.856933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:48.594 [2024-04-24 05:26:25.856956] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:48.594 [2024-04-24 05:26:25.857224] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:48.594 [2024-04-24 05:26:25.857470] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.594 [2024-04-24 05:26:25.857495] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.594 [2024-04-24 05:26:25.857514] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.853 [2024-04-24 05:26:25.861122] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.853 [2024-04-24 05:26:25.870195] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.853 [2024-04-24 05:26:25.870742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.853 [2024-04-24 05:26:25.870992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.853 [2024-04-24 05:26:25.871021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:48.853 [2024-04-24 05:26:25.871043] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:48.853 [2024-04-24 05:26:25.871292] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:48.853 [2024-04-24 05:26:25.871539] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.853 [2024-04-24 05:26:25.871563] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.853 [2024-04-24 05:26:25.871583] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.853 [2024-04-24 05:26:25.875171] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.853 [2024-04-24 05:26:25.884231] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.853 [2024-04-24 05:26:25.884760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.853 [2024-04-24 05:26:25.884962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.853 [2024-04-24 05:26:25.884991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:48.853 [2024-04-24 05:26:25.885013] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:48.853 [2024-04-24 05:26:25.885259] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:48.853 [2024-04-24 05:26:25.885505] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.853 [2024-04-24 05:26:25.885530] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.853 [2024-04-24 05:26:25.885548] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.853 [2024-04-24 05:26:25.889110] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.853 [2024-04-24 05:26:25.898160] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.853 [2024-04-24 05:26:25.898779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.853 [2024-04-24 05:26:25.899021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.853 [2024-04-24 05:26:25.899050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:48.853 [2024-04-24 05:26:25.899072] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:48.853 [2024-04-24 05:26:25.899322] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:48.853 [2024-04-24 05:26:25.899568] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.853 [2024-04-24 05:26:25.899592] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.853 [2024-04-24 05:26:25.899611] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.853 [2024-04-24 05:26:25.903182] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.853 [2024-04-24 05:26:25.912227] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.853 [2024-04-24 05:26:25.912765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.853 [2024-04-24 05:26:25.912973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.853 [2024-04-24 05:26:25.913003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:48.853 [2024-04-24 05:26:25.913024] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:48.853 [2024-04-24 05:26:25.913272] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:48.853 [2024-04-24 05:26:25.913516] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.853 [2024-04-24 05:26:25.913540] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.853 [2024-04-24 05:26:25.913557] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.854 [2024-04-24 05:26:25.917136] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.854 [2024-04-24 05:26:25.926168] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.854 [2024-04-24 05:26:25.926580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.854 [2024-04-24 05:26:25.926745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.854 [2024-04-24 05:26:25.926782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:48.854 [2024-04-24 05:26:25.926802] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:48.854 [2024-04-24 05:26:25.927040] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:48.854 [2024-04-24 05:26:25.927281] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.854 [2024-04-24 05:26:25.927304] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.854 [2024-04-24 05:26:25.927320] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.854 [2024-04-24 05:26:25.930798] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.854 [2024-04-24 05:26:25.939761] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.854 [2024-04-24 05:26:25.940148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.854 [2024-04-24 05:26:25.940312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.854 [2024-04-24 05:26:25.940337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:48.854 [2024-04-24 05:26:25.940353] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:48.854 [2024-04-24 05:26:25.940581] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:48.854 [2024-04-24 05:26:25.940822] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.854 [2024-04-24 05:26:25.940845] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.854 [2024-04-24 05:26:25.940859] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.854 05:26:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:48.854 05:26:25 -- common/autotest_common.sh@850 -- # return 0 00:30:48.854 05:26:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:30:48.854 05:26:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:48.854 05:26:25 -- common/autotest_common.sh@10 -- # set +x 00:30:48.854 [2024-04-24 05:26:25.944122] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.854 [2024-04-24 05:26:25.953365] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.854 [2024-04-24 05:26:25.953753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.854 [2024-04-24 05:26:25.953950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.854 [2024-04-24 05:26:25.953976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:48.854 [2024-04-24 05:26:25.953992] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:48.854 [2024-04-24 05:26:25.954233] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:48.854 [2024-04-24 05:26:25.954445] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.854 [2024-04-24 05:26:25.954465] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.854 [2024-04-24 05:26:25.954479] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.854 [2024-04-24 05:26:25.957721] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.854 05:26:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:48.854 05:26:25 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:48.854 05:26:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:48.854 05:26:25 -- common/autotest_common.sh@10 -- # set +x 00:30:48.854 [2024-04-24 05:26:25.966518] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:48.854 [2024-04-24 05:26:25.966814] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.854 [2024-04-24 05:26:25.967238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.854 [2024-04-24 05:26:25.967378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.854 [2024-04-24 05:26:25.967403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:48.854 [2024-04-24 05:26:25.967420] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:48.854 [2024-04-24 05:26:25.967676] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:48.854 [2024-04-24 05:26:25.967894] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.854 [2024-04-24 05:26:25.967927] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.854 [2024-04-24 05:26:25.967955] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.854 [2024-04-24 05:26:25.971117] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.854 05:26:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:48.854 05:26:25 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:48.854 05:26:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:48.854 05:26:25 -- common/autotest_common.sh@10 -- # set +x 00:30:48.854 [2024-04-24 05:26:25.980384] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.854 [2024-04-24 05:26:25.980736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.854 [2024-04-24 05:26:25.980934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.854 [2024-04-24 05:26:25.980959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:48.854 [2024-04-24 05:26:25.980976] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:48.854 [2024-04-24 05:26:25.981225] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:48.854 [2024-04-24 05:26:25.981423] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.854 [2024-04-24 05:26:25.981442] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.854 [2024-04-24 05:26:25.981455] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.854 [2024-04-24 05:26:25.984569] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.854 [2024-04-24 05:26:25.993993] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.854 [2024-04-24 05:26:25.994439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.854 [2024-04-24 05:26:25.994603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.854 [2024-04-24 05:26:25.994637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:48.854 [2024-04-24 05:26:25.994657] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:48.854 [2024-04-24 05:26:25.994878] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:48.854 [2024-04-24 05:26:25.995119] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.854 [2024-04-24 05:26:25.995148] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.854 [2024-04-24 05:26:25.995165] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.854 [2024-04-24 05:26:25.998296] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.854 Malloc0 00:30:48.854 05:26:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:48.854 05:26:26 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:48.854 05:26:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:48.854 05:26:26 -- common/autotest_common.sh@10 -- # set +x 00:30:48.854 [2024-04-24 05:26:26.007511] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.854 [2024-04-24 05:26:26.008051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.854 [2024-04-24 05:26:26.008236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.854 [2024-04-24 05:26:26.008264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:48.854 [2024-04-24 05:26:26.008284] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:48.854 [2024-04-24 05:26:26.008521] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:48.854 [2024-04-24 05:26:26.008770] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.854 [2024-04-24 05:26:26.008794] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.854 [2024-04-24 05:26:26.008811] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.854 [2024-04-24 05:26:26.012032] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.854 05:26:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:48.854 05:26:26 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:48.854 05:26:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:48.854 05:26:26 -- common/autotest_common.sh@10 -- # set +x 00:30:48.854 [2024-04-24 05:26:26.021018] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.854 [2024-04-24 05:26:26.021455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.854 [2024-04-24 05:26:26.021618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.854 [2024-04-24 05:26:26.021654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a07610 with addr=10.0.0.2, port=4420 00:30:48.854 [2024-04-24 05:26:26.021672] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07610 is same with the state(5) to be set 00:30:48.854 [2024-04-24 05:26:26.021886] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07610 (9): Bad file descriptor 00:30:48.854 [2024-04-24 05:26:26.022125] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.855 [2024-04-24 05:26:26.022146] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.855 [2024-04-24 05:26:26.022160] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:30:48.855 05:26:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:48.855 05:26:26 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:48.855 05:26:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:48.855 05:26:26 -- common/autotest_common.sh@10 -- # set +x 00:30:48.855 [2024-04-24 05:26:26.025480] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.855 [2024-04-24 05:26:26.026356] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:48.855 05:26:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:48.855 05:26:26 -- host/bdevperf.sh@38 -- # wait 2017353 00:30:48.855 [2024-04-24 05:26:26.034529] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.113 [2024-04-24 05:26:26.154332] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:30:59.080 00:30:59.080 Latency(us) 00:30:59.080 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:59.080 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:59.080 Verification LBA range: start 0x0 length 0x4000 00:30:59.080 Nvme1n1 : 15.01 6625.27 25.88 8619.09 0.00 8372.28 861.68 18058.81 00:30:59.080 =================================================================================================================== 00:30:59.080 Total : 6625.27 25.88 8619.09 0.00 8372.28 861.68 18058.81 00:30:59.080 05:26:35 -- host/bdevperf.sh@39 -- # sync 00:30:59.080 05:26:35 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:59.080 05:26:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:59.080 05:26:35 -- common/autotest_common.sh@10 -- # set +x 00:30:59.080 05:26:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:59.080 05:26:35 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:30:59.080 05:26:35 -- host/bdevperf.sh@44 -- # nvmftestfini 00:30:59.080 05:26:35 -- nvmf/common.sh@477 -- # nvmfcleanup 00:30:59.080 05:26:35 -- nvmf/common.sh@117 -- # sync 00:30:59.080 05:26:35 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:59.080 05:26:35 -- nvmf/common.sh@120 -- # set +e 00:30:59.080 05:26:35 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:59.080 05:26:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:59.080 rmmod nvme_tcp 00:30:59.080 rmmod nvme_fabrics 00:30:59.080 rmmod nvme_keyring 00:30:59.080 05:26:35 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:59.080 05:26:35 -- nvmf/common.sh@124 -- # set -e 00:30:59.080 05:26:35 -- nvmf/common.sh@125 -- # return 0 00:30:59.080 05:26:35 -- nvmf/common.sh@478 -- # '[' -n 2018117 ']' 00:30:59.080 05:26:35 -- nvmf/common.sh@479 -- # killprocess 2018117 00:30:59.080 05:26:35 -- common/autotest_common.sh@936 -- # '[' -z 2018117 ']' 00:30:59.080 05:26:35 -- common/autotest_common.sh@940 -- # kill -0 2018117
00:30:59.080 05:26:35 -- common/autotest_common.sh@941 -- # uname 00:30:59.080 05:26:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:59.080 05:26:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2018117 00:30:59.080 05:26:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:30:59.080 05:26:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:30:59.080 05:26:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2018117' 00:30:59.080 killing process with pid 2018117 00:30:59.080 05:26:35 -- common/autotest_common.sh@955 -- # kill 2018117 00:30:59.080 05:26:35 -- common/autotest_common.sh@960 -- # wait 2018117 00:30:59.080 05:26:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:30:59.080 05:26:35 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:30:59.080 05:26:35 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:30:59.080 05:26:35 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:59.080 05:26:35 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:59.080 05:26:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:59.080 05:26:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:59.080 05:26:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:00.983 05:26:37 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:00.983 00:31:00.983 real 0m22.438s 00:31:00.983 user 1m0.638s 00:31:00.983 sys 0m4.007s 00:31:00.983 05:26:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:00.983 05:26:37 -- common/autotest_common.sh@10 -- # set +x 00:31:00.983 ************************************ 00:31:00.983 END TEST nvmf_bdevperf 00:31:00.983 ************************************ 00:31:00.983 05:26:37 -- nvmf/nvmf.sh@120 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:00.983
05:26:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:31:00.983 05:26:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:00.983 05:26:37 -- common/autotest_common.sh@10 -- # set +x 00:31:00.983 ************************************ 00:31:00.983 START TEST nvmf_target_disconnect 00:31:00.983 ************************************ 00:31:00.983 05:26:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:00.983 * Looking for test storage... 00:31:00.983 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:00.983 05:26:37 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:00.983 05:26:37 -- nvmf/common.sh@7 -- # uname -s 00:31:00.983 05:26:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:00.983 05:26:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:00.983 05:26:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:00.983 05:26:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:00.983 05:26:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:00.983 05:26:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:00.983 05:26:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:00.983 05:26:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:00.983 05:26:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:00.983 05:26:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:00.983 05:26:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:00.983 05:26:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:00.983 05:26:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:00.983 05:26:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:00.983
05:26:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:00.983 05:26:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:00.983 05:26:37 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:00.983 05:26:37 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:00.983 05:26:37 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:00.983 05:26:37 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:00.983 05:26:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.983 05:26:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.983 05:26:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.983
05:26:37 -- paths/export.sh@5 -- # export PATH 00:31:00.983 05:26:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.983 05:26:37 -- nvmf/common.sh@47 -- # : 0 00:31:00.983 05:26:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:00.983 05:26:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:00.983 05:26:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:00.983 05:26:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:00.983 05:26:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:00.983 05:26:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:00.983 05:26:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:00.983 05:26:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:00.983 05:26:37 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:00.983 05:26:37 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:31:00.983
05:26:37 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:31:00.983 05:26:37 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:31:00.983 05:26:37 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:31:00.983 05:26:37 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:00.983 05:26:37 -- nvmf/common.sh@437 -- # prepare_net_devs 00:31:00.983 05:26:37 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:31:00.983 05:26:37 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:31:00.983 05:26:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:00.983 05:26:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:00.983 05:26:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:00.983 05:26:37 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:31:00.983 05:26:37 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:31:00.983 05:26:37 -- nvmf/common.sh@285 -- # xtrace_disable 00:31:00.983 05:26:37 -- common/autotest_common.sh@10 -- # set +x 00:31:02.891 05:26:39 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:02.891 05:26:39 -- nvmf/common.sh@291 -- # pci_devs=() 00:31:02.891 05:26:39 -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:02.891 05:26:39 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:02.891 05:26:39 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:02.891 05:26:39 -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:02.891 05:26:39 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:02.891 05:26:39 -- nvmf/common.sh@295 -- # net_devs=() 00:31:02.891 05:26:39 -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:02.891 05:26:39 -- nvmf/common.sh@296 -- # e810=() 00:31:02.891 05:26:39 -- nvmf/common.sh@296 -- # local -ga e810 00:31:02.891 05:26:39 -- nvmf/common.sh@297 -- # x722=() 00:31:02.891 05:26:39 -- nvmf/common.sh@297 -- # local -ga x722 00:31:02.891 05:26:39 -- nvmf/common.sh@298 -- # mlx=() 00:31:02.891 05:26:39 -- nvmf/common.sh@298 -- # local -ga mlx
00:31:02.891 05:26:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:02.891 05:26:39 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:02.891 05:26:39 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:02.891 05:26:39 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:02.891 05:26:39 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:02.891 05:26:39 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:02.891 05:26:39 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:02.891 05:26:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:02.891 05:26:39 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:02.891 05:26:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:02.891 05:26:39 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:02.891 05:26:39 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:02.891 05:26:39 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:02.891 05:26:39 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:02.891 05:26:39 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:02.891 05:26:39 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:02.891 05:26:39 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:02.891 05:26:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:02.891 05:26:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:02.891 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:02.891 05:26:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:02.891 05:26:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:02.891 05:26:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:02.891 05:26:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
05:26:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:02.891 05:26:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:02.891 05:26:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:02.891 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:02.891 05:26:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:02.891 05:26:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:02.891 05:26:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:02.891 05:26:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:02.891 05:26:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:02.891 05:26:39 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:02.891 05:26:39 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:02.891 05:26:39 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:02.891 05:26:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:02.891 05:26:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:02.891 05:26:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:31:02.891 05:26:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:02.891 05:26:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:02.891 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:02.891 05:26:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:31:02.891 05:26:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:02.891 05:26:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:02.891 05:26:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:31:02.891 05:26:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:02.891 05:26:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:02.891 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:02.891 05:26:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:31:02.891 05:26:39 -- 
nvmf/common.sh@393 -- # (( 2 == 0 )) 00:31:02.891 05:26:39 -- nvmf/common.sh@403 -- # is_hw=yes 00:31:02.891 05:26:39 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:31:02.891 05:26:39 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:31:02.891 05:26:39 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:31:02.891 05:26:39 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:02.891 05:26:39 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:02.891 05:26:39 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:02.891 05:26:39 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:02.891 05:26:39 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:02.891 05:26:39 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:02.891 05:26:39 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:02.891 05:26:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:02.891 05:26:39 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:02.891 05:26:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:02.891 05:26:39 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:02.891 05:26:39 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:02.891 05:26:39 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:02.891 05:26:39 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:02.891 05:26:39 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:02.891 05:26:40 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:02.891 05:26:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:02.891 05:26:40 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:02.891 05:26:40 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:02.891 05:26:40 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 
00:31:02.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:02.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:31:02.891 00:31:02.891 --- 10.0.0.2 ping statistics --- 00:31:02.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:02.891 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:31:02.891 05:26:40 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:02.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:02.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:31:02.891 00:31:02.891 --- 10.0.0.1 ping statistics --- 00:31:02.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:02.891 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:31:02.891 05:26:40 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:02.891 05:26:40 -- nvmf/common.sh@411 -- # return 0 00:31:02.891 05:26:40 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:31:02.891 05:26:40 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:02.891 05:26:40 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:31:02.891 05:26:40 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:31:02.891 05:26:40 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:02.891 05:26:40 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:31:02.891 05:26:40 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:31:02.891 05:26:40 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:31:02.891 05:26:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:02.891 05:26:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:02.891 05:26:40 -- common/autotest_common.sh@10 -- # set +x 00:31:03.150 ************************************ 00:31:03.150 START TEST nvmf_target_disconnect_tc1 00:31:03.150 ************************************ 00:31:03.150 05:26:40 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc1 
00:31:03.150 05:26:40 -- host/target_disconnect.sh@32 -- # set +e 00:31:03.150 05:26:40 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:03.150 EAL: No free 2048 kB hugepages reported on node 1 00:31:03.150 [2024-04-24 05:26:40.285341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.150 [2024-04-24 05:26:40.285555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.150 [2024-04-24 05:26:40.285585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152fd80 with addr=10.0.0.2, port=4420 00:31:03.150 [2024-04-24 05:26:40.285622] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:03.150 [2024-04-24 05:26:40.285659] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:03.150 [2024-04-24 05:26:40.285681] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:31:03.150 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:31:03.150 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:31:03.150 Initializing NVMe Controllers 00:31:03.150 05:26:40 -- host/target_disconnect.sh@33 -- # trap - ERR 00:31:03.150 05:26:40 -- host/target_disconnect.sh@33 -- # print_backtrace 00:31:03.150 05:26:40 -- common/autotest_common.sh@1139 -- # [[ hxBET =~ e ]] 00:31:03.150 05:26:40 -- common/autotest_common.sh@1139 -- # return 0 00:31:03.150 05:26:40 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:31:03.150 05:26:40 -- host/target_disconnect.sh@41 -- # set -e 00:31:03.150 00:31:03.150 real 0m0.094s 00:31:03.150 user 0m0.039s 00:31:03.150 sys 0m0.054s 00:31:03.150 05:26:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:03.150 05:26:40 -- common/autotest_common.sh@10 -- # set +x 00:31:03.150 
************************************ 00:31:03.150 END TEST nvmf_target_disconnect_tc1 00:31:03.150 ************************************ 00:31:03.150 05:26:40 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:31:03.150 05:26:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:03.150 05:26:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:03.150 05:26:40 -- common/autotest_common.sh@10 -- # set +x 00:31:03.150 ************************************ 00:31:03.150 START TEST nvmf_target_disconnect_tc2 00:31:03.150 ************************************ 00:31:03.150 05:26:40 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc2 00:31:03.150 05:26:40 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:31:03.150 05:26:40 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:31:03.150 05:26:40 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:31:03.150 05:26:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:31:03.150 05:26:40 -- common/autotest_common.sh@10 -- # set +x 00:31:03.150 05:26:40 -- nvmf/common.sh@470 -- # nvmfpid=2021311 00:31:03.150 05:26:40 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:31:03.150 05:26:40 -- nvmf/common.sh@471 -- # waitforlisten 2021311 00:31:03.150 05:26:40 -- common/autotest_common.sh@817 -- # '[' -z 2021311 ']' 00:31:03.150 05:26:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:03.150 05:26:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:03.150 05:26:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:03.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:03.150 05:26:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:03.150 05:26:40 -- common/autotest_common.sh@10 -- # set +x 00:31:03.409 [2024-04-24 05:26:40.462601] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:31:03.409 [2024-04-24 05:26:40.462712] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:03.409 EAL: No free 2048 kB hugepages reported on node 1 00:31:03.409 [2024-04-24 05:26:40.502346] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:03.409 [2024-04-24 05:26:40.529347] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:03.409 [2024-04-24 05:26:40.616940] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:03.409 [2024-04-24 05:26:40.616999] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:03.409 [2024-04-24 05:26:40.617012] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:03.409 [2024-04-24 05:26:40.617023] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:03.409 [2024-04-24 05:26:40.617034] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:03.409 [2024-04-24 05:26:40.617134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:31:03.409 [2024-04-24 05:26:40.617198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:31:03.409 [2024-04-24 05:26:40.617240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:31:03.409 [2024-04-24 05:26:40.617243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:31:03.667 05:26:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:03.667 05:26:40 -- common/autotest_common.sh@850 -- # return 0 00:31:03.667 05:26:40 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:31:03.667 05:26:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:03.667 05:26:40 -- common/autotest_common.sh@10 -- # set +x 00:31:03.667 05:26:40 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:03.667 05:26:40 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:03.667 05:26:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:03.667 05:26:40 -- common/autotest_common.sh@10 -- # set +x 00:31:03.667 Malloc0 00:31:03.667 05:26:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:03.667 05:26:40 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:03.667 05:26:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:03.667 05:26:40 -- common/autotest_common.sh@10 -- # set +x 00:31:03.667 [2024-04-24 05:26:40.781265] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:03.667 05:26:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:03.667 05:26:40 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:03.667 05:26:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:03.667 05:26:40 -- common/autotest_common.sh@10 -- # set +x 00:31:03.667 05:26:40 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:03.667 05:26:40 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:03.667 05:26:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:03.667 05:26:40 -- common/autotest_common.sh@10 -- # set +x 00:31:03.667 05:26:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:03.667 05:26:40 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:03.667 05:26:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:03.667 05:26:40 -- common/autotest_common.sh@10 -- # set +x 00:31:03.667 [2024-04-24 05:26:40.809510] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:03.667 05:26:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:03.667 05:26:40 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:03.667 05:26:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:03.667 05:26:40 -- common/autotest_common.sh@10 -- # set +x 00:31:03.667 05:26:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:03.667 05:26:40 -- host/target_disconnect.sh@50 -- # reconnectpid=2021332 00:31:03.667 05:26:40 -- host/target_disconnect.sh@52 -- # sleep 2 00:31:03.667 05:26:40 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:03.667 EAL: No free 2048 kB hugepages reported on node 1 00:31:05.564 05:26:42 -- host/target_disconnect.sh@53 -- # kill -9 2021311 00:31:05.564 05:26:42 -- host/target_disconnect.sh@55 -- # sleep 2 00:31:05.564 Write completed with error (sct=0, sc=8) 00:31:05.564 starting I/O failed 00:31:05.564 Write completed with error (sct=0, sc=8) 00:31:05.564 starting I/O failed 
00:31:05.564 Read completed with error (sct=0, sc=8) 00:31:05.564 starting I/O failed 00:31:05.564 Write completed with error (sct=0, sc=8) 00:31:05.564 starting I/O failed 00:31:05.564 Read completed with error (sct=0, sc=8) 00:31:05.564 starting I/O failed 00:31:05.564 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Write completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Write completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Write completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Write completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Write completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Write completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Write completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Write completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 
Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Write completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Write completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Write completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 [2024-04-24 05:26:42.833579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Write completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Write completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Write completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed 
with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Write completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Write completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Write completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Write completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Write completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Write completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Write completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Write completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Write completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 [2024-04-24 05:26:42.833934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error 
(sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Write completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Write completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Write completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Write completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Write completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Write completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Write completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Write completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Write completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Write completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Write completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Write completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Write completed with error (sct=0, 
sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Read completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 Write completed with error (sct=0, sc=8) 00:31:05.565 starting I/O failed 00:31:05.565 [2024-04-24 05:26:42.834275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:05.565 [2024-04-24 05:26:42.834482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.565 [2024-04-24 05:26:42.834686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.565 [2024-04-24 05:26:42.834715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.565 qpair failed and we were unable to recover it. 00:31:05.565 [2024-04-24 05:26:42.834859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.565 [2024-04-24 05:26:42.834997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.565 [2024-04-24 05:26:42.835023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.565 qpair failed and we were unable to recover it. 
00:31:05.565 [2024-04-24 05:26:42.835178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.565 [2024-04-24 05:26:42.835338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.565 [2024-04-24 05:26:42.835382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.836 qpair failed and we were unable to recover it.
00:31:05.836 [2024-04-24 05:26:42.835587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.836 [2024-04-24 05:26:42.835764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.836 [2024-04-24 05:26:42.835791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.836 qpair failed and we were unable to recover it.
00:31:05.836 [2024-04-24 05:26:42.835936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.836 [2024-04-24 05:26:42.836118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.836 [2024-04-24 05:26:42.836144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.836 qpair failed and we were unable to recover it.
00:31:05.836 [2024-04-24 05:26:42.836292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.836 [2024-04-24 05:26:42.836417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.836 [2024-04-24 05:26:42.836443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.836 qpair failed and we were unable to recover it.
00:31:05.836 [2024-04-24 05:26:42.836572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.836 [2024-04-24 05:26:42.836721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.836 [2024-04-24 05:26:42.836750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.836 qpair failed and we were unable to recover it.
00:31:05.836 [2024-04-24 05:26:42.836883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.836 [2024-04-24 05:26:42.837035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.837060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.837 qpair failed and we were unable to recover it.
00:31:05.837 [2024-04-24 05:26:42.837193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.837345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.837386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.837 qpair failed and we were unable to recover it.
00:31:05.837 [2024-04-24 05:26:42.837561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.837709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.837736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.837 qpair failed and we were unable to recover it.
00:31:05.837 [2024-04-24 05:26:42.837887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.838055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.838080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.837 qpair failed and we were unable to recover it.
00:31:05.837 [2024-04-24 05:26:42.838255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.838485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.838516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.837 qpair failed and we were unable to recover it.
00:31:05.837 [2024-04-24 05:26:42.838691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.838822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.838849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.837 qpair failed and we were unable to recover it.
00:31:05.837 [2024-04-24 05:26:42.839000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.839135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.839162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.837 qpair failed and we were unable to recover it.
00:31:05.837 [2024-04-24 05:26:42.839290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.839443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.839469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.837 qpair failed and we were unable to recover it.
00:31:05.837 [2024-04-24 05:26:42.839600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.839755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.839781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.837 qpair failed and we were unable to recover it.
00:31:05.837 [2024-04-24 05:26:42.839913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.840101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.840127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.837 qpair failed and we were unable to recover it.
00:31:05.837 [2024-04-24 05:26:42.840280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.840487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.840513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.837 qpair failed and we were unable to recover it.
00:31:05.837 [2024-04-24 05:26:42.840671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.840854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.840880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.837 qpair failed and we were unable to recover it.
00:31:05.837 [2024-04-24 05:26:42.841100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.841330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.841355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.837 qpair failed and we were unable to recover it.
00:31:05.837 [2024-04-24 05:26:42.841609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.841792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.841817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.837 qpair failed and we were unable to recover it.
00:31:05.837 [2024-04-24 05:26:42.841973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.842240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.842270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.837 qpair failed and we were unable to recover it.
00:31:05.837 [2024-04-24 05:26:42.842409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.842595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.842635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.837 qpair failed and we were unable to recover it.
00:31:05.837 [2024-04-24 05:26:42.842776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.842935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.842962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.837 qpair failed and we were unable to recover it.
00:31:05.837 [2024-04-24 05:26:42.843179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.843308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.843334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.837 qpair failed and we were unable to recover it.
00:31:05.837 [2024-04-24 05:26:42.843509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.843676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.843703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.837 qpair failed and we were unable to recover it.
00:31:05.837 [2024-04-24 05:26:42.843829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.843966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.843992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.837 qpair failed and we were unable to recover it.
00:31:05.837 [2024-04-24 05:26:42.844154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.844323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.844352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.837 qpair failed and we were unable to recover it.
00:31:05.837 [2024-04-24 05:26:42.844507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.844657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.844685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.837 qpair failed and we were unable to recover it.
00:31:05.837 [2024-04-24 05:26:42.844838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.845092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.845121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.837 qpair failed and we were unable to recover it.
00:31:05.837 [2024-04-24 05:26:42.845325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.845552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.845579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.837 qpair failed and we were unable to recover it.
00:31:05.837 [2024-04-24 05:26:42.845751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.845904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.845941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.837 qpair failed and we were unable to recover it.
00:31:05.837 [2024-04-24 05:26:42.846145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.846310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.837 [2024-04-24 05:26:42.846339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.837 qpair failed and we were unable to recover it.
00:31:05.837 Read completed with error (sct=0, sc=8)
00:31:05.837 starting I/O failed
00:31:05.837 Read completed with error (sct=0, sc=8)
00:31:05.837 starting I/O failed
00:31:05.837 Read completed with error (sct=0, sc=8)
00:31:05.837 starting I/O failed
00:31:05.837 Read completed with error (sct=0, sc=8)
00:31:05.837 starting I/O failed
00:31:05.837 Read completed with error (sct=0, sc=8)
00:31:05.837 starting I/O failed
00:31:05.837 Read completed with error (sct=0, sc=8)
00:31:05.837 starting I/O failed
00:31:05.837 Read completed with error (sct=0, sc=8)
00:31:05.837 starting I/O failed
00:31:05.837 Read completed with error (sct=0, sc=8)
00:31:05.837 starting I/O failed
00:31:05.837 Read completed with error (sct=0, sc=8)
00:31:05.837 starting I/O failed
00:31:05.837 Read completed with error (sct=0, sc=8)
00:31:05.837 starting I/O failed
00:31:05.837 Read completed with error (sct=0, sc=8)
00:31:05.837 starting I/O failed
00:31:05.837 Read completed with error (sct=0, sc=8)
00:31:05.837 starting I/O failed
00:31:05.838 Read completed with error (sct=0, sc=8)
00:31:05.838 starting I/O failed
00:31:05.838 Read completed with error (sct=0, sc=8)
00:31:05.838 starting I/O failed
00:31:05.838 Read completed with error (sct=0, sc=8)
00:31:05.838 starting I/O failed
00:31:05.838 Read completed with error (sct=0, sc=8)
00:31:05.838 starting I/O failed
00:31:05.838 Read completed with error (sct=0, sc=8)
00:31:05.838 starting I/O failed
00:31:05.838 Read completed with error (sct=0, sc=8)
00:31:05.838 starting I/O failed
00:31:05.838 Read completed with error (sct=0, sc=8)
00:31:05.838 starting I/O failed
00:31:05.838 Write completed with error (sct=0, sc=8)
00:31:05.838 starting I/O failed
00:31:05.838 Read completed with error (sct=0, sc=8)
00:31:05.838 starting I/O failed
00:31:05.838 Read completed with error (sct=0, sc=8)
00:31:05.838 starting I/O failed
00:31:05.838 Read completed with error (sct=0, sc=8)
00:31:05.838 starting I/O failed
00:31:05.838 Write completed with error (sct=0, sc=8)
00:31:05.838 starting I/O failed
00:31:05.838 Write completed with error (sct=0, sc=8)
00:31:05.838 starting I/O failed
00:31:05.838 Write completed with error (sct=0, sc=8)
00:31:05.838 starting I/O failed
00:31:05.838 Read completed with error (sct=0, sc=8)
00:31:05.838 starting I/O failed
00:31:05.838 Read completed with error (sct=0, sc=8)
00:31:05.838 starting I/O failed
00:31:05.838 Read completed with error (sct=0, sc=8)
00:31:05.838 starting I/O failed
00:31:05.838 Read completed with error (sct=0, sc=8)
00:31:05.838 starting I/O failed
00:31:05.838 Read completed with error (sct=0, sc=8)
00:31:05.838 starting I/O failed
00:31:05.838 Write completed with error (sct=0, sc=8)
00:31:05.838 starting I/O failed
00:31:05.838 [2024-04-24 05:26:42.846710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:05.838 [2024-04-24 05:26:42.846841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.838 [2024-04-24 05:26:42.847033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.838 [2024-04-24 05:26:42.847076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:05.838 qpair failed and we were unable to recover it.
00:31:05.838 [2024-04-24 05:26:42.847264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.838 [2024-04-24 05:26:42.847389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.838 [2024-04-24 05:26:42.847415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:05.838 qpair failed and we were unable to recover it.
00:31:05.838 [2024-04-24 05:26:42.847616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.838 [2024-04-24 05:26:42.847764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.838 [2024-04-24 05:26:42.847790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:05.838 qpair failed and we were unable to recover it.
00:31:05.838 [2024-04-24 05:26:42.847997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.838 [2024-04-24 05:26:42.848136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.838 [2024-04-24 05:26:42.848163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:05.838 qpair failed and we were unable to recover it.
00:31:05.838 [2024-04-24 05:26:42.848331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.838 [2024-04-24 05:26:42.848557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.838 [2024-04-24 05:26:42.848609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:05.838 qpair failed and we were unable to recover it.
00:31:05.838 [2024-04-24 05:26:42.848849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.838 [2024-04-24 05:26:42.849011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.838 [2024-04-24 05:26:42.849040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.838 qpair failed and we were unable to recover it.
00:31:05.838 [2024-04-24 05:26:42.849182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.838 [2024-04-24 05:26:42.849360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.838 [2024-04-24 05:26:42.849385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.838 qpair failed and we were unable to recover it.
00:31:05.838 [2024-04-24 05:26:42.849634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.838 [2024-04-24 05:26:42.849789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.838 [2024-04-24 05:26:42.849816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.838 qpair failed and we were unable to recover it.
00:31:05.838 [2024-04-24 05:26:42.849998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.838 [2024-04-24 05:26:42.850144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.838 [2024-04-24 05:26:42.850171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.838 qpair failed and we were unable to recover it.
00:31:05.838 [2024-04-24 05:26:42.850437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.838 [2024-04-24 05:26:42.850642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.838 [2024-04-24 05:26:42.850669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.838 qpair failed and we were unable to recover it.
00:31:05.838 [2024-04-24 05:26:42.850822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.838 [2024-04-24 05:26:42.850950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.838 [2024-04-24 05:26:42.850976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.838 qpair failed and we were unable to recover it.
00:31:05.838 [2024-04-24 05:26:42.851155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.838 [2024-04-24 05:26:42.851276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.838 [2024-04-24 05:26:42.851302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.838 qpair failed and we were unable to recover it.
00:31:05.838 [2024-04-24 05:26:42.851475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.838 [2024-04-24 05:26:42.851737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.838 [2024-04-24 05:26:42.851764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.838 qpair failed and we were unable to recover it.
00:31:05.838 [2024-04-24 05:26:42.851947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.838 [2024-04-24 05:26:42.852151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.838 [2024-04-24 05:26:42.852181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.838 qpair failed and we were unable to recover it.
00:31:05.838 [2024-04-24 05:26:42.852356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.838 [2024-04-24 05:26:42.852500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.838 [2024-04-24 05:26:42.852526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.838 qpair failed and we were unable to recover it.
00:31:05.838 [2024-04-24 05:26:42.852755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.838 [2024-04-24 05:26:42.852960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.838 [2024-04-24 05:26:42.853003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.838 qpair failed and we were unable to recover it.
00:31:05.838 [2024-04-24 05:26:42.853248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.838 [2024-04-24 05:26:42.853410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.838 [2024-04-24 05:26:42.853437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.838 qpair failed and we were unable to recover it.
00:31:05.838 [2024-04-24 05:26:42.853667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.838 [2024-04-24 05:26:42.853821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.838 [2024-04-24 05:26:42.853848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.838 qpair failed and we were unable to recover it.
00:31:05.838 [2024-04-24 05:26:42.853971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.838 [2024-04-24 05:26:42.854217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.838 [2024-04-24 05:26:42.854242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.838 qpair failed and we were unable to recover it.
00:31:05.838 [2024-04-24 05:26:42.854401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.838 [2024-04-24 05:26:42.854564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.838 [2024-04-24 05:26:42.854589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.838 qpair failed and we were unable to recover it.
00:31:05.838 [2024-04-24 05:26:42.854765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.838 [2024-04-24 05:26:42.854920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.838 [2024-04-24 05:26:42.854946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.838 qpair failed and we were unable to recover it.
00:31:05.838 [2024-04-24 05:26:42.855093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.838 [2024-04-24 05:26:42.855270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.838 [2024-04-24 05:26:42.855296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.838 qpair failed and we were unable to recover it.
00:31:05.838 [2024-04-24 05:26:42.855452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.838 [2024-04-24 05:26:42.855638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.838 [2024-04-24 05:26:42.855665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.838 qpair failed and we were unable to recover it.
00:31:05.839 [2024-04-24 05:26:42.855796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.855922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.855949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.839 qpair failed and we were unable to recover it.
00:31:05.839 [2024-04-24 05:26:42.856148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.856401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.856429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.839 qpair failed and we were unable to recover it.
00:31:05.839 [2024-04-24 05:26:42.856604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.856744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.856771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.839 qpair failed and we were unable to recover it.
00:31:05.839 [2024-04-24 05:26:42.856920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.857141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.857169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.839 qpair failed and we were unable to recover it.
00:31:05.839 [2024-04-24 05:26:42.857314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.857460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.857485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.839 qpair failed and we were unable to recover it.
00:31:05.839 [2024-04-24 05:26:42.857642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.857821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.857847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.839 qpair failed and we were unable to recover it.
00:31:05.839 [2024-04-24 05:26:42.858080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.858218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.858263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.839 qpair failed and we were unable to recover it.
00:31:05.839 [2024-04-24 05:26:42.858426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.858624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.858661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.839 qpair failed and we were unable to recover it.
00:31:05.839 [2024-04-24 05:26:42.858872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.859034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.859060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.839 qpair failed and we were unable to recover it.
00:31:05.839 [2024-04-24 05:26:42.859271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.859415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.859441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.839 qpair failed and we were unable to recover it.
00:31:05.839 [2024-04-24 05:26:42.859640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.859806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.859836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.839 qpair failed and we were unable to recover it.
00:31:05.839 [2024-04-24 05:26:42.860023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.860203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.860228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.839 qpair failed and we were unable to recover it.
00:31:05.839 [2024-04-24 05:26:42.860379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.860589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.860618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.839 qpair failed and we were unable to recover it.
00:31:05.839 [2024-04-24 05:26:42.860763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.860961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.860990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.839 qpair failed and we were unable to recover it.
00:31:05.839 [2024-04-24 05:26:42.861190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.861398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.861427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.839 qpair failed and we were unable to recover it.
00:31:05.839 [2024-04-24 05:26:42.861615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.861787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.861815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.839 qpair failed and we were unable to recover it.
00:31:05.839 [2024-04-24 05:26:42.861986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.862160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.862186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.839 qpair failed and we were unable to recover it.
00:31:05.839 [2024-04-24 05:26:42.862363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.862535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.862561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.839 qpair failed and we were unable to recover it.
00:31:05.839 [2024-04-24 05:26:42.862737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.862877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.862908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.839 qpair failed and we were unable to recover it.
00:31:05.839 [2024-04-24 05:26:42.863090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.863275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.863301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.839 qpair failed and we were unable to recover it.
00:31:05.839 [2024-04-24 05:26:42.863481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.863634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.863662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.839 qpair failed and we were unable to recover it.
00:31:05.839 [2024-04-24 05:26:42.863861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.864015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.864041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.839 qpair failed and we were unable to recover it.
00:31:05.839 [2024-04-24 05:26:42.864233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.864444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.864469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.839 qpair failed and we were unable to recover it.
00:31:05.839 [2024-04-24 05:26:42.864616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.864773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.864798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.839 qpair failed and we were unable to recover it.
00:31:05.839 [2024-04-24 05:26:42.864969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.865104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.865134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.839 qpair failed and we were unable to recover it.
00:31:05.839 [2024-04-24 05:26:42.865302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.865496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.865525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.839 qpair failed and we were unable to recover it.
00:31:05.839 [2024-04-24 05:26:42.865666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.865813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.839 [2024-04-24 05:26:42.865838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.839 qpair failed and we were unable to recover it.
00:31:05.839 [2024-04-24 05:26:42.866000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.839 [2024-04-24 05:26:42.866151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.839 [2024-04-24 05:26:42.866194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.840 qpair failed and we were unable to recover it. 00:31:05.840 [2024-04-24 05:26:42.866394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.866594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.866640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.840 qpair failed and we were unable to recover it. 00:31:05.840 [2024-04-24 05:26:42.866848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.867051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.867108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.840 qpair failed and we were unable to recover it. 00:31:05.840 [2024-04-24 05:26:42.867300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.867461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.867491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.840 qpair failed and we were unable to recover it. 
00:31:05.840 [2024-04-24 05:26:42.867657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.867819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.867849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.840 qpair failed and we were unable to recover it. 00:31:05.840 [2024-04-24 05:26:42.868018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.868211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.868244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.840 qpair failed and we were unable to recover it. 00:31:05.840 [2024-04-24 05:26:42.868400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.868554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.868579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.840 qpair failed and we were unable to recover it. 00:31:05.840 [2024-04-24 05:26:42.868784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.869001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.869027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.840 qpair failed and we were unable to recover it. 
00:31:05.840 [2024-04-24 05:26:42.869239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.869394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.869420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.840 qpair failed and we were unable to recover it. 00:31:05.840 [2024-04-24 05:26:42.869574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.869761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.869791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.840 qpair failed and we were unable to recover it. 00:31:05.840 [2024-04-24 05:26:42.869995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.870157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.870185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.840 qpair failed and we were unable to recover it. 00:31:05.840 [2024-04-24 05:26:42.870377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.870619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.870659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.840 qpair failed and we were unable to recover it. 
00:31:05.840 [2024-04-24 05:26:42.870873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.871020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.871046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.840 qpair failed and we were unable to recover it. 00:31:05.840 [2024-04-24 05:26:42.871192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.871354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.871383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.840 qpair failed and we were unable to recover it. 00:31:05.840 [2024-04-24 05:26:42.871557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.871739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.871766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.840 qpair failed and we were unable to recover it. 00:31:05.840 [2024-04-24 05:26:42.872043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.872248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.872292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.840 qpair failed and we were unable to recover it. 
00:31:05.840 [2024-04-24 05:26:42.872495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.872688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.872718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.840 qpair failed and we were unable to recover it. 00:31:05.840 [2024-04-24 05:26:42.872881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.873161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.873190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.840 qpair failed and we were unable to recover it. 00:31:05.840 [2024-04-24 05:26:42.873389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.873555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.873583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.840 qpair failed and we were unable to recover it. 00:31:05.840 [2024-04-24 05:26:42.873756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.873900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.873931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.840 qpair failed and we were unable to recover it. 
00:31:05.840 [2024-04-24 05:26:42.874140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.874317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.874343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.840 qpair failed and we were unable to recover it. 00:31:05.840 [2024-04-24 05:26:42.874512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.874653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.874680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.840 qpair failed and we were unable to recover it. 00:31:05.840 [2024-04-24 05:26:42.874833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.875033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.875062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.840 qpair failed and we were unable to recover it. 00:31:05.840 [2024-04-24 05:26:42.875236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.875429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.875459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.840 qpair failed and we were unable to recover it. 
00:31:05.840 [2024-04-24 05:26:42.875623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.875820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.875850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.840 qpair failed and we were unable to recover it. 00:31:05.840 [2024-04-24 05:26:42.876018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.876218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.876251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.840 qpair failed and we were unable to recover it. 00:31:05.840 [2024-04-24 05:26:42.876430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.876639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.876666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.840 qpair failed and we were unable to recover it. 00:31:05.840 [2024-04-24 05:26:42.876843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.877127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.877156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.840 qpair failed and we were unable to recover it. 
00:31:05.840 [2024-04-24 05:26:42.877396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.877550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.840 [2024-04-24 05:26:42.877575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.840 qpair failed and we were unable to recover it. 00:31:05.840 [2024-04-24 05:26:42.877736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.841 [2024-04-24 05:26:42.877866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.841 [2024-04-24 05:26:42.877892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.841 qpair failed and we were unable to recover it. 00:31:05.841 [2024-04-24 05:26:42.878071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.841 [2024-04-24 05:26:42.878227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.841 [2024-04-24 05:26:42.878256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.841 qpair failed and we were unable to recover it. 00:31:05.841 [2024-04-24 05:26:42.878409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.841 [2024-04-24 05:26:42.878588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.841 [2024-04-24 05:26:42.878641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.841 qpair failed and we were unable to recover it. 
00:31:05.841 [2024-04-24 05:26:42.878816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.841 [2024-04-24 05:26:42.878944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.841 [2024-04-24 05:26:42.878970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.841 qpair failed and we were unable to recover it. 00:31:05.841 [2024-04-24 05:26:42.879144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.841 [2024-04-24 05:26:42.879320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.841 [2024-04-24 05:26:42.879350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.841 qpair failed and we were unable to recover it. 00:31:05.841 [2024-04-24 05:26:42.879544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.841 [2024-04-24 05:26:42.879753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.841 [2024-04-24 05:26:42.879779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.841 qpair failed and we were unable to recover it. 00:31:05.841 [2024-04-24 05:26:42.879903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.841 [2024-04-24 05:26:42.880092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.841 [2024-04-24 05:26:42.880122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.841 qpair failed and we were unable to recover it. 
00:31:05.841 [2024-04-24 05:26:42.880247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.841 [2024-04-24 05:26:42.880453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.841 [2024-04-24 05:26:42.880479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.841 qpair failed and we were unable to recover it. 00:31:05.841 [2024-04-24 05:26:42.880655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.841 [2024-04-24 05:26:42.880803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.841 [2024-04-24 05:26:42.880829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.841 qpair failed and we were unable to recover it. 00:31:05.841 [2024-04-24 05:26:42.881002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.841 [2024-04-24 05:26:42.881211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.841 [2024-04-24 05:26:42.881239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.841 qpair failed and we were unable to recover it. 00:31:05.841 [2024-04-24 05:26:42.881398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.841 [2024-04-24 05:26:42.881566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.841 [2024-04-24 05:26:42.881594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.841 qpair failed and we were unable to recover it. 
00:31:05.841 [2024-04-24 05:26:42.881772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.841 [2024-04-24 05:26:42.881934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.841 [2024-04-24 05:26:42.881962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.841 qpair failed and we were unable to recover it. 00:31:05.841 [2024-04-24 05:26:42.882120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.841 [2024-04-24 05:26:42.882261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.841 [2024-04-24 05:26:42.882303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.841 qpair failed and we were unable to recover it. 00:31:05.841 [2024-04-24 05:26:42.882491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.841 [2024-04-24 05:26:42.882647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.841 [2024-04-24 05:26:42.882676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.841 qpair failed and we were unable to recover it. 00:31:05.841 [2024-04-24 05:26:42.882837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.841 [2024-04-24 05:26:42.883019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.841 [2024-04-24 05:26:42.883044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.841 qpair failed and we were unable to recover it. 
00:31:05.841 [2024-04-24 05:26:42.883196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.841 [2024-04-24 05:26:42.883350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.841 [2024-04-24 05:26:42.883376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.841 qpair failed and we were unable to recover it. 00:31:05.841 [2024-04-24 05:26:42.883572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.841 [2024-04-24 05:26:42.883725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.841 [2024-04-24 05:26:42.883752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.841 qpair failed and we were unable to recover it. 00:31:05.841 [2024-04-24 05:26:42.883913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.841 [2024-04-24 05:26:42.884099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.841 [2024-04-24 05:26:42.884125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.841 qpair failed and we were unable to recover it. 00:31:05.841 [2024-04-24 05:26:42.884279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.841 [2024-04-24 05:26:42.884403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.841 [2024-04-24 05:26:42.884429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.841 qpair failed and we were unable to recover it. 
00:31:05.841 [2024-04-24 05:26:42.884577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.841 [2024-04-24 05:26:42.884774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.841 [2024-04-24 05:26:42.884803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.841 qpair failed and we were unable to recover it. 00:31:05.841 [2024-04-24 05:26:42.884976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.841 [2024-04-24 05:26:42.885172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.841 [2024-04-24 05:26:42.885201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.841 qpair failed and we were unable to recover it. 00:31:05.841 [2024-04-24 05:26:42.885370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.885520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.885546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.842 qpair failed and we were unable to recover it. 00:31:05.842 [2024-04-24 05:26:42.885699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.885912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.885938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.842 qpair failed and we were unable to recover it. 
00:31:05.842 [2024-04-24 05:26:42.886116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.886258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.886287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.842 qpair failed and we were unable to recover it. 00:31:05.842 [2024-04-24 05:26:42.886437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.886592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.886620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.842 qpair failed and we were unable to recover it. 00:31:05.842 [2024-04-24 05:26:42.886834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.886994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.887023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.842 qpair failed and we were unable to recover it. 00:31:05.842 [2024-04-24 05:26:42.887188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.887351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.887379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.842 qpair failed and we were unable to recover it. 
00:31:05.842 [2024-04-24 05:26:42.887556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.887677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.887704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.842 qpair failed and we were unable to recover it. 00:31:05.842 [2024-04-24 05:26:42.887828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.887949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.887977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.842 qpair failed and we were unable to recover it. 00:31:05.842 [2024-04-24 05:26:42.888126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.888300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.888326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.842 qpair failed and we were unable to recover it. 00:31:05.842 [2024-04-24 05:26:42.888446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.888596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.888622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.842 qpair failed and we were unable to recover it. 
00:31:05.842 [2024-04-24 05:26:42.888780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.888932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.888974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.842 qpair failed and we were unable to recover it. 00:31:05.842 [2024-04-24 05:26:42.889117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.889307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.889336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.842 qpair failed and we were unable to recover it. 00:31:05.842 [2024-04-24 05:26:42.889481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.889599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.889625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.842 qpair failed and we were unable to recover it. 00:31:05.842 [2024-04-24 05:26:42.889761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.889910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.889940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.842 qpair failed and we were unable to recover it. 
00:31:05.842 [2024-04-24 05:26:42.890127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.890282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.890310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.842 qpair failed and we were unable to recover it. 00:31:05.842 [2024-04-24 05:26:42.890480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.890656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.890683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.842 qpair failed and we were unable to recover it. 00:31:05.842 [2024-04-24 05:26:42.890806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.890936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.890963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.842 qpair failed and we were unable to recover it. 00:31:05.842 [2024-04-24 05:26:42.891149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.891292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.891317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.842 qpair failed and we were unable to recover it. 
00:31:05.842 [2024-04-24 05:26:42.891471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.891593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.891619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.842 qpair failed and we were unable to recover it. 00:31:05.842 [2024-04-24 05:26:42.891784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.891937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.891979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.842 qpair failed and we were unable to recover it. 00:31:05.842 [2024-04-24 05:26:42.892147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.892304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.892332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.842 qpair failed and we were unable to recover it. 00:31:05.842 [2024-04-24 05:26:42.892506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.892658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.892686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.842 qpair failed and we were unable to recover it. 
00:31:05.842 [2024-04-24 05:26:42.892838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.892972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.893001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.842 qpair failed and we were unable to recover it. 00:31:05.842 [2024-04-24 05:26:42.893190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.893362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.893388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.842 qpair failed and we were unable to recover it. 00:31:05.842 [2024-04-24 05:26:42.893513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.893660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.893686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.842 qpair failed and we were unable to recover it. 00:31:05.842 [2024-04-24 05:26:42.893839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.894018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.894044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.842 qpair failed and we were unable to recover it. 
00:31:05.842 [2024-04-24 05:26:42.894245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.894377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.894408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.842 qpair failed and we were unable to recover it. 00:31:05.842 [2024-04-24 05:26:42.894609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.894770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.894796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.842 qpair failed and we were unable to recover it. 00:31:05.842 [2024-04-24 05:26:42.894915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.895078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.842 [2024-04-24 05:26:42.895108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.843 qpair failed and we were unable to recover it. 00:31:05.843 [2024-04-24 05:26:42.895277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.895484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.895510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.843 qpair failed and we were unable to recover it. 
00:31:05.843 [2024-04-24 05:26:42.895663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.895841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.895867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.843 qpair failed and we were unable to recover it. 00:31:05.843 [2024-04-24 05:26:42.896017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.896208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.896234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.843 qpair failed and we were unable to recover it. 00:31:05.843 [2024-04-24 05:26:42.896408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.896577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.896606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.843 qpair failed and we were unable to recover it. 00:31:05.843 [2024-04-24 05:26:42.896788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.896938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.896966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.843 qpair failed and we were unable to recover it. 
00:31:05.843 [2024-04-24 05:26:42.897144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.897292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.897319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.843 qpair failed and we were unable to recover it. 00:31:05.843 [2024-04-24 05:26:42.897488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.897691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.897721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.843 qpair failed and we were unable to recover it. 00:31:05.843 [2024-04-24 05:26:42.897881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.898028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.898054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.843 qpair failed and we were unable to recover it. 00:31:05.843 [2024-04-24 05:26:42.898226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.898435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.898461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.843 qpair failed and we were unable to recover it. 
00:31:05.843 [2024-04-24 05:26:42.898612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.898838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.898867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.843 qpair failed and we were unable to recover it. 00:31:05.843 [2024-04-24 05:26:42.899004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.899185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.899227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.843 qpair failed and we were unable to recover it. 00:31:05.843 [2024-04-24 05:26:42.899396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.899558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.899587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.843 qpair failed and we were unable to recover it. 00:31:05.843 [2024-04-24 05:26:42.899810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.899986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.900012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.843 qpair failed and we were unable to recover it. 
00:31:05.843 [2024-04-24 05:26:42.900165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.900315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.900341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.843 qpair failed and we were unable to recover it. 00:31:05.843 [2024-04-24 05:26:42.900462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.900643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.900670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.843 qpair failed and we were unable to recover it. 00:31:05.843 [2024-04-24 05:26:42.900862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.901028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.901058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.843 qpair failed and we were unable to recover it. 00:31:05.843 [2024-04-24 05:26:42.901226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.901403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.901430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.843 qpair failed and we were unable to recover it. 
00:31:05.843 [2024-04-24 05:26:42.901604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.901815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.901841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.843 qpair failed and we were unable to recover it. 00:31:05.843 [2024-04-24 05:26:42.901997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.902193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.902221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.843 qpair failed and we were unable to recover it. 00:31:05.843 [2024-04-24 05:26:42.902393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.902548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.902591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.843 qpair failed and we were unable to recover it. 00:31:05.843 [2024-04-24 05:26:42.902787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.902961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.902988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.843 qpair failed and we were unable to recover it. 
00:31:05.843 [2024-04-24 05:26:42.903160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.903356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.903385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.843 qpair failed and we were unable to recover it. 00:31:05.843 [2024-04-24 05:26:42.903580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.903729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.903774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.843 qpair failed and we were unable to recover it. 00:31:05.843 [2024-04-24 05:26:42.903915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.904086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.904112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.843 qpair failed and we were unable to recover it. 00:31:05.843 [2024-04-24 05:26:42.904285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.904459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.904484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.843 qpair failed and we were unable to recover it. 
00:31:05.843 [2024-04-24 05:26:42.904600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.904780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.904806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.843 qpair failed and we were unable to recover it. 00:31:05.843 [2024-04-24 05:26:42.904982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.905160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.905186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.843 qpair failed and we were unable to recover it. 00:31:05.843 [2024-04-24 05:26:42.905383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.905619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.843 [2024-04-24 05:26:42.905698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.843 qpair failed and we were unable to recover it. 00:31:05.843 [2024-04-24 05:26:42.905879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.906076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.906106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.844 qpair failed and we were unable to recover it. 
00:31:05.844 [2024-04-24 05:26:42.906305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.906458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.906483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.844 qpair failed and we were unable to recover it. 00:31:05.844 [2024-04-24 05:26:42.906607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.906791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.906821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.844 qpair failed and we were unable to recover it. 00:31:05.844 [2024-04-24 05:26:42.907036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.907192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.907218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.844 qpair failed and we were unable to recover it. 00:31:05.844 [2024-04-24 05:26:42.907411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.907574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.907603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.844 qpair failed and we were unable to recover it. 
00:31:05.844 [2024-04-24 05:26:42.907777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.907943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.907971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.844 qpair failed and we were unable to recover it. 00:31:05.844 [2024-04-24 05:26:42.908138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.908264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.908289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.844 qpair failed and we were unable to recover it. 00:31:05.844 [2024-04-24 05:26:42.908499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.908660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.908686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.844 qpair failed and we were unable to recover it. 00:31:05.844 [2024-04-24 05:26:42.908841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.908968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.908995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.844 qpair failed and we were unable to recover it. 
00:31:05.844 [2024-04-24 05:26:42.909189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.909311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.909336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.844 qpair failed and we were unable to recover it. 00:31:05.844 [2024-04-24 05:26:42.909513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.909685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.909715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.844 qpair failed and we were unable to recover it. 00:31:05.844 [2024-04-24 05:26:42.909910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.910075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.910105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.844 qpair failed and we were unable to recover it. 00:31:05.844 [2024-04-24 05:26:42.910281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.910432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.910474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.844 qpair failed and we were unable to recover it. 
00:31:05.844 [2024-04-24 05:26:42.910642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.910807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.910832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.844 qpair failed and we were unable to recover it. 00:31:05.844 [2024-04-24 05:26:42.911011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.911185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.911213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.844 qpair failed and we were unable to recover it. 00:31:05.844 [2024-04-24 05:26:42.911390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.911539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.911566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.844 qpair failed and we were unable to recover it. 00:31:05.844 [2024-04-24 05:26:42.911743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.911903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.911934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.844 qpair failed and we were unable to recover it. 
00:31:05.844 [2024-04-24 05:26:42.912088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.912237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.912263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.844 qpair failed and we were unable to recover it. 00:31:05.844 [2024-04-24 05:26:42.912422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.912545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.912571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.844 qpair failed and we were unable to recover it. 00:31:05.844 [2024-04-24 05:26:42.912722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.912853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.912880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.844 qpair failed and we were unable to recover it. 00:31:05.844 [2024-04-24 05:26:42.913025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.913208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.913233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.844 qpair failed and we were unable to recover it. 
00:31:05.844 [2024-04-24 05:26:42.913415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.913586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.913614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.844 qpair failed and we were unable to recover it. 00:31:05.844 [2024-04-24 05:26:42.913800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.913956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.913982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.844 qpair failed and we were unable to recover it. 00:31:05.844 [2024-04-24 05:26:42.914149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.914418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.914467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.844 qpair failed and we were unable to recover it. 00:31:05.844 [2024-04-24 05:26:42.914662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.914861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.914889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.844 qpair failed and we were unable to recover it. 
00:31:05.844 [2024-04-24 05:26:42.915055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.915237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.915280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.844 qpair failed and we were unable to recover it. 00:31:05.844 [2024-04-24 05:26:42.915449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.915608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.915642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.844 qpair failed and we were unable to recover it. 00:31:05.844 [2024-04-24 05:26:42.915789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.915945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.915971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.844 qpair failed and we were unable to recover it. 00:31:05.844 [2024-04-24 05:26:42.916100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.916279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.844 [2024-04-24 05:26:42.916305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.845 qpair failed and we were unable to recover it. 
00:31:05.845 [2024-04-24 05:26:42.916498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.916618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.916652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.845 qpair failed and we were unable to recover it. 00:31:05.845 [2024-04-24 05:26:42.916785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.916908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.916934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.845 qpair failed and we were unable to recover it. 00:31:05.845 [2024-04-24 05:26:42.917107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.917282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.917308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.845 qpair failed and we were unable to recover it. 00:31:05.845 [2024-04-24 05:26:42.917425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.917576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.917602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.845 qpair failed and we were unable to recover it. 
00:31:05.845 [2024-04-24 05:26:42.917791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.917982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.918010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.845 qpair failed and we were unable to recover it. 00:31:05.845 [2024-04-24 05:26:42.918214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.918389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.918414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.845 qpair failed and we were unable to recover it. 00:31:05.845 [2024-04-24 05:26:42.918616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.918769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.918795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.845 qpair failed and we were unable to recover it. 00:31:05.845 [2024-04-24 05:26:42.918910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.919060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.919086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.845 qpair failed and we were unable to recover it. 
00:31:05.845 [2024-04-24 05:26:42.919252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.919429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.919457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.845 qpair failed and we were unable to recover it. 00:31:05.845 [2024-04-24 05:26:42.919655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.919833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.919859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.845 qpair failed and we were unable to recover it. 00:31:05.845 [2024-04-24 05:26:42.920015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.920199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.920225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.845 qpair failed and we were unable to recover it. 00:31:05.845 [2024-04-24 05:26:42.920377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.920567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.920596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.845 qpair failed and we were unable to recover it. 
00:31:05.845 [2024-04-24 05:26:42.920793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.920931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.920961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.845 qpair failed and we were unable to recover it. 00:31:05.845 [2024-04-24 05:26:42.921135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.921287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.921313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.845 qpair failed and we were unable to recover it. 00:31:05.845 [2024-04-24 05:26:42.921430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.921593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.921619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.845 qpair failed and we were unable to recover it. 00:31:05.845 [2024-04-24 05:26:42.921778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.921937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.921962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.845 qpair failed and we were unable to recover it. 
00:31:05.845 [2024-04-24 05:26:42.922089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.922251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.922292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.845 qpair failed and we were unable to recover it. 00:31:05.845 [2024-04-24 05:26:42.922455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.922650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.922679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.845 qpair failed and we were unable to recover it. 00:31:05.845 [2024-04-24 05:26:42.922848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.923032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.923059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.845 qpair failed and we were unable to recover it. 00:31:05.845 [2024-04-24 05:26:42.923236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.923439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.923488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.845 qpair failed and we were unable to recover it. 
00:31:05.845 [2024-04-24 05:26:42.923655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.923798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.923831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.845 qpair failed and we were unable to recover it. 00:31:05.845 [2024-04-24 05:26:42.923995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.924152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.924180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.845 qpair failed and we were unable to recover it. 00:31:05.845 [2024-04-24 05:26:42.924336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.924521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.924546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.845 qpair failed and we were unable to recover it. 00:31:05.845 [2024-04-24 05:26:42.924740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.924891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.924917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.845 qpair failed and we were unable to recover it. 
00:31:05.845 [2024-04-24 05:26:42.925029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.925177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.925202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.845 qpair failed and we were unable to recover it. 00:31:05.845 [2024-04-24 05:26:42.925346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.925522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.925564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.845 qpair failed and we were unable to recover it. 00:31:05.845 [2024-04-24 05:26:42.925722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.925859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.925888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.845 qpair failed and we were unable to recover it. 00:31:05.845 [2024-04-24 05:26:42.926080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.926271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.845 [2024-04-24 05:26:42.926297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.845 qpair failed and we were unable to recover it. 
00:31:05.845 [2024-04-24 05:26:42.926444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.926597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.926625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.846 qpair failed and we were unable to recover it. 00:31:05.846 [2024-04-24 05:26:42.926835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.927004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.927034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.846 qpair failed and we were unable to recover it. 00:31:05.846 [2024-04-24 05:26:42.927216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.927391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.927422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.846 qpair failed and we were unable to recover it. 00:31:05.846 [2024-04-24 05:26:42.927537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.927711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.927737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.846 qpair failed and we were unable to recover it. 
00:31:05.846 [2024-04-24 05:26:42.927890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.928143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.928202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.846 qpair failed and we were unable to recover it. 00:31:05.846 [2024-04-24 05:26:42.928377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.928562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.928587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.846 qpair failed and we were unable to recover it. 00:31:05.846 [2024-04-24 05:26:42.928717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.928867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.928893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.846 qpair failed and we were unable to recover it. 00:31:05.846 [2024-04-24 05:26:42.929048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.929240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.929269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.846 qpair failed and we were unable to recover it. 
00:31:05.846 [2024-04-24 05:26:42.929451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.929644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.929671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.846 qpair failed and we were unable to recover it. 00:31:05.846 [2024-04-24 05:26:42.929825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.929971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.929998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.846 qpair failed and we were unable to recover it. 00:31:05.846 [2024-04-24 05:26:42.930126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.930302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.930332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.846 qpair failed and we were unable to recover it. 00:31:05.846 [2024-04-24 05:26:42.930513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.930666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.930692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.846 qpair failed and we were unable to recover it. 
00:31:05.846 [2024-04-24 05:26:42.930843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.930995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.931026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.846 qpair failed and we were unable to recover it. 00:31:05.846 [2024-04-24 05:26:42.931193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.931361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.931391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.846 qpair failed and we were unable to recover it. 00:31:05.846 [2024-04-24 05:26:42.931563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.931714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.931741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.846 qpair failed and we were unable to recover it. 00:31:05.846 [2024-04-24 05:26:42.931924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.932077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.932102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.846 qpair failed and we were unable to recover it. 
00:31:05.846 [2024-04-24 05:26:42.932279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.932408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.932434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.846 qpair failed and we were unable to recover it. 00:31:05.846 [2024-04-24 05:26:42.932566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.932715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.932741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.846 qpair failed and we were unable to recover it. 00:31:05.846 [2024-04-24 05:26:42.932889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.933045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.933089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.846 qpair failed and we were unable to recover it. 00:31:05.846 [2024-04-24 05:26:42.933224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.933435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.933460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.846 qpair failed and we were unable to recover it. 
00:31:05.846 [2024-04-24 05:26:42.933614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.933822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.933847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.846 qpair failed and we were unable to recover it. 00:31:05.846 [2024-04-24 05:26:42.934022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.934220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.934272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.846 qpair failed and we were unable to recover it. 00:31:05.846 [2024-04-24 05:26:42.934446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.934620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.934674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.846 qpair failed and we were unable to recover it. 00:31:05.846 [2024-04-24 05:26:42.934867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.935067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.935093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.846 qpair failed and we were unable to recover it. 
00:31:05.846 [2024-04-24 05:26:42.935247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.935400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.846 [2024-04-24 05:26:42.935426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.847 qpair failed and we were unable to recover it. 00:31:05.847 [2024-04-24 05:26:42.935580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.847 [2024-04-24 05:26:42.935732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.847 [2024-04-24 05:26:42.935759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.847 qpair failed and we were unable to recover it. 00:31:05.847 [2024-04-24 05:26:42.935900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.847 [2024-04-24 05:26:42.936063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.847 [2024-04-24 05:26:42.936104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.847 qpair failed and we were unable to recover it. 00:31:05.847 [2024-04-24 05:26:42.936280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.847 [2024-04-24 05:26:42.936456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.847 [2024-04-24 05:26:42.936482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.847 qpair failed and we were unable to recover it. 
00:31:05.847 [2024-04-24 05:26:42.936637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.847 [2024-04-24 05:26:42.936819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.847 [2024-04-24 05:26:42.936847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.847 qpair failed and we were unable to recover it. 00:31:05.847 [2024-04-24 05:26:42.937011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.847 [2024-04-24 05:26:42.937216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.847 [2024-04-24 05:26:42.937271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.847 qpair failed and we were unable to recover it. 00:31:05.847 [2024-04-24 05:26:42.937447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.847 [2024-04-24 05:26:42.937622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.847 [2024-04-24 05:26:42.937671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.847 qpair failed and we were unable to recover it. 00:31:05.847 [2024-04-24 05:26:42.937874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.847 [2024-04-24 05:26:42.938025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.847 [2024-04-24 05:26:42.938051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.847 qpair failed and we were unable to recover it. 
00:31:05.847 [2024-04-24 05:26:42.938172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.847 [2024-04-24 05:26:42.938354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.847 [2024-04-24 05:26:42.938382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.847 qpair failed and we were unable to recover it. 00:31:05.847 [2024-04-24 05:26:42.938560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.847 [2024-04-24 05:26:42.938758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.847 [2024-04-24 05:26:42.938787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.847 qpair failed and we were unable to recover it. 00:31:05.847 [2024-04-24 05:26:42.938982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.847 [2024-04-24 05:26:42.939162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.847 [2024-04-24 05:26:42.939188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.847 qpair failed and we were unable to recover it. 00:31:05.847 [2024-04-24 05:26:42.939367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.847 [2024-04-24 05:26:42.939596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.847 [2024-04-24 05:26:42.939622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.847 qpair failed and we were unable to recover it. 
00:31:05.847 [2024-04-24 05:26:42.939782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.847 [2024-04-24 05:26:42.939961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.847 [2024-04-24 05:26:42.939987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.847 qpair failed and we were unable to recover it. 00:31:05.847 [2024-04-24 05:26:42.940108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.847 [2024-04-24 05:26:42.940261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.847 [2024-04-24 05:26:42.940288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.847 qpair failed and we were unable to recover it. 00:31:05.847 [2024-04-24 05:26:42.940471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.847 [2024-04-24 05:26:42.940600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.847 [2024-04-24 05:26:42.940625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.847 qpair failed and we were unable to recover it. 00:31:05.847 [2024-04-24 05:26:42.940763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.847 [2024-04-24 05:26:42.940941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.847 [2024-04-24 05:26:42.940966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.847 qpair failed and we were unable to recover it. 
00:31:05.847 [2024-04-24 05:26:42.941139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.847 [2024-04-24 05:26:42.941386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.847 [2024-04-24 05:26:42.941412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.847 qpair failed and we were unable to recover it. 00:31:05.847 [2024-04-24 05:26:42.941576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.847 [2024-04-24 05:26:42.941743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.847 [2024-04-24 05:26:42.941772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.847 qpair failed and we were unable to recover it. 00:31:05.847 [2024-04-24 05:26:42.941944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.847 [2024-04-24 05:26:42.942074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.847 [2024-04-24 05:26:42.942101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.847 qpair failed and we were unable to recover it. 00:31:05.847 [2024-04-24 05:26:42.942257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.847 [2024-04-24 05:26:42.942427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.847 [2024-04-24 05:26:42.942453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.847 qpair failed and we were unable to recover it. 
00:31:05.847 [2024-04-24 05:26:42.942650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.847 [2024-04-24 05:26:42.942803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.847 [2024-04-24 05:26:42.942828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.847 qpair failed and we were unable to recover it.
00:31:05.847 [2024-04-24 05:26:42.943005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.847 [2024-04-24 05:26:42.943216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.847 [2024-04-24 05:26:42.943268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.847 qpair failed and we were unable to recover it.
00:31:05.847 [2024-04-24 05:26:42.943433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.847 [2024-04-24 05:26:42.943612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.847 [2024-04-24 05:26:42.943643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.847 qpair failed and we were unable to recover it.
00:31:05.847 [2024-04-24 05:26:42.943763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.847 [2024-04-24 05:26:42.943949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.847 [2024-04-24 05:26:42.943975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.847 qpair failed and we were unable to recover it.
00:31:05.847 [2024-04-24 05:26:42.944158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.847 [2024-04-24 05:26:42.944305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.847 [2024-04-24 05:26:42.944331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.847 qpair failed and we were unable to recover it.
00:31:05.847 [2024-04-24 05:26:42.944456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.847 [2024-04-24 05:26:42.944603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.847 [2024-04-24 05:26:42.944634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.847 qpair failed and we were unable to recover it.
00:31:05.847 [2024-04-24 05:26:42.944754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.847 [2024-04-24 05:26:42.944886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.847 [2024-04-24 05:26:42.944912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.847 qpair failed and we were unable to recover it.
00:31:05.847 [2024-04-24 05:26:42.945055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.847 [2024-04-24 05:26:42.945204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.847 [2024-04-24 05:26:42.945248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.847 qpair failed and we were unable to recover it.
00:31:05.847 [2024-04-24 05:26:42.945419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.847 [2024-04-24 05:26:42.945610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.847 [2024-04-24 05:26:42.945645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.847 qpair failed and we were unable to recover it.
00:31:05.847 [2024-04-24 05:26:42.945843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.946039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.946067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.848 qpair failed and we were unable to recover it.
00:31:05.848 [2024-04-24 05:26:42.946237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.946360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.946387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.848 qpair failed and we were unable to recover it.
00:31:05.848 [2024-04-24 05:26:42.946559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.946748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.946778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.848 qpair failed and we were unable to recover it.
00:31:05.848 [2024-04-24 05:26:42.946915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.947070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.947098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.848 qpair failed and we were unable to recover it.
00:31:05.848 [2024-04-24 05:26:42.947268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.947419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.947445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.848 qpair failed and we were unable to recover it.
00:31:05.848 [2024-04-24 05:26:42.947622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.947804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.947833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.848 qpair failed and we were unable to recover it.
00:31:05.848 [2024-04-24 05:26:42.948004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.948178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.948203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.848 qpair failed and we were unable to recover it.
00:31:05.848 [2024-04-24 05:26:42.948380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.948506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.948532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.848 qpair failed and we were unable to recover it.
00:31:05.848 [2024-04-24 05:26:42.948782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.949048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.949099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.848 qpair failed and we were unable to recover it.
00:31:05.848 [2024-04-24 05:26:42.949263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.949429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.949458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.848 qpair failed and we were unable to recover it.
00:31:05.848 [2024-04-24 05:26:42.949670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.949903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.949929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.848 qpair failed and we were unable to recover it.
00:31:05.848 [2024-04-24 05:26:42.950097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.950261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.950289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.848 qpair failed and we were unable to recover it.
00:31:05.848 [2024-04-24 05:26:42.950468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.950643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.950670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.848 qpair failed and we were unable to recover it.
00:31:05.848 [2024-04-24 05:26:42.950850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.951029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.951054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.848 qpair failed and we were unable to recover it.
00:31:05.848 [2024-04-24 05:26:42.951206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.951408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.951480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.848 qpair failed and we were unable to recover it.
00:31:05.848 [2024-04-24 05:26:42.951670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.951887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.951939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.848 qpair failed and we were unable to recover it.
00:31:05.848 [2024-04-24 05:26:42.952137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.952311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.952351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.848 qpair failed and we were unable to recover it.
00:31:05.848 [2024-04-24 05:26:42.952560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.952804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.952831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.848 qpair failed and we were unable to recover it.
00:31:05.848 [2024-04-24 05:26:42.953000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.953164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.953194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.848 qpair failed and we were unable to recover it.
00:31:05.848 [2024-04-24 05:26:42.953347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.953495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.953522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.848 qpair failed and we were unable to recover it.
00:31:05.848 [2024-04-24 05:26:42.953705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.953900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.953958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.848 qpair failed and we were unable to recover it.
00:31:05.848 [2024-04-24 05:26:42.954197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.954366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.954393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.848 qpair failed and we were unable to recover it.
00:31:05.848 [2024-04-24 05:26:42.954591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.954829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.954855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.848 qpair failed and we were unable to recover it.
00:31:05.848 [2024-04-24 05:26:42.955068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.955345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.955394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.848 qpair failed and we were unable to recover it.
00:31:05.848 [2024-04-24 05:26:42.955576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.955737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.955763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.848 qpair failed and we were unable to recover it.
00:31:05.848 [2024-04-24 05:26:42.955964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.956228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.956283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.848 qpair failed and we were unable to recover it.
00:31:05.848 [2024-04-24 05:26:42.956432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.956636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.956676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.848 qpair failed and we were unable to recover it.
00:31:05.848 [2024-04-24 05:26:42.956835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.956990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.957031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.848 qpair failed and we were unable to recover it.
00:31:05.848 [2024-04-24 05:26:42.957227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.957355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.848 [2024-04-24 05:26:42.957381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.849 qpair failed and we were unable to recover it.
00:31:05.849 [2024-04-24 05:26:42.957719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.957898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.957925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.849 qpair failed and we were unable to recover it.
00:31:05.849 [2024-04-24 05:26:42.958121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.958307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.958347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.849 qpair failed and we were unable to recover it.
00:31:05.849 [2024-04-24 05:26:42.958543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.958763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.958794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.849 qpair failed and we were unable to recover it.
00:31:05.849 [2024-04-24 05:26:42.958963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.959152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.959177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.849 qpair failed and we were unable to recover it.
00:31:05.849 [2024-04-24 05:26:42.959359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.959523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.959562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.849 qpair failed and we were unable to recover it.
00:31:05.849 [2024-04-24 05:26:42.959744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.959896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.959937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.849 qpair failed and we were unable to recover it.
00:31:05.849 [2024-04-24 05:26:42.960106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.960272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.960300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.849 qpair failed and we were unable to recover it.
00:31:05.849 [2024-04-24 05:26:42.960473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.960647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.960674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.849 qpair failed and we were unable to recover it.
00:31:05.849 [2024-04-24 05:26:42.960802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.960979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.961005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.849 qpair failed and we were unable to recover it.
00:31:05.849 [2024-04-24 05:26:42.961177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.961422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.961447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.849 qpair failed and we were unable to recover it.
00:31:05.849 [2024-04-24 05:26:42.961688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.962014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.962066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.849 qpair failed and we were unable to recover it.
00:31:05.849 [2024-04-24 05:26:42.962265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.962472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.962513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.849 qpair failed and we were unable to recover it.
00:31:05.849 [2024-04-24 05:26:42.962700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.962903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.962931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.849 qpair failed and we were unable to recover it.
00:31:05.849 [2024-04-24 05:26:42.963086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.963325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.963354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.849 qpair failed and we were unable to recover it.
00:31:05.849 [2024-04-24 05:26:42.963509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.963651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.963676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.849 qpair failed and we were unable to recover it.
00:31:05.849 [2024-04-24 05:26:42.963884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.964062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.964102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.849 qpair failed and we were unable to recover it.
00:31:05.849 [2024-04-24 05:26:42.964236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.964450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.964475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.849 qpair failed and we were unable to recover it.
00:31:05.849 [2024-04-24 05:26:42.964686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.964875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.964904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.849 qpair failed and we were unable to recover it.
00:31:05.849 [2024-04-24 05:26:42.965101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.965383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.965435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.849 qpair failed and we were unable to recover it.
00:31:05.849 [2024-04-24 05:26:42.965609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.965745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.965770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.849 qpair failed and we were unable to recover it.
00:31:05.849 [2024-04-24 05:26:42.965938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.966068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.966094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.849 qpair failed and we were unable to recover it.
00:31:05.849 [2024-04-24 05:26:42.966287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.966462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.966491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.849 qpair failed and we were unable to recover it.
00:31:05.849 [2024-04-24 05:26:42.966673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.966819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.966844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.849 qpair failed and we were unable to recover it.
00:31:05.849 [2024-04-24 05:26:42.967010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.967155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.967180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.849 qpair failed and we were unable to recover it.
00:31:05.849 [2024-04-24 05:26:42.967331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.967460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.967486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.849 qpair failed and we were unable to recover it.
00:31:05.849 [2024-04-24 05:26:42.967614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.967858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.967887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.849 qpair failed and we were unable to recover it.
00:31:05.849 [2024-04-24 05:26:42.968040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.968218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.968243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.849 qpair failed and we were unable to recover it.
00:31:05.849 [2024-04-24 05:26:42.968386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.968506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.968533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.849 qpair failed and we were unable to recover it.
00:31:05.849 [2024-04-24 05:26:42.968738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.849 [2024-04-24 05:26:42.968874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.850 [2024-04-24 05:26:42.968901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.850 qpair failed and we were unable to recover it.
00:31:05.850 [2024-04-24 05:26:42.969058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.850 [2024-04-24 05:26:42.969224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.850 [2024-04-24 05:26:42.969253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.850 qpair failed and we were unable to recover it.
00:31:05.850 [2024-04-24 05:26:42.969487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.850 [2024-04-24 05:26:42.969683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.850 [2024-04-24 05:26:42.969753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.850 qpair failed and we were unable to recover it.
00:31:05.850 [2024-04-24 05:26:42.969968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.850 [2024-04-24 05:26:42.970168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.850 [2024-04-24 05:26:42.970196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.850 qpair failed and we were unable to recover it.
00:31:05.850 [2024-04-24 05:26:42.970363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.850 [2024-04-24 05:26:42.970529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.850 [2024-04-24 05:26:42.970558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.850 qpair failed and we were unable to recover it.
00:31:05.850 [2024-04-24 05:26:42.970735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.850 [2024-04-24 05:26:42.970910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.850 [2024-04-24 05:26:42.970950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:05.850 qpair failed and we were unable to recover it.
00:31:05.850 [2024-04-24 05:26:42.971134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.850 [2024-04-24 05:26:42.971481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.850 [2024-04-24 05:26:42.971531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.850 qpair failed and we were unable to recover it. 00:31:05.850 [2024-04-24 05:26:42.971673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.850 [2024-04-24 05:26:42.971873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.850 [2024-04-24 05:26:42.971899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.850 qpair failed and we were unable to recover it. 00:31:05.850 [2024-04-24 05:26:42.972108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.850 [2024-04-24 05:26:42.972303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.850 [2024-04-24 05:26:42.972329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.850 qpair failed and we were unable to recover it. 00:31:05.850 [2024-04-24 05:26:42.972456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.850 [2024-04-24 05:26:42.972619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.850 [2024-04-24 05:26:42.972653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.850 qpair failed and we were unable to recover it. 
00:31:05.850 [2024-04-24 05:26:42.972867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.850 [2024-04-24 05:26:42.973030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.850 [2024-04-24 05:26:42.973056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.850 qpair failed and we were unable to recover it. 00:31:05.850 [2024-04-24 05:26:42.973197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.850 [2024-04-24 05:26:42.973349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.850 [2024-04-24 05:26:42.973378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.850 qpair failed and we were unable to recover it. 00:31:05.850 [2024-04-24 05:26:42.973585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.850 [2024-04-24 05:26:42.973744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.850 [2024-04-24 05:26:42.973771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.850 qpair failed and we were unable to recover it. 00:31:05.850 [2024-04-24 05:26:42.973947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.850 [2024-04-24 05:26:42.974184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.850 [2024-04-24 05:26:42.974237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.850 qpair failed and we were unable to recover it. 
00:31:05.850 [2024-04-24 05:26:42.974381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.850 [2024-04-24 05:26:42.974534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.850 [2024-04-24 05:26:42.974559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.850 qpair failed and we were unable to recover it. 00:31:05.850 [2024-04-24 05:26:42.974739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.850 [2024-04-24 05:26:42.974929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.850 [2024-04-24 05:26:42.974957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.850 qpair failed and we were unable to recover it. 00:31:05.850 [2024-04-24 05:26:42.975182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.850 [2024-04-24 05:26:42.975344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.850 [2024-04-24 05:26:42.975386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.850 qpair failed and we were unable to recover it. 00:31:05.850 [2024-04-24 05:26:42.975549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.850 [2024-04-24 05:26:42.975679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.850 [2024-04-24 05:26:42.975708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.850 qpair failed and we were unable to recover it. 
00:31:05.850 [2024-04-24 05:26:42.975856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.850 [2024-04-24 05:26:42.976024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.850 [2024-04-24 05:26:42.976050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.850 qpair failed and we were unable to recover it. 00:31:05.850 [2024-04-24 05:26:42.976230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.850 [2024-04-24 05:26:42.976470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.850 [2024-04-24 05:26:42.976498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.850 qpair failed and we were unable to recover it. 00:31:05.850 [2024-04-24 05:26:42.976688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.850 [2024-04-24 05:26:42.976860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.850 [2024-04-24 05:26:42.976889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.850 qpair failed and we were unable to recover it. 00:31:05.850 [2024-04-24 05:26:42.977077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.850 [2024-04-24 05:26:42.977273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.850 [2024-04-24 05:26:42.977325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.850 qpair failed and we were unable to recover it. 
00:31:05.850 [2024-04-24 05:26:42.977514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.850 [2024-04-24 05:26:42.977720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.850 [2024-04-24 05:26:42.977770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.850 qpair failed and we were unable to recover it. 00:31:05.850 [2024-04-24 05:26:42.977911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.850 [2024-04-24 05:26:42.978110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.850 [2024-04-24 05:26:42.978138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.850 qpair failed and we were unable to recover it. 00:31:05.850 [2024-04-24 05:26:42.978301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.850 [2024-04-24 05:26:42.978441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.850 [2024-04-24 05:26:42.978469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.850 qpair failed and we were unable to recover it. 00:31:05.850 [2024-04-24 05:26:42.978607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.978781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.978810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.851 qpair failed and we were unable to recover it. 
00:31:05.851 [2024-04-24 05:26:42.978964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.979116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.979142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.851 qpair failed and we were unable to recover it. 00:31:05.851 [2024-04-24 05:26:42.979345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.979482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.979511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.851 qpair failed and we were unable to recover it. 00:31:05.851 [2024-04-24 05:26:42.979675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.979932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.979961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.851 qpair failed and we were unable to recover it. 00:31:05.851 [2024-04-24 05:26:42.980721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.980933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.980963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.851 qpair failed and we were unable to recover it. 
00:31:05.851 [2024-04-24 05:26:42.981141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.981367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.981393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.851 qpair failed and we were unable to recover it. 00:31:05.851 [2024-04-24 05:26:42.981571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.981766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.981795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.851 qpair failed and we were unable to recover it. 00:31:05.851 [2024-04-24 05:26:42.981991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.982261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.982312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.851 qpair failed and we were unable to recover it. 00:31:05.851 [2024-04-24 05:26:42.982503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.982678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.982707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.851 qpair failed and we were unable to recover it. 
00:31:05.851 [2024-04-24 05:26:42.982901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.983110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.983135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.851 qpair failed and we were unable to recover it. 00:31:05.851 [2024-04-24 05:26:42.983322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.983493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.983522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.851 qpair failed and we were unable to recover it. 00:31:05.851 [2024-04-24 05:26:42.983715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.983850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.983878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.851 qpair failed and we were unable to recover it. 00:31:05.851 [2024-04-24 05:26:42.984029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.984215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.984243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.851 qpair failed and we were unable to recover it. 
00:31:05.851 [2024-04-24 05:26:42.984445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.984626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.984661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.851 qpair failed and we were unable to recover it. 00:31:05.851 [2024-04-24 05:26:42.984827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.985025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.985053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.851 qpair failed and we were unable to recover it. 00:31:05.851 [2024-04-24 05:26:42.985235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.985360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.985400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.851 qpair failed and we were unable to recover it. 00:31:05.851 [2024-04-24 05:26:42.985526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.985689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.985716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.851 qpair failed and we were unable to recover it. 
00:31:05.851 [2024-04-24 05:26:42.985893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.986079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.986108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.851 qpair failed and we were unable to recover it. 00:31:05.851 [2024-04-24 05:26:42.986347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.986508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.986542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.851 qpair failed and we were unable to recover it. 00:31:05.851 [2024-04-24 05:26:42.986696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.986895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.986936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.851 qpair failed and we were unable to recover it. 00:31:05.851 [2024-04-24 05:26:42.987143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.987297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.987330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.851 qpair failed and we were unable to recover it. 
00:31:05.851 [2024-04-24 05:26:42.987485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.987697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.987727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.851 qpair failed and we were unable to recover it. 00:31:05.851 [2024-04-24 05:26:42.987899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.988184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.988212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.851 qpair failed and we were unable to recover it. 00:31:05.851 [2024-04-24 05:26:42.988419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.988662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.988704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.851 qpair failed and we were unable to recover it. 00:31:05.851 [2024-04-24 05:26:42.988851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.989052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.989078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.851 qpair failed and we were unable to recover it. 
00:31:05.851 [2024-04-24 05:26:42.989253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.989414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.989442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.851 qpair failed and we were unable to recover it. 00:31:05.851 [2024-04-24 05:26:42.989650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.989817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.989847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.851 qpair failed and we were unable to recover it. 00:31:05.851 [2024-04-24 05:26:42.990039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.990314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.990377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.851 qpair failed and we were unable to recover it. 00:31:05.851 [2024-04-24 05:26:42.990538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.851 [2024-04-24 05:26:42.990716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:42.990750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.852 qpair failed and we were unable to recover it. 
00:31:05.852 [2024-04-24 05:26:42.990918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:42.991195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:42.991248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.852 qpair failed and we were unable to recover it. 00:31:05.852 [2024-04-24 05:26:42.991564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:42.991804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:42.991833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.852 qpair failed and we were unable to recover it. 00:31:05.852 [2024-04-24 05:26:42.992021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:42.992260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:42.992321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.852 qpair failed and we were unable to recover it. 00:31:05.852 [2024-04-24 05:26:42.992525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:42.992727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:42.992756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.852 qpair failed and we were unable to recover it. 
00:31:05.852 [2024-04-24 05:26:42.992920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:42.993244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:42.993293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.852 qpair failed and we were unable to recover it. 00:31:05.852 [2024-04-24 05:26:42.993487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:42.993689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:42.993718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.852 qpair failed and we were unable to recover it. 00:31:05.852 [2024-04-24 05:26:42.993857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:42.994025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:42.994053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.852 qpair failed and we were unable to recover it. 00:31:05.852 [2024-04-24 05:26:42.994200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:42.994324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:42.994349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.852 qpair failed and we were unable to recover it. 
00:31:05.852 [2024-04-24 05:26:42.994478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:42.994642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:42.994672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.852 qpair failed and we were unable to recover it. 00:31:05.852 [2024-04-24 05:26:42.994846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:42.995004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:42.995034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.852 qpair failed and we were unable to recover it. 00:31:05.852 [2024-04-24 05:26:42.995304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:42.995528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:42.995557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.852 qpair failed and we were unable to recover it. 00:31:05.852 [2024-04-24 05:26:42.995765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:42.995957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:42.995997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.852 qpair failed and we were unable to recover it. 
00:31:05.852 [2024-04-24 05:26:42.996137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:42.996310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:42.996338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.852 qpair failed and we were unable to recover it. 00:31:05.852 [2024-04-24 05:26:42.996500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:42.996646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:42.996676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.852 qpair failed and we were unable to recover it. 00:31:05.852 [2024-04-24 05:26:42.996844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:42.997045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:42.997073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.852 qpair failed and we were unable to recover it. 00:31:05.852 [2024-04-24 05:26:42.997242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:42.997396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:42.997422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.852 qpair failed and we were unable to recover it. 
00:31:05.852 [2024-04-24 05:26:42.997574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:42.997772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:42.997798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.852 qpair failed and we were unable to recover it. 00:31:05.852 [2024-04-24 05:26:42.997920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:42.998152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:42.998177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.852 qpair failed and we were unable to recover it. 00:31:05.852 [2024-04-24 05:26:42.998329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:42.998523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:42.998551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.852 qpair failed and we were unable to recover it. 00:31:05.852 [2024-04-24 05:26:42.998702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:42.998884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:42.998945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.852 qpair failed and we were unable to recover it. 
00:31:05.852 [2024-04-24 05:26:42.999156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:42.999350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:42.999378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.852 qpair failed and we were unable to recover it. 00:31:05.852 [2024-04-24 05:26:42.999543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:42.999777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:42.999804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.852 qpair failed and we were unable to recover it. 00:31:05.852 [2024-04-24 05:26:42.999950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:43.000167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:43.000231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.852 qpair failed and we were unable to recover it. 00:31:05.852 [2024-04-24 05:26:43.000410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:43.000589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:43.000615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.852 qpair failed and we were unable to recover it. 
00:31:05.852 [2024-04-24 05:26:43.000775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:43.000906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:43.000945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.852 qpair failed and we were unable to recover it. 00:31:05.852 [2024-04-24 05:26:43.001223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:43.001590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:43.001650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.852 qpair failed and we were unable to recover it. 00:31:05.852 [2024-04-24 05:26:43.001855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:43.002070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:43.002120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.852 qpair failed and we were unable to recover it. 00:31:05.852 [2024-04-24 05:26:43.002285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:43.002438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:43.002463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.852 qpair failed and we were unable to recover it. 
00:31:05.852 [2024-04-24 05:26:43.002665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.852 [2024-04-24 05:26:43.002858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.002886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.853 qpair failed and we were unable to recover it. 00:31:05.853 [2024-04-24 05:26:43.003219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.003522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.003573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.853 qpair failed and we were unable to recover it. 00:31:05.853 [2024-04-24 05:26:43.003793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.004010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.004061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.853 qpair failed and we were unable to recover it. 00:31:05.853 [2024-04-24 05:26:43.004264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.004467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.004521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.853 qpair failed and we were unable to recover it. 
00:31:05.853 [2024-04-24 05:26:43.004716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.004880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.004908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.853 qpair failed and we were unable to recover it. 00:31:05.853 [2024-04-24 05:26:43.005145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.005452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.005502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.853 qpair failed and we were unable to recover it. 00:31:05.853 [2024-04-24 05:26:43.005668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.005836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.005865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.853 qpair failed and we were unable to recover it. 00:31:05.853 [2024-04-24 05:26:43.006038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.006192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.006236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.853 qpair failed and we were unable to recover it. 
00:31:05.853 [2024-04-24 05:26:43.006400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.006564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.006593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:05.853 qpair failed and we were unable to recover it. 00:31:05.853 [2024-04-24 05:26:43.006734] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f9970 is same with the state(5) to be set 00:31:05.853 [2024-04-24 05:26:43.006943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.007148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.007180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.853 qpair failed and we were unable to recover it. 00:31:05.853 [2024-04-24 05:26:43.007391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.007558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.007617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.853 qpair failed and we were unable to recover it. 
00:31:05.853 [2024-04-24 05:26:43.007795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.007997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.008027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.853 qpair failed and we were unable to recover it. 00:31:05.853 [2024-04-24 05:26:43.008222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.008421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.008447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.853 qpair failed and we were unable to recover it. 00:31:05.853 [2024-04-24 05:26:43.008623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.008775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.008801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.853 qpair failed and we were unable to recover it. 00:31:05.853 [2024-04-24 05:26:43.008995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.009173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.009200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.853 qpair failed and we were unable to recover it. 
00:31:05.853 [2024-04-24 05:26:43.009414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.009622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.009653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.853 qpair failed and we were unable to recover it. 00:31:05.853 [2024-04-24 05:26:43.009785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.009916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.009945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.853 qpair failed and we were unable to recover it. 00:31:05.853 [2024-04-24 05:26:43.010088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.010252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.010281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.853 qpair failed and we were unable to recover it. 00:31:05.853 [2024-04-24 05:26:43.010548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.010743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.010773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.853 qpair failed and we were unable to recover it. 
00:31:05.853 [2024-04-24 05:26:43.010972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.011130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.011159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.853 qpair failed and we were unable to recover it. 00:31:05.853 [2024-04-24 05:26:43.011332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.011484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.011510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.853 qpair failed and we were unable to recover it. 00:31:05.853 [2024-04-24 05:26:43.011664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.011854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.011888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.853 qpair failed and we were unable to recover it. 00:31:05.853 [2024-04-24 05:26:43.012050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.012271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.012323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.853 qpair failed and we were unable to recover it. 
00:31:05.853 [2024-04-24 05:26:43.012508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.012663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.012691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.853 qpair failed and we were unable to recover it. 00:31:05.853 [2024-04-24 05:26:43.012869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.013070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.013099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.853 qpair failed and we were unable to recover it. 00:31:05.853 [2024-04-24 05:26:43.013290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.013450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.013479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.853 qpair failed and we were unable to recover it. 00:31:05.853 [2024-04-24 05:26:43.013657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.013811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.013837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.853 qpair failed and we were unable to recover it. 
00:31:05.853 [2024-04-24 05:26:43.013997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.014148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.014190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.853 qpair failed and we were unable to recover it. 00:31:05.853 [2024-04-24 05:26:43.014383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.853 [2024-04-24 05:26:43.014555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.014585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.854 qpair failed and we were unable to recover it. 00:31:05.854 [2024-04-24 05:26:43.014771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.014947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.014990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.854 qpair failed and we were unable to recover it. 00:31:05.854 [2024-04-24 05:26:43.015145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.015330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.015356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.854 qpair failed and we were unable to recover it. 
00:31:05.854 [2024-04-24 05:26:43.015531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.015699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.015735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.854 qpair failed and we were unable to recover it. 00:31:05.854 [2024-04-24 05:26:43.015907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.016116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.016144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.854 qpair failed and we were unable to recover it. 00:31:05.854 [2024-04-24 05:26:43.016321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.016469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.016495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.854 qpair failed and we were unable to recover it. 00:31:05.854 [2024-04-24 05:26:43.016643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.016820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.016864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.854 qpair failed and we were unable to recover it. 
00:31:05.854 [2024-04-24 05:26:43.017032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.017228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.017257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.854 qpair failed and we were unable to recover it. 00:31:05.854 [2024-04-24 05:26:43.017391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.017554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.017584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.854 qpair failed and we were unable to recover it. 00:31:05.854 [2024-04-24 05:26:43.017767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.017965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.017994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.854 qpair failed and we were unable to recover it. 00:31:05.854 [2024-04-24 05:26:43.018129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.018272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.018302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.854 qpair failed and we were unable to recover it. 
00:31:05.854 [2024-04-24 05:26:43.018491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.018658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.018688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.854 qpair failed and we were unable to recover it. 00:31:05.854 [2024-04-24 05:26:43.018859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.019027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.019056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.854 qpair failed and we were unable to recover it. 00:31:05.854 [2024-04-24 05:26:43.019247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.019424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.019454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.854 qpair failed and we were unable to recover it. 00:31:05.854 [2024-04-24 05:26:43.019608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.019830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.019859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.854 qpair failed and we were unable to recover it. 
00:31:05.854 [2024-04-24 05:26:43.020012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.020167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.020193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.854 qpair failed and we were unable to recover it. 00:31:05.854 [2024-04-24 05:26:43.020337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.020529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.020558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.854 qpair failed and we were unable to recover it. 00:31:05.854 [2024-04-24 05:26:43.020761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.020922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.020951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.854 qpair failed and we were unable to recover it. 00:31:05.854 [2024-04-24 05:26:43.021121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.021327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.021356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.854 qpair failed and we were unable to recover it. 
00:31:05.854 [2024-04-24 05:26:43.021546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.021712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.021741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.854 qpair failed and we were unable to recover it. 00:31:05.854 [2024-04-24 05:26:43.021910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.022059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.022089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.854 qpair failed and we were unable to recover it. 00:31:05.854 [2024-04-24 05:26:43.022268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.022390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.022417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.854 qpair failed and we were unable to recover it. 00:31:05.854 [2024-04-24 05:26:43.022580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.022785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.022815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.854 qpair failed and we were unable to recover it. 
00:31:05.854 [2024-04-24 05:26:43.022990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.023124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.023158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.854 qpair failed and we were unable to recover it. 00:31:05.854 [2024-04-24 05:26:43.023366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.023531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.023560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.854 qpair failed and we were unable to recover it. 00:31:05.854 [2024-04-24 05:26:43.023736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.023904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.023946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.854 qpair failed and we were unable to recover it. 00:31:05.854 [2024-04-24 05:26:43.024111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.024299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.024329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.854 qpair failed and we were unable to recover it. 
00:31:05.854 [2024-04-24 05:26:43.024527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.024731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.024760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.854 qpair failed and we were unable to recover it. 00:31:05.854 [2024-04-24 05:26:43.024898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.025079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.854 [2024-04-24 05:26:43.025108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.854 qpair failed and we were unable to recover it. 00:31:05.854 [2024-04-24 05:26:43.025271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.855 [2024-04-24 05:26:43.025406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.855 [2024-04-24 05:26:43.025434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.855 qpair failed and we were unable to recover it. 00:31:05.855 [2024-04-24 05:26:43.025602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.855 [2024-04-24 05:26:43.025821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.855 [2024-04-24 05:26:43.025851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.855 qpair failed and we were unable to recover it. 
00:31:05.855 [2024-04-24 05:26:43.026037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.855 [2024-04-24 05:26:43.026189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.855 [2024-04-24 05:26:43.026215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.855 qpair failed and we were unable to recover it. 00:31:05.855 [2024-04-24 05:26:43.026390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.855 [2024-04-24 05:26:43.026522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.855 [2024-04-24 05:26:43.026550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.855 qpair failed and we were unable to recover it. 00:31:05.855 [2024-04-24 05:26:43.026713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.855 [2024-04-24 05:26:43.026864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.855 [2024-04-24 05:26:43.026891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.855 qpair failed and we were unable to recover it. 00:31:05.855 [2024-04-24 05:26:43.027066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.855 [2024-04-24 05:26:43.027262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.855 [2024-04-24 05:26:43.027291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.855 qpair failed and we were unable to recover it. 
00:31:05.858 [2024-04-24 05:26:43.059418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.858 [2024-04-24 05:26:43.059581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.858 [2024-04-24 05:26:43.059611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.858 qpair failed and we were unable to recover it. 00:31:05.858 [2024-04-24 05:26:43.059793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.858 [2024-04-24 05:26:43.059953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.858 [2024-04-24 05:26:43.059996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.858 qpair failed and we were unable to recover it. 00:31:05.858 [2024-04-24 05:26:43.060183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.858 [2024-04-24 05:26:43.060376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.858 [2024-04-24 05:26:43.060404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.858 qpair failed and we were unable to recover it. 00:31:05.858 [2024-04-24 05:26:43.060598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.858 [2024-04-24 05:26:43.060761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.858 [2024-04-24 05:26:43.060790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.858 qpair failed and we were unable to recover it. 
00:31:05.858 [2024-04-24 05:26:43.060982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.858 [2024-04-24 05:26:43.061135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.858 [2024-04-24 05:26:43.061161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.858 qpair failed and we were unable to recover it. 00:31:05.858 [2024-04-24 05:26:43.061315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.858 [2024-04-24 05:26:43.061503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.858 [2024-04-24 05:26:43.061532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.858 qpair failed and we were unable to recover it. 00:31:05.858 [2024-04-24 05:26:43.061699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.858 [2024-04-24 05:26:43.061839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.858 [2024-04-24 05:26:43.061869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.858 qpair failed and we were unable to recover it. 00:31:05.858 [2024-04-24 05:26:43.062059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.858 [2024-04-24 05:26:43.062257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.858 [2024-04-24 05:26:43.062286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.858 qpair failed and we were unable to recover it. 
00:31:05.858 [2024-04-24 05:26:43.062445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.858 [2024-04-24 05:26:43.062648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.858 [2024-04-24 05:26:43.062678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.858 qpair failed and we were unable to recover it. 00:31:05.858 [2024-04-24 05:26:43.062848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.858 [2024-04-24 05:26:43.063009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.858 [2024-04-24 05:26:43.063038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.858 qpair failed and we were unable to recover it. 00:31:05.858 [2024-04-24 05:26:43.063233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.858 [2024-04-24 05:26:43.063385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.858 [2024-04-24 05:26:43.063411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.858 qpair failed and we were unable to recover it. 00:31:05.858 [2024-04-24 05:26:43.063625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.858 [2024-04-24 05:26:43.063770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.858 [2024-04-24 05:26:43.063799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.858 qpair failed and we were unable to recover it. 
00:31:05.858 [2024-04-24 05:26:43.063966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.858 [2024-04-24 05:26:43.064123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.858 [2024-04-24 05:26:43.064152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.858 qpair failed and we were unable to recover it. 00:31:05.858 [2024-04-24 05:26:43.064289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.858 [2024-04-24 05:26:43.064468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.858 [2024-04-24 05:26:43.064510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.858 qpair failed and we were unable to recover it. 00:31:05.858 [2024-04-24 05:26:43.064701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.858 [2024-04-24 05:26:43.064905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.858 [2024-04-24 05:26:43.064930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.858 qpair failed and we were unable to recover it. 00:31:05.858 [2024-04-24 05:26:43.065080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.858 [2024-04-24 05:26:43.065299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.858 [2024-04-24 05:26:43.065328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.858 qpair failed and we were unable to recover it. 
00:31:05.858 [2024-04-24 05:26:43.065499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.858 [2024-04-24 05:26:43.065694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.858 [2024-04-24 05:26:43.065724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.858 qpair failed and we were unable to recover it. 00:31:05.858 [2024-04-24 05:26:43.065858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.858 [2024-04-24 05:26:43.066004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.858 [2024-04-24 05:26:43.066032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.858 qpair failed and we were unable to recover it. 00:31:05.858 [2024-04-24 05:26:43.066231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.858 [2024-04-24 05:26:43.066430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.858 [2024-04-24 05:26:43.066459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.858 qpair failed and we were unable to recover it. 00:31:05.858 [2024-04-24 05:26:43.066634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.858 [2024-04-24 05:26:43.066822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.858 [2024-04-24 05:26:43.066852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.858 qpair failed and we were unable to recover it. 
00:31:05.859 [2024-04-24 05:26:43.067042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.067245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.067271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.859 qpair failed and we were unable to recover it. 00:31:05.859 [2024-04-24 05:26:43.067395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.067568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.067594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.859 qpair failed and we were unable to recover it. 00:31:05.859 [2024-04-24 05:26:43.067780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.067935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.067962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.859 qpair failed and we were unable to recover it. 00:31:05.859 [2024-04-24 05:26:43.068090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.068237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.068267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.859 qpair failed and we were unable to recover it. 
00:31:05.859 [2024-04-24 05:26:43.068400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.068526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.068555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.859 qpair failed and we were unable to recover it. 00:31:05.859 [2024-04-24 05:26:43.068727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.068895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.068925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.859 qpair failed and we were unable to recover it. 00:31:05.859 [2024-04-24 05:26:43.069098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.069269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.069298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.859 qpair failed and we were unable to recover it. 00:31:05.859 [2024-04-24 05:26:43.069426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.069597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.069625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.859 qpair failed and we were unable to recover it. 
00:31:05.859 [2024-04-24 05:26:43.069779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.069936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.069963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.859 qpair failed and we were unable to recover it. 00:31:05.859 [2024-04-24 05:26:43.070111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.070288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.070317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.859 qpair failed and we were unable to recover it. 00:31:05.859 [2024-04-24 05:26:43.070480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.070651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.070681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.859 qpair failed and we were unable to recover it. 00:31:05.859 [2024-04-24 05:26:43.070849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.071001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.071042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.859 qpair failed and we were unable to recover it. 
00:31:05.859 [2024-04-24 05:26:43.071232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.071433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.071462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.859 qpair failed and we were unable to recover it. 00:31:05.859 [2024-04-24 05:26:43.071619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.071792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.071821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.859 qpair failed and we were unable to recover it. 00:31:05.859 [2024-04-24 05:26:43.071991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.072184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.072213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.859 qpair failed and we were unable to recover it. 00:31:05.859 [2024-04-24 05:26:43.072380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.072573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.072606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.859 qpair failed and we were unable to recover it. 
00:31:05.859 [2024-04-24 05:26:43.072743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.072909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.072937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.859 qpair failed and we were unable to recover it. 00:31:05.859 [2024-04-24 05:26:43.073136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.073335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.073364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.859 qpair failed and we were unable to recover it. 00:31:05.859 [2024-04-24 05:26:43.073553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.073717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.073747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.859 qpair failed and we were unable to recover it. 00:31:05.859 [2024-04-24 05:26:43.073913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.074081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.074110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.859 qpair failed and we were unable to recover it. 
00:31:05.859 [2024-04-24 05:26:43.074281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.074433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.074459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.859 qpair failed and we were unable to recover it. 00:31:05.859 [2024-04-24 05:26:43.074637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.074814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.074843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.859 qpair failed and we were unable to recover it. 00:31:05.859 [2024-04-24 05:26:43.075030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.075220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.075248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.859 qpair failed and we were unable to recover it. 00:31:05.859 [2024-04-24 05:26:43.075395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.075521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.075547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.859 qpair failed and we were unable to recover it. 
00:31:05.859 [2024-04-24 05:26:43.075746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.075937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.859 [2024-04-24 05:26:43.075966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.859 qpair failed and we were unable to recover it. 00:31:05.860 [2024-04-24 05:26:43.076157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.860 [2024-04-24 05:26:43.076321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.860 [2024-04-24 05:26:43.076354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.860 qpair failed and we were unable to recover it. 00:31:05.860 [2024-04-24 05:26:43.076524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.860 [2024-04-24 05:26:43.076732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.860 [2024-04-24 05:26:43.076762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.860 qpair failed and we were unable to recover it. 00:31:05.860 [2024-04-24 05:26:43.076907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.860 [2024-04-24 05:26:43.077083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.860 [2024-04-24 05:26:43.077112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.860 qpair failed and we were unable to recover it. 
00:31:05.860 [2024-04-24 05:26:43.077272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.860 [2024-04-24 05:26:43.077472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.860 [2024-04-24 05:26:43.077500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.860 qpair failed and we were unable to recover it. 00:31:05.860 [2024-04-24 05:26:43.077677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.860 [2024-04-24 05:26:43.077843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.860 [2024-04-24 05:26:43.077872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.860 qpair failed and we were unable to recover it. 00:31:05.860 [2024-04-24 05:26:43.078063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.860 [2024-04-24 05:26:43.078251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.860 [2024-04-24 05:26:43.078279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.860 qpair failed and we were unable to recover it. 00:31:05.860 [2024-04-24 05:26:43.078419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.860 [2024-04-24 05:26:43.078610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.860 [2024-04-24 05:26:43.078655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.860 qpair failed and we were unable to recover it. 
00:31:05.860 [2024-04-24 05:26:43.078853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.860 [2024-04-24 05:26:43.079051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.860 [2024-04-24 05:26:43.079080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.860 qpair failed and we were unable to recover it. 00:31:05.860 [2024-04-24 05:26:43.079242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.860 [2024-04-24 05:26:43.079433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.860 [2024-04-24 05:26:43.079462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.860 qpair failed and we were unable to recover it. 00:31:05.860 [2024-04-24 05:26:43.079654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.860 [2024-04-24 05:26:43.079798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.860 [2024-04-24 05:26:43.079828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.860 qpair failed and we were unable to recover it. 00:31:05.860 [2024-04-24 05:26:43.080000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.860 [2024-04-24 05:26:43.080117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.860 [2024-04-24 05:26:43.080147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420 00:31:05.860 qpair failed and we were unable to recover it. 
00:31:05.860 [2024-04-24 05:26:43.080331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.860 [2024-04-24 05:26:43.080456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.860 [2024-04-24 05:26:43.080485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:05.860 qpair failed and we were unable to recover it.
[... the identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." sequence repeats continuously from 05:26:43.080 through 05:26:43.111, always against addr=10.0.0.2, port=4420; tqpair=0x7f6d54000b90 up to 05:26:43.100, then tqpair=0x7f6d64000b90 from 05:26:43.100358 onward ...]
00:31:06.139 [2024-04-24 05:26:43.111761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.139 [2024-04-24 05:26:43.111955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.139 [2024-04-24 05:26:43.111981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.139 qpair failed and we were unable to recover it. 00:31:06.139 [2024-04-24 05:26:43.112131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.139 [2024-04-24 05:26:43.112351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.139 [2024-04-24 05:26:43.112405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.139 qpair failed and we were unable to recover it. 00:31:06.139 [2024-04-24 05:26:43.112599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.139 [2024-04-24 05:26:43.112757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.139 [2024-04-24 05:26:43.112784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.139 qpair failed and we were unable to recover it. 00:31:06.139 [2024-04-24 05:26:43.112907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.139 [2024-04-24 05:26:43.113048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.139 [2024-04-24 05:26:43.113074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.139 qpair failed and we were unable to recover it. 
00:31:06.139 [2024-04-24 05:26:43.113248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.139 [2024-04-24 05:26:43.113367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.139 [2024-04-24 05:26:43.113393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.139 qpair failed and we were unable to recover it. 00:31:06.139 [2024-04-24 05:26:43.113530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.139 [2024-04-24 05:26:43.113698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.140 [2024-04-24 05:26:43.113727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.140 qpair failed and we were unable to recover it. 00:31:06.140 [2024-04-24 05:26:43.113874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.140 [2024-04-24 05:26:43.114002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.140 [2024-04-24 05:26:43.114029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.140 qpair failed and we were unable to recover it. 00:31:06.140 [2024-04-24 05:26:43.114212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.140 [2024-04-24 05:26:43.114340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.140 [2024-04-24 05:26:43.114367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.140 qpair failed and we were unable to recover it. 
00:31:06.140 [2024-04-24 05:26:43.114494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.114622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.114656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.140 qpair failed and we were unable to recover it.
00:31:06.140 [2024-04-24 05:26:43.114813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.114940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.114966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.140 qpair failed and we were unable to recover it.
00:31:06.140 [2024-04-24 05:26:43.115115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.115266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.115308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.140 qpair failed and we were unable to recover it.
00:31:06.140 [2024-04-24 05:26:43.115556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.115720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.115747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.140 qpair failed and we were unable to recover it.
00:31:06.140 [2024-04-24 05:26:43.115909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.116083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.116149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.140 qpair failed and we were unable to recover it.
00:31:06.140 [2024-04-24 05:26:43.116329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.116489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.116514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.140 qpair failed and we were unable to recover it.
00:31:06.140 [2024-04-24 05:26:43.116667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.116871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.116899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.140 qpair failed and we were unable to recover it.
00:31:06.140 [2024-04-24 05:26:43.117135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.117308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.117334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.140 qpair failed and we were unable to recover it.
00:31:06.140 [2024-04-24 05:26:43.117486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.117619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.117650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.140 qpair failed and we were unable to recover it.
00:31:06.140 [2024-04-24 05:26:43.117823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.118060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.118089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.140 qpair failed and we were unable to recover it.
00:31:06.140 [2024-04-24 05:26:43.118285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.118426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.118451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.140 qpair failed and we were unable to recover it.
00:31:06.140 [2024-04-24 05:26:43.118605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.118756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.118799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.140 qpair failed and we were unable to recover it.
00:31:06.140 [2024-04-24 05:26:43.118974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.119099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.119125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.140 qpair failed and we were unable to recover it.
00:31:06.140 [2024-04-24 05:26:43.119251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.119407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.119433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.140 qpair failed and we were unable to recover it.
00:31:06.140 [2024-04-24 05:26:43.119594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.119765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.119791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.140 qpair failed and we were unable to recover it.
00:31:06.140 [2024-04-24 05:26:43.120021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.120142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.120167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.140 qpair failed and we were unable to recover it.
00:31:06.140 [2024-04-24 05:26:43.120317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.120470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.120496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.140 qpair failed and we were unable to recover it.
00:31:06.140 [2024-04-24 05:26:43.120648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.120792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.120818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.140 qpair failed and we were unable to recover it.
00:31:06.140 [2024-04-24 05:26:43.120946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.121102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.121128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.140 qpair failed and we were unable to recover it.
00:31:06.140 [2024-04-24 05:26:43.121268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.121435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.121480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.140 qpair failed and we were unable to recover it.
00:31:06.140 [2024-04-24 05:26:43.121640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.121764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.121790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.140 qpair failed and we were unable to recover it.
00:31:06.140 [2024-04-24 05:26:43.122042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.122355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.122409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.140 qpair failed and we were unable to recover it.
00:31:06.140 [2024-04-24 05:26:43.122537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.122687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.122714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.140 qpair failed and we were unable to recover it.
00:31:06.140 [2024-04-24 05:26:43.122842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.123033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.123061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.140 qpair failed and we were unable to recover it.
00:31:06.140 [2024-04-24 05:26:43.123264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.123423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.123451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.140 qpair failed and we were unable to recover it.
00:31:06.140 [2024-04-24 05:26:43.123597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.123772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.123802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.140 qpair failed and we were unable to recover it.
00:31:06.140 [2024-04-24 05:26:43.123973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.140 [2024-04-24 05:26:43.124096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.124121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.141 qpair failed and we were unable to recover it.
00:31:06.141 [2024-04-24 05:26:43.124266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.124389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.124415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.141 qpair failed and we were unable to recover it.
00:31:06.141 [2024-04-24 05:26:43.124596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.124781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.124808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.141 qpair failed and we were unable to recover it.
00:31:06.141 [2024-04-24 05:26:43.124932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.125111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.125137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.141 qpair failed and we were unable to recover it.
00:31:06.141 [2024-04-24 05:26:43.125270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.125475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.125503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.141 qpair failed and we were unable to recover it.
00:31:06.141 [2024-04-24 05:26:43.125642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.125820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.125846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.141 qpair failed and we were unable to recover it.
00:31:06.141 [2024-04-24 05:26:43.125993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.126142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.126169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.141 qpair failed and we were unable to recover it.
00:31:06.141 [2024-04-24 05:26:43.126295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.126444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.126469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.141 qpair failed and we were unable to recover it.
00:31:06.141 [2024-04-24 05:26:43.126623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.126766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.126796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.141 qpair failed and we were unable to recover it.
00:31:06.141 [2024-04-24 05:26:43.126946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.127096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.127121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.141 qpair failed and we were unable to recover it.
00:31:06.141 [2024-04-24 05:26:43.127265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.127413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.127439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.141 qpair failed and we were unable to recover it.
00:31:06.141 [2024-04-24 05:26:43.127674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.127905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.127931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.141 qpair failed and we were unable to recover it.
00:31:06.141 [2024-04-24 05:26:43.128116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.128293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.128319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.141 qpair failed and we were unable to recover it.
00:31:06.141 [2024-04-24 05:26:43.128442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.128596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.128622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.141 qpair failed and we were unable to recover it.
00:31:06.141 [2024-04-24 05:26:43.128758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.128922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.128951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.141 qpair failed and we were unable to recover it.
00:31:06.141 [2024-04-24 05:26:43.129122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.129285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.129310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.141 qpair failed and we were unable to recover it.
00:31:06.141 [2024-04-24 05:26:43.129428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.129575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.129617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.141 qpair failed and we were unable to recover it.
00:31:06.141 [2024-04-24 05:26:43.129811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.129984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.130031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.141 qpair failed and we were unable to recover it.
00:31:06.141 [2024-04-24 05:26:43.130163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.130312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.130343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.141 qpair failed and we were unable to recover it.
00:31:06.141 [2024-04-24 05:26:43.130499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.130621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.130656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.141 qpair failed and we were unable to recover it.
00:31:06.141 [2024-04-24 05:26:43.130782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.130933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.130959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.141 qpair failed and we were unable to recover it.
00:31:06.141 [2024-04-24 05:26:43.131108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.131240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.131266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.141 qpair failed and we were unable to recover it.
00:31:06.141 [2024-04-24 05:26:43.131416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.131568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.131594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.141 qpair failed and we were unable to recover it.
00:31:06.141 [2024-04-24 05:26:43.131751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.131991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.132020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.141 qpair failed and we were unable to recover it.
00:31:06.141 [2024-04-24 05:26:43.132171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.132323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.132349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.141 qpair failed and we were unable to recover it.
00:31:06.141 [2024-04-24 05:26:43.132494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.132623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.132671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.141 qpair failed and we were unable to recover it.
00:31:06.141 [2024-04-24 05:26:43.132825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.132978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.133004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.141 qpair failed and we were unable to recover it.
00:31:06.141 [2024-04-24 05:26:43.133149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.133302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.133329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.141 qpair failed and we were unable to recover it.
00:31:06.141 [2024-04-24 05:26:43.133546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.133678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.141 [2024-04-24 05:26:43.133709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.141 qpair failed and we were unable to recover it.
00:31:06.141 [2024-04-24 05:26:43.133941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.142 [2024-04-24 05:26:43.134115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.142 [2024-04-24 05:26:43.134144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.142 qpair failed and we were unable to recover it.
00:31:06.142 [2024-04-24 05:26:43.134295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-24 05:26:43.134478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-24 05:26:43.134504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.142 qpair failed and we were unable to recover it. 00:31:06.142 [2024-04-24 05:26:43.134659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-24 05:26:43.134811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-24 05:26:43.134837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.142 qpair failed and we were unable to recover it. 00:31:06.142 [2024-04-24 05:26:43.135011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-24 05:26:43.135157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-24 05:26:43.135183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.142 qpair failed and we were unable to recover it. 00:31:06.142 [2024-04-24 05:26:43.135318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-24 05:26:43.135468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-24 05:26:43.135494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.142 qpair failed and we were unable to recover it. 
00:31:06.142 [2024-04-24 05:26:43.135616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-24 05:26:43.135791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-24 05:26:43.135820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.142 qpair failed and we were unable to recover it. 00:31:06.142 [2024-04-24 05:26:43.135987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-24 05:26:43.136159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-24 05:26:43.136185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.142 qpair failed and we were unable to recover it. 00:31:06.142 [2024-04-24 05:26:43.136336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-24 05:26:43.136485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-24 05:26:43.136511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.142 qpair failed and we were unable to recover it. 00:31:06.142 [2024-04-24 05:26:43.136669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-24 05:26:43.136885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-24 05:26:43.136929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.142 qpair failed and we were unable to recover it. 
00:31:06.142 [2024-04-24 05:26:43.137055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-24 05:26:43.137212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-24 05:26:43.137242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.142 qpair failed and we were unable to recover it. 00:31:06.142 [2024-04-24 05:26:43.137367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-24 05:26:43.137544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-24 05:26:43.137569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.142 qpair failed and we were unable to recover it. 00:31:06.142 [2024-04-24 05:26:43.137725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-24 05:26:43.137854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-24 05:26:43.137879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.142 qpair failed and we were unable to recover it. 00:31:06.142 [2024-04-24 05:26:43.138035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-24 05:26:43.138212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-24 05:26:43.138241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.142 qpair failed and we were unable to recover it. 
00:31:06.142 [2024-04-24 05:26:43.138407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-24 05:26:43.138543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-24 05:26:43.138569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.142 qpair failed and we were unable to recover it. 00:31:06.142 [2024-04-24 05:26:43.138700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-24 05:26:43.138831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-24 05:26:43.138858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.142 qpair failed and we were unable to recover it. 00:31:06.142 [2024-04-24 05:26:43.139026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-24 05:26:43.139197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-24 05:26:43.139223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.142 qpair failed and we were unable to recover it. 00:31:06.142 [2024-04-24 05:26:43.139372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-24 05:26:43.139557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-24 05:26:43.139583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.142 qpair failed and we were unable to recover it. 
00:31:06.142 [2024-04-24 05:26:43.139739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-24 05:26:43.139911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-24 05:26:43.139937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.142 qpair failed and we were unable to recover it. 00:31:06.142 [2024-04-24 05:26:43.140086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-24 05:26:43.140204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-24 05:26:43.140230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.142 qpair failed and we were unable to recover it. 00:31:06.142 [2024-04-24 05:26:43.140381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-24 05:26:43.140507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-24 05:26:43.140532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.142 qpair failed and we were unable to recover it. 00:31:06.142 [2024-04-24 05:26:43.140735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-24 05:26:43.140902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-24 05:26:43.140931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.142 qpair failed and we were unable to recover it. 
00:31:06.142 [2024-04-24 05:26:43.141070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-24 05:26:43.141197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-24 05:26:43.141222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.142 qpair failed and we were unable to recover it. 00:31:06.142 [2024-04-24 05:26:43.141370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-24 05:26:43.141493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-24 05:26:43.141518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.142 qpair failed and we were unable to recover it. 00:31:06.142 [2024-04-24 05:26:43.141669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-24 05:26:43.141797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.141823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.143 qpair failed and we were unable to recover it. 00:31:06.143 [2024-04-24 05:26:43.142073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.142286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.142311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.143 qpair failed and we were unable to recover it. 
00:31:06.143 [2024-04-24 05:26:43.142440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.142567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.142593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.143 qpair failed and we were unable to recover it. 00:31:06.143 [2024-04-24 05:26:43.142771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.142925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.142951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.143 qpair failed and we were unable to recover it. 00:31:06.143 [2024-04-24 05:26:43.143101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.143250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.143276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.143 qpair failed and we were unable to recover it. 00:31:06.143 [2024-04-24 05:26:43.143426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.143557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.143582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.143 qpair failed and we were unable to recover it. 
00:31:06.143 [2024-04-24 05:26:43.143710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.143834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.143860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.143 qpair failed and we were unable to recover it. 00:31:06.143 [2024-04-24 05:26:43.144096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.144216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.144241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.143 qpair failed and we were unable to recover it. 00:31:06.143 [2024-04-24 05:26:43.144390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.144537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.144563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.143 qpair failed and we were unable to recover it. 00:31:06.143 [2024-04-24 05:26:43.144728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.144903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.144929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.143 qpair failed and we were unable to recover it. 
00:31:06.143 [2024-04-24 05:26:43.145053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.145181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.145223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.143 qpair failed and we were unable to recover it. 00:31:06.143 [2024-04-24 05:26:43.145422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.145548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.145590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.143 qpair failed and we were unable to recover it. 00:31:06.143 [2024-04-24 05:26:43.145796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.145947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.145972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.143 qpair failed and we were unable to recover it. 00:31:06.143 [2024-04-24 05:26:43.146150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.146268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.146294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.143 qpair failed and we were unable to recover it. 
00:31:06.143 [2024-04-24 05:26:43.146481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.146657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.146687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.143 qpair failed and we were unable to recover it. 00:31:06.143 [2024-04-24 05:26:43.146875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.147054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.147079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.143 qpair failed and we were unable to recover it. 00:31:06.143 [2024-04-24 05:26:43.147234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.147385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.147411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.143 qpair failed and we were unable to recover it. 00:31:06.143 [2024-04-24 05:26:43.147551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.147671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.147699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.143 qpair failed and we were unable to recover it. 
00:31:06.143 [2024-04-24 05:26:43.147872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.148051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.148077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.143 qpair failed and we were unable to recover it. 00:31:06.143 [2024-04-24 05:26:43.148199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.148351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.148377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.143 qpair failed and we were unable to recover it. 00:31:06.143 [2024-04-24 05:26:43.148493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.148613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.148644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.143 qpair failed and we were unable to recover it. 00:31:06.143 [2024-04-24 05:26:43.148800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.148953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.148978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.143 qpair failed and we were unable to recover it. 
00:31:06.143 [2024-04-24 05:26:43.149129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.149299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.149327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.143 qpair failed and we were unable to recover it. 00:31:06.143 [2024-04-24 05:26:43.149472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.149599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.149625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.143 qpair failed and we were unable to recover it. 00:31:06.143 [2024-04-24 05:26:43.149757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.149935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.149961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.143 qpair failed and we were unable to recover it. 00:31:06.143 [2024-04-24 05:26:43.150115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.150265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.150290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.143 qpair failed and we were unable to recover it. 
00:31:06.143 [2024-04-24 05:26:43.150522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.150677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.150704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.143 qpair failed and we were unable to recover it. 00:31:06.143 [2024-04-24 05:26:43.150884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.151032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.151059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.143 qpair failed and we were unable to recover it. 00:31:06.143 [2024-04-24 05:26:43.151190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.143 [2024-04-24 05:26:43.151346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.144 [2024-04-24 05:26:43.151372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.144 qpair failed and we were unable to recover it. 00:31:06.144 [2024-04-24 05:26:43.151522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.144 [2024-04-24 05:26:43.151677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.144 [2024-04-24 05:26:43.151704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.144 qpair failed and we were unable to recover it. 
00:31:06.144 [2024-04-24 05:26:43.151832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.144 [2024-04-24 05:26:43.151977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.144 [2024-04-24 05:26:43.152002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.144 qpair failed and we were unable to recover it. 00:31:06.144 [2024-04-24 05:26:43.152152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.144 [2024-04-24 05:26:43.152273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.144 [2024-04-24 05:26:43.152299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.144 qpair failed and we were unable to recover it. 00:31:06.144 [2024-04-24 05:26:43.152475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.144 [2024-04-24 05:26:43.152642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.144 [2024-04-24 05:26:43.152669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.144 qpair failed and we were unable to recover it. 00:31:06.144 [2024-04-24 05:26:43.152794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.144 [2024-04-24 05:26:43.152920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.144 [2024-04-24 05:26:43.152947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.144 qpair failed and we were unable to recover it. 
00:31:06.144 [2024-04-24 05:26:43.153123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.144 [2024-04-24 05:26:43.153269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.144 [2024-04-24 05:26:43.153295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.144 qpair failed and we were unable to recover it. 00:31:06.144 [2024-04-24 05:26:43.153443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.144 [2024-04-24 05:26:43.153569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.144 [2024-04-24 05:26:43.153595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.144 qpair failed and we were unable to recover it. 00:31:06.144 [2024-04-24 05:26:43.153768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.144 [2024-04-24 05:26:43.153909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.144 [2024-04-24 05:26:43.153935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.144 qpair failed and we were unable to recover it. 00:31:06.144 [2024-04-24 05:26:43.154092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.144 [2024-04-24 05:26:43.154239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.144 [2024-04-24 05:26:43.154265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.144 qpair failed and we were unable to recover it. 
00:31:06.144 [2024-04-24 05:26:43.154410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.144 [2024-04-24 05:26:43.154532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.144 [2024-04-24 05:26:43.154558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.144 qpair failed and we were unable to recover it. 00:31:06.144 [2024-04-24 05:26:43.154709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.144 [2024-04-24 05:26:43.154863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.144 [2024-04-24 05:26:43.154889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.144 qpair failed and we were unable to recover it. 00:31:06.144 [2024-04-24 05:26:43.155065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.144 [2024-04-24 05:26:43.155249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.144 [2024-04-24 05:26:43.155276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.144 qpair failed and we were unable to recover it. 00:31:06.144 [2024-04-24 05:26:43.155410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.144 [2024-04-24 05:26:43.155529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.144 [2024-04-24 05:26:43.155554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.144 qpair failed and we were unable to recover it. 
00:31:06.144 [2024-04-24 05:26:43.155730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.144 [2024-04-24 05:26:43.155859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.144 [2024-04-24 05:26:43.155885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.144 qpair failed and we were unable to recover it. 00:31:06.144 [2024-04-24 05:26:43.156065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.144 [2024-04-24 05:26:43.156251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.144 [2024-04-24 05:26:43.156276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.144 qpair failed and we were unable to recover it. 00:31:06.144 [2024-04-24 05:26:43.156426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.144 [2024-04-24 05:26:43.156570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.144 [2024-04-24 05:26:43.156596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.144 qpair failed and we were unable to recover it. 00:31:06.144 [2024-04-24 05:26:43.156780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.144 [2024-04-24 05:26:43.156937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.144 [2024-04-24 05:26:43.156963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.144 qpair failed and we were unable to recover it. 
00:31:06.147 [2024-04-24 05:26:43.185123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.147 [2024-04-24 05:26:43.185275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.147 [2024-04-24 05:26:43.185301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.147 qpair failed and we were unable to recover it. 00:31:06.147 [2024-04-24 05:26:43.185423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.147 [2024-04-24 05:26:43.185568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.147 [2024-04-24 05:26:43.185594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.147 qpair failed and we were unable to recover it. 00:31:06.147 [2024-04-24 05:26:43.185750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.147 [2024-04-24 05:26:43.185865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.147 [2024-04-24 05:26:43.185890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.147 qpair failed and we were unable to recover it. 00:31:06.147 [2024-04-24 05:26:43.186008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.147 [2024-04-24 05:26:43.186152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.147 [2024-04-24 05:26:43.186180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.147 qpair failed and we were unable to recover it. 
00:31:06.147 [2024-04-24 05:26:43.186381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.147 [2024-04-24 05:26:43.186500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.147 [2024-04-24 05:26:43.186525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.147 qpair failed and we were unable to recover it. 00:31:06.147 [2024-04-24 05:26:43.186674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.147 [2024-04-24 05:26:43.186823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.147 [2024-04-24 05:26:43.186848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.147 qpair failed and we were unable to recover it. 00:31:06.147 [2024-04-24 05:26:43.186996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.147 [2024-04-24 05:26:43.187115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.147 [2024-04-24 05:26:43.187141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.147 qpair failed and we were unable to recover it. 00:31:06.147 [2024-04-24 05:26:43.187261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.147 [2024-04-24 05:26:43.187390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.147 [2024-04-24 05:26:43.187421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.147 qpair failed and we were unable to recover it. 
00:31:06.147 [2024-04-24 05:26:43.187574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.147 [2024-04-24 05:26:43.187740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.147 [2024-04-24 05:26:43.187766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.147 qpair failed and we were unable to recover it. 00:31:06.147 [2024-04-24 05:26:43.187887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.188038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.188064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.148 qpair failed and we were unable to recover it. 00:31:06.148 [2024-04-24 05:26:43.188215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.188369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.188394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.148 qpair failed and we were unable to recover it. 00:31:06.148 [2024-04-24 05:26:43.188562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.188742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.188768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.148 qpair failed and we were unable to recover it. 
00:31:06.148 [2024-04-24 05:26:43.188921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.189040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.189065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.148 qpair failed and we were unable to recover it. 00:31:06.148 [2024-04-24 05:26:43.189220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.189366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.189391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.148 qpair failed and we were unable to recover it. 00:31:06.148 [2024-04-24 05:26:43.189562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.189728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.189754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.148 qpair failed and we were unable to recover it. 00:31:06.148 [2024-04-24 05:26:43.189878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.190026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.190051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.148 qpair failed and we were unable to recover it. 
00:31:06.148 [2024-04-24 05:26:43.190196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.190346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.190373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.148 qpair failed and we were unable to recover it. 00:31:06.148 [2024-04-24 05:26:43.190498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.190651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.190682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.148 qpair failed and we were unable to recover it. 00:31:06.148 [2024-04-24 05:26:43.190847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.190998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.191023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.148 qpair failed and we were unable to recover it. 00:31:06.148 [2024-04-24 05:26:43.191169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.191332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.191357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.148 qpair failed and we were unable to recover it. 
00:31:06.148 [2024-04-24 05:26:43.191478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.191656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.191682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.148 qpair failed and we were unable to recover it. 00:31:06.148 [2024-04-24 05:26:43.191835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.191955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.191981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.148 qpair failed and we were unable to recover it. 00:31:06.148 [2024-04-24 05:26:43.192103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.192230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.192256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.148 qpair failed and we were unable to recover it. 00:31:06.148 [2024-04-24 05:26:43.192402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.192543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.192568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.148 qpair failed and we were unable to recover it. 
00:31:06.148 [2024-04-24 05:26:43.192743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.192869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.192895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.148 qpair failed and we were unable to recover it. 00:31:06.148 [2024-04-24 05:26:43.193060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.193227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.193255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.148 qpair failed and we were unable to recover it. 00:31:06.148 [2024-04-24 05:26:43.193402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.193517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.193543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.148 qpair failed and we were unable to recover it. 00:31:06.148 [2024-04-24 05:26:43.193725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.193879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.193905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.148 qpair failed and we were unable to recover it. 
00:31:06.148 [2024-04-24 05:26:43.194028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.194207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.194233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.148 qpair failed and we were unable to recover it. 00:31:06.148 [2024-04-24 05:26:43.194379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.194536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.194564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.148 qpair failed and we were unable to recover it. 00:31:06.148 [2024-04-24 05:26:43.194736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.194893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.194919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.148 qpair failed and we were unable to recover it. 00:31:06.148 [2024-04-24 05:26:43.195097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.195216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.195242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.148 qpair failed and we were unable to recover it. 
00:31:06.148 [2024-04-24 05:26:43.195395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.195526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.195551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.148 qpair failed and we were unable to recover it. 00:31:06.148 [2024-04-24 05:26:43.195722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.195887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.195916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.148 qpair failed and we were unable to recover it. 00:31:06.148 [2024-04-24 05:26:43.196087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.196272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.196298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.148 qpair failed and we were unable to recover it. 00:31:06.148 [2024-04-24 05:26:43.196451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.196605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.196643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.148 qpair failed and we were unable to recover it. 
00:31:06.148 [2024-04-24 05:26:43.196770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.196893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.196919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.148 qpair failed and we were unable to recover it. 00:31:06.148 [2024-04-24 05:26:43.197067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.197222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.148 [2024-04-24 05:26:43.197248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.148 qpair failed and we were unable to recover it. 00:31:06.148 [2024-04-24 05:26:43.197447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.197605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.197638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.149 qpair failed and we were unable to recover it. 00:31:06.149 [2024-04-24 05:26:43.197792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.197946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.197971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.149 qpair failed and we were unable to recover it. 
00:31:06.149 [2024-04-24 05:26:43.198117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.198294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.198320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.149 qpair failed and we were unable to recover it. 00:31:06.149 [2024-04-24 05:26:43.198458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.198635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.198679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.149 qpair failed and we were unable to recover it. 00:31:06.149 [2024-04-24 05:26:43.198829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.199004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.199032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.149 qpair failed and we were unable to recover it. 00:31:06.149 [2024-04-24 05:26:43.199199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.199353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.199396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.149 qpair failed and we were unable to recover it. 
00:31:06.149 [2024-04-24 05:26:43.199560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.199750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.199780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.149 qpair failed and we were unable to recover it. 00:31:06.149 [2024-04-24 05:26:43.199946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.200120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.200148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.149 qpair failed and we were unable to recover it. 00:31:06.149 [2024-04-24 05:26:43.200299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.200451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.200477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.149 qpair failed and we were unable to recover it. 00:31:06.149 [2024-04-24 05:26:43.200679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.200865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.200893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.149 qpair failed and we were unable to recover it. 
00:31:06.149 [2024-04-24 05:26:43.201062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.201222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.201251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.149 qpair failed and we were unable to recover it. 00:31:06.149 [2024-04-24 05:26:43.201419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.201566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.201607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.149 qpair failed and we were unable to recover it. 00:31:06.149 [2024-04-24 05:26:43.201808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.201957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.201999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.149 qpair failed and we were unable to recover it. 00:31:06.149 [2024-04-24 05:26:43.202139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.202334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.202362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.149 qpair failed and we were unable to recover it. 
00:31:06.149 [2024-04-24 05:26:43.202535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.202709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.202753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.149 qpair failed and we were unable to recover it. 00:31:06.149 [2024-04-24 05:26:43.202952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.203144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.203198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.149 qpair failed and we were unable to recover it. 00:31:06.149 [2024-04-24 05:26:43.203358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.203491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.203521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.149 qpair failed and we were unable to recover it. 00:31:06.149 [2024-04-24 05:26:43.203718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.203917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.203945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.149 qpair failed and we were unable to recover it. 
00:31:06.149 [2024-04-24 05:26:43.204114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.204271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.204300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.149 qpair failed and we were unable to recover it. 00:31:06.149 [2024-04-24 05:26:43.204498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.204694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.204723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.149 qpair failed and we were unable to recover it. 00:31:06.149 [2024-04-24 05:26:43.204899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.205096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.205172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.149 qpair failed and we were unable to recover it. 00:31:06.149 [2024-04-24 05:26:43.205311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.205500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.205530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.149 qpair failed and we were unable to recover it. 
00:31:06.149 [2024-04-24 05:26:43.205693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.205822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.205850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.149 qpair failed and we were unable to recover it. 00:31:06.149 [2024-04-24 05:26:43.206051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.206196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.206238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.149 qpair failed and we were unable to recover it. 00:31:06.149 [2024-04-24 05:26:43.206371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.206534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.206563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.149 qpair failed and we were unable to recover it. 00:31:06.149 [2024-04-24 05:26:43.206757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.206889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.206917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.149 qpair failed and we were unable to recover it. 
00:31:06.149 [2024-04-24 05:26:43.207096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.207271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.207299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.149 qpair failed and we were unable to recover it. 00:31:06.149 [2024-04-24 05:26:43.207425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.207549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.207578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.149 qpair failed and we were unable to recover it. 00:31:06.149 [2024-04-24 05:26:43.207716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.207908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.149 [2024-04-24 05:26:43.207937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.149 qpair failed and we were unable to recover it. 00:31:06.150 [2024-04-24 05:26:43.208104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.208247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.208288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.150 qpair failed and we were unable to recover it. 
00:31:06.150 [2024-04-24 05:26:43.208459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.208669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.208715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.150 qpair failed and we were unable to recover it. 00:31:06.150 [2024-04-24 05:26:43.208880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.209049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.209079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.150 qpair failed and we were unable to recover it. 00:31:06.150 [2024-04-24 05:26:43.209257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.209457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.209486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.150 qpair failed and we were unable to recover it. 00:31:06.150 [2024-04-24 05:26:43.209675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.209861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.209890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.150 qpair failed and we were unable to recover it. 
00:31:06.150 [2024-04-24 05:26:43.210077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.210243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.210273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.150 qpair failed and we were unable to recover it. 00:31:06.150 [2024-04-24 05:26:43.210468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.210638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.210669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.150 qpair failed and we were unable to recover it. 00:31:06.150 [2024-04-24 05:26:43.210872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.211138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.211189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.150 qpair failed and we were unable to recover it. 00:31:06.150 [2024-04-24 05:26:43.211360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.211535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.211563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.150 qpair failed and we were unable to recover it. 
00:31:06.150 [2024-04-24 05:26:43.211755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.211883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.211909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.150 qpair failed and we were unable to recover it. 00:31:06.150 [2024-04-24 05:26:43.212073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.212246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.212275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.150 qpair failed and we were unable to recover it. 00:31:06.150 [2024-04-24 05:26:43.212449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.212658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.212696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.150 qpair failed and we were unable to recover it. 00:31:06.150 [2024-04-24 05:26:43.212817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.212970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.212996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.150 qpair failed and we were unable to recover it. 
00:31:06.150 [2024-04-24 05:26:43.213204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.213435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.213491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.150 qpair failed and we were unable to recover it. 00:31:06.150 [2024-04-24 05:26:43.213659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.213827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.213853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.150 qpair failed and we were unable to recover it. 00:31:06.150 [2024-04-24 05:26:43.214003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.214154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.214198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.150 qpair failed and we were unable to recover it. 00:31:06.150 [2024-04-24 05:26:43.214365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.214531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.214559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.150 qpair failed and we were unable to recover it. 
00:31:06.150 [2024-04-24 05:26:43.214718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.214888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.214919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.150 qpair failed and we were unable to recover it. 00:31:06.150 [2024-04-24 05:26:43.215086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.215236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.215278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.150 qpair failed and we were unable to recover it. 00:31:06.150 [2024-04-24 05:26:43.215449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.215613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.215648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.150 qpair failed and we were unable to recover it. 00:31:06.150 [2024-04-24 05:26:43.215814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.216059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.216112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.150 qpair failed and we were unable to recover it. 
00:31:06.150 [2024-04-24 05:26:43.216307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.216476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.216504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.150 qpair failed and we were unable to recover it. 00:31:06.150 [2024-04-24 05:26:43.216668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.216861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.216914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.150 qpair failed and we were unable to recover it. 00:31:06.150 [2024-04-24 05:26:43.217091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.217288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.217341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.150 qpair failed and we were unable to recover it. 00:31:06.150 [2024-04-24 05:26:43.217512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.217662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.217688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.150 qpair failed and we were unable to recover it. 
00:31:06.150 [2024-04-24 05:26:43.217896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.218045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.218072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.150 qpair failed and we were unable to recover it. 00:31:06.150 [2024-04-24 05:26:43.218239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.218431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.218459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.150 qpair failed and we were unable to recover it. 00:31:06.150 [2024-04-24 05:26:43.218632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.218749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.218791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.150 qpair failed and we were unable to recover it. 00:31:06.150 [2024-04-24 05:26:43.218982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.219146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.150 [2024-04-24 05:26:43.219176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.150 qpair failed and we were unable to recover it. 
00:31:06.151 [2024-04-24 05:26:43.219340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.219506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.219535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.151 qpair failed and we were unable to recover it. 00:31:06.151 [2024-04-24 05:26:43.219711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.219839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.219866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.151 qpair failed and we were unable to recover it. 00:31:06.151 [2024-04-24 05:26:43.220040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.220226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.220277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.151 qpair failed and we were unable to recover it. 00:31:06.151 [2024-04-24 05:26:43.220467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.220646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.220673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.151 qpair failed and we were unable to recover it. 
00:31:06.151 [2024-04-24 05:26:43.220826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.220948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.220974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.151 qpair failed and we were unable to recover it. 00:31:06.151 [2024-04-24 05:26:43.221123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.221377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.221435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.151 qpair failed and we were unable to recover it. 00:31:06.151 [2024-04-24 05:26:43.221609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.221804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.221830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.151 qpair failed and we were unable to recover it. 00:31:06.151 [2024-04-24 05:26:43.221982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.222163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.222238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.151 qpair failed and we were unable to recover it. 
00:31:06.151 [2024-04-24 05:26:43.222433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.222594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.222623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.151 qpair failed and we were unable to recover it. 00:31:06.151 [2024-04-24 05:26:43.222798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.222941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.222970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.151 qpair failed and we were unable to recover it. 00:31:06.151 [2024-04-24 05:26:43.223111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.223239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.223266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.151 qpair failed and we were unable to recover it. 00:31:06.151 [2024-04-24 05:26:43.223450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.223597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.223623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.151 qpair failed and we were unable to recover it. 
00:31:06.151 [2024-04-24 05:26:43.223845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.224102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.224158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.151 qpair failed and we were unable to recover it. 00:31:06.151 [2024-04-24 05:26:43.224354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.224475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.224501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.151 qpair failed and we were unable to recover it. 00:31:06.151 [2024-04-24 05:26:43.224682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.224822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.224850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.151 qpair failed and we were unable to recover it. 00:31:06.151 [2024-04-24 05:26:43.225045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.225198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.225240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.151 qpair failed and we were unable to recover it. 
00:31:06.151 [2024-04-24 05:26:43.225410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.225553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.225595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.151 qpair failed and we were unable to recover it. 00:31:06.151 [2024-04-24 05:26:43.225766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.225932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.225958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.151 qpair failed and we were unable to recover it. 00:31:06.151 [2024-04-24 05:26:43.226165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.226355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.226383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.151 qpair failed and we were unable to recover it. 00:31:06.151 [2024-04-24 05:26:43.226538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.226721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.226763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.151 qpair failed and we were unable to recover it. 
00:31:06.151 [2024-04-24 05:26:43.226949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.227145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.227173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.151 qpair failed and we were unable to recover it. 00:31:06.151 [2024-04-24 05:26:43.227364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.227496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.227524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.151 qpair failed and we were unable to recover it. 00:31:06.151 [2024-04-24 05:26:43.227698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.227851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.227894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.151 qpair failed and we were unable to recover it. 00:31:06.151 [2024-04-24 05:26:43.228084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.228240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.228269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.151 qpair failed and we were unable to recover it. 
00:31:06.151 [2024-04-24 05:26:43.228442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.228569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.228594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.151 qpair failed and we were unable to recover it. 00:31:06.151 [2024-04-24 05:26:43.228747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.151 [2024-04-24 05:26:43.228905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.152 [2024-04-24 05:26:43.228931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.152 qpair failed and we were unable to recover it. 00:31:06.152 [2024-04-24 05:26:43.229106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.152 [2024-04-24 05:26:43.229298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.152 [2024-04-24 05:26:43.229327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.152 qpair failed and we were unable to recover it. 00:31:06.152 [2024-04-24 05:26:43.229494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.152 [2024-04-24 05:26:43.229668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.152 [2024-04-24 05:26:43.229695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.152 qpair failed and we were unable to recover it. 
00:31:06.152 [2024-04-24 05:26:43.229851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.152 [2024-04-24 05:26:43.230004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.152 [2024-04-24 05:26:43.230030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.152 qpair failed and we were unable to recover it. 00:31:06.152 [2024-04-24 05:26:43.230151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.152 [2024-04-24 05:26:43.230345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.152 [2024-04-24 05:26:43.230373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.152 qpair failed and we were unable to recover it. 00:31:06.152 [2024-04-24 05:26:43.230575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.152 [2024-04-24 05:26:43.230710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.152 [2024-04-24 05:26:43.230736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.152 qpair failed and we were unable to recover it. 00:31:06.152 [2024-04-24 05:26:43.230887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.152 [2024-04-24 05:26:43.231033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.152 [2024-04-24 05:26:43.231059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.152 qpair failed and we were unable to recover it. 
00:31:06.152 [2024-04-24 05:26:43.231211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.152 [2024-04-24 05:26:43.231392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.152 [2024-04-24 05:26:43.231421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.152 qpair failed and we were unable to recover it. 00:31:06.152 [2024-04-24 05:26:43.231595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.152 [2024-04-24 05:26:43.231805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.152 [2024-04-24 05:26:43.231833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.152 qpair failed and we were unable to recover it. 00:31:06.152 [2024-04-24 05:26:43.231978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.152 [2024-04-24 05:26:43.232100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.152 [2024-04-24 05:26:43.232126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.152 qpair failed and we were unable to recover it. 00:31:06.152 [2024-04-24 05:26:43.232286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.152 [2024-04-24 05:26:43.232432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.152 [2024-04-24 05:26:43.232457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.152 qpair failed and we were unable to recover it. 
00:31:06.152 [2024-04-24 05:26:43.232606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.152 [2024-04-24 05:26:43.232792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.152 [2024-04-24 05:26:43.232820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.152 qpair failed and we were unable to recover it. 00:31:06.152 [2024-04-24 05:26:43.232992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.152 [2024-04-24 05:26:43.233230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.152 [2024-04-24 05:26:43.233260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.152 qpair failed and we were unable to recover it. 00:31:06.152 [2024-04-24 05:26:43.233452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.152 [2024-04-24 05:26:43.233612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.152 [2024-04-24 05:26:43.233648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.152 qpair failed and we were unable to recover it. 00:31:06.152 [2024-04-24 05:26:43.233820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.152 [2024-04-24 05:26:43.234008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.152 [2024-04-24 05:26:43.234036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:06.152 qpair failed and we were unable to recover it. 
00:31:06.152 [2024-04-24 05:26:43.234204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.152 [2024-04-24 05:26:43.234377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.152 [2024-04-24 05:26:43.234402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.152 qpair failed and we were unable to recover it.
00:31:06.152 [2024-04-24 05:26:43.234530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.152 [2024-04-24 05:26:43.234714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.152 [2024-04-24 05:26:43.234740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.152 qpair failed and we were unable to recover it.
00:31:06.152 [2024-04-24 05:26:43.234907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.152 [2024-04-24 05:26:43.235087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.152 [2024-04-24 05:26:43.235115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.152 qpair failed and we were unable to recover it.
00:31:06.152 [2024-04-24 05:26:43.235301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.152 [2024-04-24 05:26:43.235447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.152 [2024-04-24 05:26:43.235473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.152 qpair failed and we were unable to recover it.
00:31:06.152 [2024-04-24 05:26:43.235648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.152 [2024-04-24 05:26:43.235830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.152 [2024-04-24 05:26:43.235856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.152 qpair failed and we were unable to recover it.
00:31:06.152 [2024-04-24 05:26:43.235972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.152 [2024-04-24 05:26:43.236171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.152 [2024-04-24 05:26:43.236199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.152 qpair failed and we were unable to recover it.
00:31:06.152 [2024-04-24 05:26:43.236340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.152 [2024-04-24 05:26:43.236464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.152 [2024-04-24 05:26:43.236491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.152 qpair failed and we were unable to recover it.
00:31:06.152 [2024-04-24 05:26:43.236647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.152 [2024-04-24 05:26:43.236782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.152 [2024-04-24 05:26:43.236811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.152 qpair failed and we were unable to recover it.
00:31:06.152 [2024-04-24 05:26:43.237005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.152 [2024-04-24 05:26:43.237202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.152 [2024-04-24 05:26:43.237255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.152 qpair failed and we were unable to recover it.
00:31:06.152 [2024-04-24 05:26:43.237446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.152 [2024-04-24 05:26:43.237603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.152 [2024-04-24 05:26:43.237633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.152 qpair failed and we were unable to recover it.
00:31:06.152 [2024-04-24 05:26:43.237810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.152 [2024-04-24 05:26:43.238043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.152 [2024-04-24 05:26:43.238092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.152 qpair failed and we were unable to recover it.
00:31:06.152 [2024-04-24 05:26:43.238258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.152 [2024-04-24 05:26:43.238424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.152 [2024-04-24 05:26:43.238453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.152 qpair failed and we were unable to recover it.
00:31:06.152 [2024-04-24 05:26:43.238696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.152 [2024-04-24 05:26:43.238822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.152 [2024-04-24 05:26:43.238854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.152 qpair failed and we were unable to recover it.
00:31:06.152 [2024-04-24 05:26:43.239035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.152 [2024-04-24 05:26:43.239232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.152 [2024-04-24 05:26:43.239282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.153 qpair failed and we were unable to recover it.
00:31:06.153 [2024-04-24 05:26:43.239473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.239672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.239729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.153 qpair failed and we were unable to recover it.
00:31:06.153 [2024-04-24 05:26:43.239887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.240029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.240054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.153 qpair failed and we were unable to recover it.
00:31:06.153 [2024-04-24 05:26:43.240205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.240354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.240381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.153 qpair failed and we were unable to recover it.
00:31:06.153 [2024-04-24 05:26:43.240561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.240730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.240759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.153 qpair failed and we were unable to recover it.
00:31:06.153 [2024-04-24 05:26:43.240961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.241107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.241133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.153 qpair failed and we were unable to recover it.
00:31:06.153 [2024-04-24 05:26:43.241264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.241434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.241463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.153 qpair failed and we were unable to recover it.
00:31:06.153 [2024-04-24 05:26:43.241637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.241830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.241858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.153 qpair failed and we were unable to recover it.
00:31:06.153 [2024-04-24 05:26:43.242025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.242179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.242204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.153 qpair failed and we were unable to recover it.
00:31:06.153 [2024-04-24 05:26:43.242362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.242531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.242565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.153 qpair failed and we were unable to recover it.
00:31:06.153 [2024-04-24 05:26:43.242734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.242971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.243022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.153 qpair failed and we were unable to recover it.
00:31:06.153 [2024-04-24 05:26:43.243195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.243449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.243506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.153 qpair failed and we were unable to recover it.
00:31:06.153 [2024-04-24 05:26:43.243651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.243822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.243852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.153 qpair failed and we were unable to recover it.
00:31:06.153 [2024-04-24 05:26:43.244041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.244265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.244292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.153 qpair failed and we were unable to recover it.
00:31:06.153 [2024-04-24 05:26:43.244484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.244708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.244737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.153 qpair failed and we were unable to recover it.
00:31:06.153 [2024-04-24 05:26:43.244927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.245128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.245154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.153 qpair failed and we were unable to recover it.
00:31:06.153 [2024-04-24 05:26:43.245296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.245471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.245497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.153 qpair failed and we were unable to recover it.
00:31:06.153 [2024-04-24 05:26:43.245667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.245792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.245821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.153 qpair failed and we were unable to recover it.
00:31:06.153 [2024-04-24 05:26:43.245975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.246212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.246261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.153 qpair failed and we were unable to recover it.
00:31:06.153 [2024-04-24 05:26:43.246426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.246595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.246626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.153 qpair failed and we were unable to recover it.
00:31:06.153 [2024-04-24 05:26:43.246791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.246973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.247035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.153 qpair failed and we were unable to recover it.
00:31:06.153 [2024-04-24 05:26:43.247226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.247364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.247392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.153 qpair failed and we were unable to recover it.
00:31:06.153 [2024-04-24 05:26:43.247560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.247754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.247784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.153 qpair failed and we were unable to recover it.
00:31:06.153 [2024-04-24 05:26:43.247961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.248111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.248137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.153 qpair failed and we were unable to recover it.
00:31:06.153 [2024-04-24 05:26:43.248250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.248399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.248424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.153 qpair failed and we were unable to recover it.
00:31:06.153 [2024-04-24 05:26:43.248544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.248723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.248749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.153 qpair failed and we were unable to recover it.
00:31:06.153 [2024-04-24 05:26:43.248925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.249049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.249092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.153 qpair failed and we were unable to recover it.
00:31:06.153 [2024-04-24 05:26:43.249230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.249396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.249424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.153 qpair failed and we were unable to recover it.
00:31:06.153 [2024-04-24 05:26:43.249620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.249741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.249768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.153 qpair failed and we were unable to recover it.
00:31:06.153 [2024-04-24 05:26:43.249912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.153 [2024-04-24 05:26:43.250042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.250087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.154 qpair failed and we were unable to recover it.
00:31:06.154 [2024-04-24 05:26:43.250282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.250447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.250476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.154 qpair failed and we were unable to recover it.
00:31:06.154 [2024-04-24 05:26:43.250625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.250777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.250804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.154 qpair failed and we were unable to recover it.
00:31:06.154 [2024-04-24 05:26:43.250958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.251107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.251148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.154 qpair failed and we were unable to recover it.
00:31:06.154 [2024-04-24 05:26:43.251362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.251514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.251541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.154 qpair failed and we were unable to recover it.
00:31:06.154 [2024-04-24 05:26:43.251706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.251867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.251896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.154 qpair failed and we were unable to recover it.
00:31:06.154 [2024-04-24 05:26:43.252064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.252208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.252250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.154 qpair failed and we were unable to recover it.
00:31:06.154 [2024-04-24 05:26:43.252422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.252611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.252648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.154 qpair failed and we were unable to recover it.
00:31:06.154 [2024-04-24 05:26:43.252810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.252959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.252986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.154 qpair failed and we were unable to recover it.
00:31:06.154 [2024-04-24 05:26:43.253138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.253286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.253329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.154 qpair failed and we were unable to recover it.
00:31:06.154 [2024-04-24 05:26:43.253520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.253742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.253794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.154 qpair failed and we were unable to recover it.
00:31:06.154 [2024-04-24 05:26:43.253994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.254132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.254161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.154 qpair failed and we were unable to recover it.
00:31:06.154 [2024-04-24 05:26:43.254304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.254456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.254482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.154 qpair failed and we were unable to recover it.
00:31:06.154 [2024-04-24 05:26:43.254668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.254860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.254889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.154 qpair failed and we were unable to recover it.
00:31:06.154 [2024-04-24 05:26:43.255019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.255182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.255211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.154 qpair failed and we were unable to recover it.
00:31:06.154 [2024-04-24 05:26:43.255408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.255555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.255580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.154 qpair failed and we were unable to recover it.
00:31:06.154 [2024-04-24 05:26:43.255728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.255848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.255875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.154 qpair failed and we were unable to recover it.
00:31:06.154 [2024-04-24 05:26:43.256073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.256343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.256395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.154 qpair failed and we were unable to recover it.
00:31:06.154 [2024-04-24 05:26:43.256541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.256697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.256723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.154 qpair failed and we were unable to recover it.
00:31:06.154 [2024-04-24 05:26:43.256914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.257159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.257184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.154 qpair failed and we were unable to recover it.
00:31:06.154 [2024-04-24 05:26:43.257361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.257536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.257565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.154 qpair failed and we were unable to recover it.
00:31:06.154 [2024-04-24 05:26:43.257774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.257946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.258016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.154 qpair failed and we were unable to recover it.
00:31:06.154 [2024-04-24 05:26:43.258168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.258345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.258371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.154 qpair failed and we were unable to recover it.
00:31:06.154 [2024-04-24 05:26:43.258541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.258715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.258741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.154 qpair failed and we were unable to recover it.
00:31:06.154 [2024-04-24 05:26:43.258870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.259030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.259056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.154 qpair failed and we were unable to recover it.
00:31:06.154 [2024-04-24 05:26:43.259231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.259407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.259450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.154 qpair failed and we were unable to recover it.
00:31:06.154 [2024-04-24 05:26:43.259654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.259826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.259854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.154 qpair failed and we were unable to recover it.
00:31:06.154 [2024-04-24 05:26:43.260023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.260165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.260207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.154 qpair failed and we were unable to recover it.
00:31:06.154 [2024-04-24 05:26:43.260371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.260566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.154 [2024-04-24 05:26:43.260594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.154 qpair failed and we were unable to recover it.
00:31:06.154 [2024-04-24 05:26:43.260773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.260899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.260925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.155 qpair failed and we were unable to recover it.
00:31:06.155 [2024-04-24 05:26:43.261076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.261226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.261252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.155 qpair failed and we were unable to recover it.
00:31:06.155 [2024-04-24 05:26:43.261450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.261615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.261651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.155 qpair failed and we were unable to recover it.
00:31:06.155 [2024-04-24 05:26:43.261851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.261994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.262022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.155 qpair failed and we were unable to recover it.
00:31:06.155 [2024-04-24 05:26:43.262213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.262342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.262369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.155 qpair failed and we were unable to recover it.
00:31:06.155 [2024-04-24 05:26:43.262521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.262673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.262717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.155 qpair failed and we were unable to recover it.
00:31:06.155 [2024-04-24 05:26:43.262887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.263105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.263154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.155 qpair failed and we were unable to recover it.
00:31:06.155 [2024-04-24 05:26:43.263352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.263521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.263549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.155 qpair failed and we were unable to recover it.
00:31:06.155 [2024-04-24 05:26:43.263714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.263879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.263908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.155 qpair failed and we were unable to recover it.
00:31:06.155 [2024-04-24 05:26:43.264077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.264253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.264282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.155 qpair failed and we were unable to recover it.
00:31:06.155 [2024-04-24 05:26:43.264457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.264655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.264685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.155 qpair failed and we were unable to recover it.
00:31:06.155 [2024-04-24 05:26:43.264840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.265025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.265079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.155 qpair failed and we were unable to recover it.
00:31:06.155 [2024-04-24 05:26:43.265225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.265418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.265447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.155 qpair failed and we were unable to recover it.
00:31:06.155 [2024-04-24 05:26:43.265600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.265759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.265786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.155 qpair failed and we were unable to recover it.
00:31:06.155 [2024-04-24 05:26:43.265963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.266091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.266120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.155 qpair failed and we were unable to recover it.
00:31:06.155 [2024-04-24 05:26:43.266283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.266479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.266508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.155 qpair failed and we were unable to recover it.
00:31:06.155 [2024-04-24 05:26:43.266705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.266897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.266925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.155 qpair failed and we were unable to recover it.
00:31:06.155 [2024-04-24 05:26:43.267089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.267255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.267283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.155 qpair failed and we were unable to recover it.
00:31:06.155 [2024-04-24 05:26:43.267448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.267649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.267679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.155 qpair failed and we were unable to recover it.
00:31:06.155 [2024-04-24 05:26:43.267851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.268003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.268044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.155 qpair failed and we were unable to recover it.
00:31:06.155 [2024-04-24 05:26:43.268238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.268406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.268434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.155 qpair failed and we were unable to recover it.
00:31:06.155 [2024-04-24 05:26:43.268597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.268743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.268774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.155 qpair failed and we were unable to recover it.
00:31:06.155 [2024-04-24 05:26:43.268968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.269094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.269120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.155 qpair failed and we were unable to recover it.
00:31:06.155 [2024-04-24 05:26:43.269294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.269438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.269466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.155 qpair failed and we were unable to recover it.
00:31:06.155 [2024-04-24 05:26:43.269638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.269799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.269828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.155 qpair failed and we were unable to recover it.
00:31:06.155 [2024-04-24 05:26:43.270001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.270148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.270190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.155 qpair failed and we were unable to recover it.
00:31:06.155 [2024-04-24 05:26:43.270328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.270487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.270515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.155 qpair failed and we were unable to recover it.
00:31:06.155 [2024-04-24 05:26:43.270679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.270848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.270873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.155 qpair failed and we were unable to recover it.
00:31:06.155 [2024-04-24 05:26:43.271023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.271175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.155 [2024-04-24 05:26:43.271218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.155 qpair failed and we were unable to recover it.
00:31:06.156 [2024-04-24 05:26:43.271407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.271597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.271625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.156 qpair failed and we were unable to recover it.
00:31:06.156 [2024-04-24 05:26:43.271797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.271937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.271965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.156 qpair failed and we were unable to recover it.
00:31:06.156 [2024-04-24 05:26:43.272141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.272337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.272366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.156 qpair failed and we were unable to recover it.
00:31:06.156 [2024-04-24 05:26:43.272540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.272688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.272715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.156 qpair failed and we were unable to recover it.
00:31:06.156 [2024-04-24 05:26:43.272885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.273042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.273071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.156 qpair failed and we were unable to recover it.
00:31:06.156 [2024-04-24 05:26:43.273272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.273398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.273425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.156 qpair failed and we were unable to recover it.
00:31:06.156 [2024-04-24 05:26:43.273602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.273784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.273813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.156 qpair failed and we were unable to recover it.
00:31:06.156 [2024-04-24 05:26:43.273977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.274174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.274237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.156 qpair failed and we were unable to recover it.
00:31:06.156 [2024-04-24 05:26:43.274385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.274537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.274562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.156 qpair failed and we were unable to recover it.
00:31:06.156 [2024-04-24 05:26:43.274724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.274902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.274928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.156 qpair failed and we were unable to recover it.
00:31:06.156 [2024-04-24 05:26:43.275144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.275273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.275299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.156 qpair failed and we were unable to recover it.
00:31:06.156 [2024-04-24 05:26:43.275454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.275624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.275669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.156 qpair failed and we were unable to recover it.
00:31:06.156 [2024-04-24 05:26:43.275826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.275978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.276004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.156 qpair failed and we were unable to recover it.
00:31:06.156 [2024-04-24 05:26:43.276158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.276358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.276401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.156 qpair failed and we were unable to recover it.
00:31:06.156 [2024-04-24 05:26:43.276601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.276755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.276782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.156 qpair failed and we were unable to recover it.
00:31:06.156 [2024-04-24 05:26:43.276907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.277109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.277151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.156 qpair failed and we were unable to recover it.
00:31:06.156 [2024-04-24 05:26:43.277308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.277488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.277514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.156 qpair failed and we were unable to recover it.
00:31:06.156 [2024-04-24 05:26:43.277693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.277863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.277905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.156 qpair failed and we were unable to recover it.
00:31:06.156 [2024-04-24 05:26:43.278079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.278302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.278344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.156 qpair failed and we were unable to recover it.
00:31:06.156 [2024-04-24 05:26:43.278492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.278694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.278737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.156 qpair failed and we were unable to recover it.
00:31:06.156 [2024-04-24 05:26:43.278921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.279144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.279188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.156 qpair failed and we were unable to recover it.
00:31:06.156 [2024-04-24 05:26:43.279332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.279500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.279527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.156 qpair failed and we were unable to recover it.
00:31:06.156 [2024-04-24 05:26:43.279658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.279824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.279867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.156 qpair failed and we were unable to recover it.
00:31:06.156 [2024-04-24 05:26:43.280071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.280258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.280301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.156 qpair failed and we were unable to recover it.
00:31:06.156 [2024-04-24 05:26:43.280452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.156 [2024-04-24 05:26:43.280597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.280623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.157 qpair failed and we were unable to recover it.
00:31:06.157 [2024-04-24 05:26:43.280808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.280994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.281039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.157 qpair failed and we were unable to recover it.
00:31:06.157 [2024-04-24 05:26:43.281210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.281429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.281458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.157 qpair failed and we were unable to recover it.
00:31:06.157 [2024-04-24 05:26:43.281625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.281754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.281780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.157 qpair failed and we were unable to recover it.
00:31:06.157 [2024-04-24 05:26:43.281982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.282167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.282210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.157 qpair failed and we were unable to recover it.
00:31:06.157 [2024-04-24 05:26:43.282358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.282528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.282554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.157 qpair failed and we were unable to recover it.
00:31:06.157 [2024-04-24 05:26:43.282728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.282957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.283000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.157 qpair failed and we were unable to recover it.
00:31:06.157 [2024-04-24 05:26:43.283201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.283494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.283544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.157 qpair failed and we were unable to recover it.
00:31:06.157 [2024-04-24 05:26:43.283715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.283910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.283954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.157 qpair failed and we were unable to recover it.
00:31:06.157 [2024-04-24 05:26:43.284165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.284361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.284405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.157 qpair failed and we were unable to recover it.
00:31:06.157 [2024-04-24 05:26:43.284583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.284752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.284796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.157 qpair failed and we were unable to recover it.
00:31:06.157 [2024-04-24 05:26:43.285000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.285167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.285211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.157 qpair failed and we were unable to recover it.
00:31:06.157 [2024-04-24 05:26:43.285414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.285555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.285580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.157 qpair failed and we were unable to recover it.
00:31:06.157 [2024-04-24 05:26:43.285755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.285940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.285986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.157 qpair failed and we were unable to recover it.
00:31:06.157 [2024-04-24 05:26:43.286187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.286375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.286418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.157 qpair failed and we were unable to recover it.
00:31:06.157 [2024-04-24 05:26:43.286542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.286716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.286763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.157 qpair failed and we were unable to recover it.
00:31:06.157 [2024-04-24 05:26:43.286913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.287099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.287143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.157 qpair failed and we were unable to recover it.
00:31:06.157 [2024-04-24 05:26:43.287316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.287488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.287514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.157 qpair failed and we were unable to recover it.
00:31:06.157 [2024-04-24 05:26:43.287642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.287844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.287873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.157 qpair failed and we were unable to recover it.
00:31:06.157 [2024-04-24 05:26:43.288064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.288232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.288275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.157 qpair failed and we were unable to recover it.
00:31:06.157 [2024-04-24 05:26:43.288429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.288608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.288644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.157 qpair failed and we were unable to recover it.
00:31:06.157 [2024-04-24 05:26:43.288781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.288977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.289020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.157 qpair failed and we were unable to recover it.
00:31:06.157 [2024-04-24 05:26:43.289172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.289356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.289385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.157 qpair failed and we were unable to recover it.
00:31:06.157 [2024-04-24 05:26:43.289587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.289752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.289780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.157 qpair failed and we were unable to recover it.
00:31:06.157 [2024-04-24 05:26:43.289927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.290116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.290159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.157 qpair failed and we were unable to recover it.
00:31:06.157 [2024-04-24 05:26:43.290309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.290478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.290505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.157 qpair failed and we were unable to recover it.
00:31:06.157 [2024-04-24 05:26:43.290702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.290886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.290929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.157 qpair failed and we were unable to recover it.
00:31:06.157 [2024-04-24 05:26:43.291103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.291291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.291335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.157 qpair failed and we were unable to recover it.
00:31:06.157 [2024-04-24 05:26:43.291509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.291684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.157 [2024-04-24 05:26:43.291713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.157 qpair failed and we were unable to recover it.
00:31:06.157 [2024-04-24 05:26:43.291962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.292150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.292193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.158 qpair failed and we were unable to recover it.
00:31:06.158 [2024-04-24 05:26:43.292344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.292467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.292494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.158 qpair failed and we were unable to recover it.
00:31:06.158 [2024-04-24 05:26:43.292649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.292816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.292860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.158 qpair failed and we were unable to recover it.
00:31:06.158 [2024-04-24 05:26:43.293028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.293212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.293240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.158 qpair failed and we were unable to recover it.
00:31:06.158 [2024-04-24 05:26:43.293378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.293554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.293579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.158 qpair failed and we were unable to recover it.
00:31:06.158 [2024-04-24 05:26:43.293761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.293952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.293994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.158 qpair failed and we were unable to recover it.
00:31:06.158 [2024-04-24 05:26:43.294196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.294340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.294365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.158 qpair failed and we were unable to recover it.
00:31:06.158 [2024-04-24 05:26:43.294541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.294704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.294753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.158 qpair failed and we were unable to recover it.
00:31:06.158 [2024-04-24 05:26:43.294929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.295149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.295190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.158 qpair failed and we were unable to recover it.
00:31:06.158 [2024-04-24 05:26:43.295369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.295511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.295537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.158 qpair failed and we were unable to recover it.
00:31:06.158 [2024-04-24 05:26:43.295704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.295872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.295920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.158 qpair failed and we were unable to recover it.
00:31:06.158 [2024-04-24 05:26:43.296092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.296293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.296319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.158 qpair failed and we were unable to recover it.
00:31:06.158 [2024-04-24 05:26:43.296499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.296695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.296739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.158 qpair failed and we were unable to recover it.
00:31:06.158 [2024-04-24 05:26:43.296946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.297129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.297172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.158 qpair failed and we were unable to recover it.
00:31:06.158 [2024-04-24 05:26:43.297325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.297473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.297498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.158 qpair failed and we were unable to recover it.
00:31:06.158 [2024-04-24 05:26:43.297652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.297823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.297869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.158 qpair failed and we were unable to recover it.
00:31:06.158 [2024-04-24 05:26:43.298042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.298257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.298300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.158 qpair failed and we were unable to recover it.
00:31:06.158 [2024-04-24 05:26:43.298421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.298532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.298557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.158 qpair failed and we were unable to recover it.
00:31:06.158 [2024-04-24 05:26:43.298735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.298892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.298934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.158 qpair failed and we were unable to recover it.
00:31:06.158 [2024-04-24 05:26:43.299135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.299325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.299351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.158 qpair failed and we were unable to recover it.
00:31:06.158 [2024-04-24 05:26:43.299510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.299678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.299713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.158 qpair failed and we were unable to recover it.
00:31:06.158 [2024-04-24 05:26:43.299894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.300118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.300165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.158 qpair failed and we were unable to recover it.
00:31:06.158 [2024-04-24 05:26:43.300353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.300502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.300529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.158 qpair failed and we were unable to recover it.
00:31:06.158 [2024-04-24 05:26:43.300696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.300888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.300913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.158 qpair failed and we were unable to recover it.
00:31:06.158 [2024-04-24 05:26:43.301079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.301270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.301312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.158 qpair failed and we were unable to recover it.
00:31:06.158 [2024-04-24 05:26:43.301487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.301652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.301695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.158 qpair failed and we were unable to recover it.
00:31:06.158 [2024-04-24 05:26:43.301871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.302063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.302106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.158 qpair failed and we were unable to recover it.
00:31:06.158 [2024-04-24 05:26:43.302280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.302417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.302442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.158 qpair failed and we were unable to recover it.
00:31:06.158 [2024-04-24 05:26:43.302606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.302810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.158 [2024-04-24 05:26:43.302852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.158 qpair failed and we were unable to recover it.
00:31:06.159 [2024-04-24 05:26:43.303039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.303289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.303332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.159 qpair failed and we were unable to recover it.
00:31:06.159 [2024-04-24 05:26:43.303453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.303615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.303649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.159 qpair failed and we were unable to recover it.
00:31:06.159 [2024-04-24 05:26:43.303846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.304005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.304049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.159 qpair failed and we were unable to recover it.
00:31:06.159 [2024-04-24 05:26:43.304213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.304429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.304473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.159 qpair failed and we were unable to recover it.
00:31:06.159 [2024-04-24 05:26:43.304615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.304788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.304818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.159 qpair failed and we were unable to recover it.
00:31:06.159 [2024-04-24 05:26:43.305032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.305247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.305291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.159 qpair failed and we were unable to recover it.
00:31:06.159 [2024-04-24 05:26:43.305518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.305686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.305716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.159 qpair failed and we were unable to recover it.
00:31:06.159 [2024-04-24 05:26:43.305895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.306134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.306159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.159 qpair failed and we were unable to recover it.
00:31:06.159 [2024-04-24 05:26:43.306297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.306476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.306517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.159 qpair failed and we were unable to recover it.
00:31:06.159 [2024-04-24 05:26:43.306723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.306892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.306921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.159 qpair failed and we were unable to recover it.
00:31:06.159 [2024-04-24 05:26:43.307109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.307289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.307332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.159 qpair failed and we were unable to recover it.
00:31:06.159 [2024-04-24 05:26:43.307536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.307684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.307713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.159 qpair failed and we were unable to recover it.
00:31:06.159 [2024-04-24 05:26:43.307899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.308080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.308109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.159 qpair failed and we were unable to recover it.
00:31:06.159 [2024-04-24 05:26:43.308307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.308459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.308486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.159 qpair failed and we were unable to recover it.
00:31:06.159 [2024-04-24 05:26:43.308653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.308871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.308914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.159 qpair failed and we were unable to recover it.
00:31:06.159 [2024-04-24 05:26:43.309181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.309437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.309462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.159 qpair failed and we were unable to recover it.
00:31:06.159 [2024-04-24 05:26:43.309626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.309813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.309855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.159 qpair failed and we were unable to recover it.
00:31:06.159 [2024-04-24 05:26:43.310032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.310222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.310266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.159 qpair failed and we were unable to recover it.
00:31:06.159 [2024-04-24 05:26:43.310503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.310656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.310694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.159 qpair failed and we were unable to recover it.
00:31:06.159 [2024-04-24 05:26:43.310898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.311093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.311138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.159 qpair failed and we were unable to recover it.
00:31:06.159 [2024-04-24 05:26:43.311336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.311485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.311512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.159 qpair failed and we were unable to recover it.
00:31:06.159 [2024-04-24 05:26:43.311690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.311878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.311921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.159 qpair failed and we were unable to recover it.
00:31:06.159 [2024-04-24 05:26:43.312122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.312317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.312361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.159 qpair failed and we were unable to recover it.
00:31:06.159 [2024-04-24 05:26:43.312533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.312700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.312744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.159 qpair failed and we were unable to recover it.
00:31:06.159 [2024-04-24 05:26:43.312925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.313116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.313159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.159 qpair failed and we were unable to recover it.
00:31:06.159 [2024-04-24 05:26:43.313337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.313501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.313529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.159 qpair failed and we were unable to recover it.
00:31:06.159 [2024-04-24 05:26:43.313727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.313896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.313938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.159 qpair failed and we were unable to recover it.
00:31:06.159 [2024-04-24 05:26:43.314168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.314365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.314407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.159 qpair failed and we were unable to recover it.
00:31:06.159 [2024-04-24 05:26:43.314576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.314750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.159 [2024-04-24 05:26:43.314778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.159 qpair failed and we were unable to recover it.
00:31:06.160 [2024-04-24 05:26:43.314955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.160 [2024-04-24 05:26:43.315148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.160 [2024-04-24 05:26:43.315193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.160 qpair failed and we were unable to recover it.
00:31:06.160 [2024-04-24 05:26:43.315368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.160 [2024-04-24 05:26:43.315580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.160 [2024-04-24 05:26:43.315606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.160 qpair failed and we were unable to recover it.
00:31:06.160 [2024-04-24 05:26:43.315821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.160 [2024-04-24 05:26:43.316013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.160 [2024-04-24 05:26:43.316057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.160 qpair failed and we were unable to recover it.
00:31:06.160 [2024-04-24 05:26:43.316221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.160 [2024-04-24 05:26:43.316420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.160 [2024-04-24 05:26:43.316463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.160 qpair failed and we were unable to recover it.
00:31:06.160 [2024-04-24 05:26:43.316665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.160 [2024-04-24 05:26:43.316912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.160 [2024-04-24 05:26:43.316967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.160 qpair failed and we were unable to recover it.
00:31:06.160 [2024-04-24 05:26:43.317142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.160 [2024-04-24 05:26:43.317430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.160 [2024-04-24 05:26:43.317480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.160 qpair failed and we were unable to recover it. 00:31:06.160 [2024-04-24 05:26:43.317657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.160 [2024-04-24 05:26:43.317885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.160 [2024-04-24 05:26:43.317930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.160 qpair failed and we were unable to recover it. 00:31:06.160 [2024-04-24 05:26:43.318159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.160 [2024-04-24 05:26:43.318357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.160 [2024-04-24 05:26:43.318402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.160 qpair failed and we were unable to recover it. 00:31:06.160 [2024-04-24 05:26:43.318580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.160 [2024-04-24 05:26:43.318752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.160 [2024-04-24 05:26:43.318796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.160 qpair failed and we were unable to recover it. 
00:31:06.160 [2024-04-24 05:26:43.319010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.160 [2024-04-24 05:26:43.319198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.160 [2024-04-24 05:26:43.319241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.160 qpair failed and we were unable to recover it. 00:31:06.160 [2024-04-24 05:26:43.319435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.160 [2024-04-24 05:26:43.319614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.160 [2024-04-24 05:26:43.319645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.160 qpair failed and we were unable to recover it. 00:31:06.160 [2024-04-24 05:26:43.319828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.160 [2024-04-24 05:26:43.320016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.160 [2024-04-24 05:26:43.320059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.160 qpair failed and we were unable to recover it. 00:31:06.160 [2024-04-24 05:26:43.320263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.160 [2024-04-24 05:26:43.320464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.160 [2024-04-24 05:26:43.320490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.160 qpair failed and we were unable to recover it. 
00:31:06.160 [2024-04-24 05:26:43.320728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.160 [2024-04-24 05:26:43.320888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.160 [2024-04-24 05:26:43.320914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.160 qpair failed and we were unable to recover it. 00:31:06.160 [2024-04-24 05:26:43.321113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.160 [2024-04-24 05:26:43.321332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.160 [2024-04-24 05:26:43.321379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.160 qpair failed and we were unable to recover it. 00:31:06.160 [2024-04-24 05:26:43.321534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.160 [2024-04-24 05:26:43.321729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.160 [2024-04-24 05:26:43.321772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.160 qpair failed and we were unable to recover it. 00:31:06.160 [2024-04-24 05:26:43.321950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.160 [2024-04-24 05:26:43.322138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.160 [2024-04-24 05:26:43.322166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.160 qpair failed and we were unable to recover it. 
00:31:06.160 [2024-04-24 05:26:43.322321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.160 [2024-04-24 05:26:43.322487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.160 [2024-04-24 05:26:43.322514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.160 qpair failed and we were unable to recover it. 00:31:06.160 [2024-04-24 05:26:43.322686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.160 [2024-04-24 05:26:43.322906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.160 [2024-04-24 05:26:43.322955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.160 qpair failed and we were unable to recover it. 00:31:06.160 [2024-04-24 05:26:43.323157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.160 [2024-04-24 05:26:43.323317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.160 [2024-04-24 05:26:43.323361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.160 qpair failed and we were unable to recover it. 00:31:06.160 [2024-04-24 05:26:43.323540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.160 [2024-04-24 05:26:43.323713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.160 [2024-04-24 05:26:43.323757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.160 qpair failed and we were unable to recover it. 
00:31:06.160 [2024-04-24 05:26:43.323956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.160 [2024-04-24 05:26:43.324147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.160 [2024-04-24 05:26:43.324190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.160 qpair failed and we were unable to recover it. 00:31:06.160 [2024-04-24 05:26:43.324362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.160 [2024-04-24 05:26:43.324509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.160 [2024-04-24 05:26:43.324535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.160 qpair failed and we were unable to recover it. 00:31:06.160 [2024-04-24 05:26:43.324717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.160 [2024-04-24 05:26:43.324916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.160 [2024-04-24 05:26:43.324946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.160 qpair failed and we were unable to recover it. 00:31:06.160 [2024-04-24 05:26:43.325122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.160 [2024-04-24 05:26:43.325318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.160 [2024-04-24 05:26:43.325361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.160 qpair failed and we were unable to recover it. 
00:31:06.161 [2024-04-24 05:26:43.325496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.325687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.325717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.161 qpair failed and we were unable to recover it. 00:31:06.161 [2024-04-24 05:26:43.325911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.326145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.326187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.161 qpair failed and we were unable to recover it. 00:31:06.161 [2024-04-24 05:26:43.326342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.326521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.326548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.161 qpair failed and we were unable to recover it. 00:31:06.161 [2024-04-24 05:26:43.326754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.326952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.326995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.161 qpair failed and we were unable to recover it. 
00:31:06.161 [2024-04-24 05:26:43.327173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.327346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.327373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.161 qpair failed and we were unable to recover it. 00:31:06.161 [2024-04-24 05:26:43.327547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.327733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.327763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.161 qpair failed and we were unable to recover it. 00:31:06.161 [2024-04-24 05:26:43.327979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.328195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.328239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.161 qpair failed and we were unable to recover it. 00:31:06.161 [2024-04-24 05:26:43.328389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.328545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.328571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.161 qpair failed and we were unable to recover it. 
00:31:06.161 [2024-04-24 05:26:43.328765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.328954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.328999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.161 qpair failed and we were unable to recover it. 00:31:06.161 [2024-04-24 05:26:43.329197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.329387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.329430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.161 qpair failed and we were unable to recover it. 00:31:06.161 [2024-04-24 05:26:43.329586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.329767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.329797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.161 qpair failed and we were unable to recover it. 00:31:06.161 [2024-04-24 05:26:43.329980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.330148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.330193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.161 qpair failed and we were unable to recover it. 
00:31:06.161 [2024-04-24 05:26:43.330369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.330540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.330567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.161 qpair failed and we were unable to recover it. 00:31:06.161 [2024-04-24 05:26:43.330767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.330959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.331003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.161 qpair failed and we were unable to recover it. 00:31:06.161 [2024-04-24 05:26:43.331173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.331333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.331378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.161 qpair failed and we were unable to recover it. 00:31:06.161 [2024-04-24 05:26:43.331555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.331733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.331776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.161 qpair failed and we were unable to recover it. 
00:31:06.161 [2024-04-24 05:26:43.331942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.332151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.332179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.161 qpair failed and we were unable to recover it. 00:31:06.161 [2024-04-24 05:26:43.332355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.332558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.332584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.161 qpair failed and we were unable to recover it. 00:31:06.161 [2024-04-24 05:26:43.332797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.332964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.333007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.161 qpair failed and we were unable to recover it. 00:31:06.161 [2024-04-24 05:26:43.333181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.333383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.333426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.161 qpair failed and we were unable to recover it. 
00:31:06.161 [2024-04-24 05:26:43.333608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.333797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.333840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.161 qpair failed and we were unable to recover it. 00:31:06.161 [2024-04-24 05:26:43.333984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.334174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.334204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.161 qpair failed and we were unable to recover it. 00:31:06.161 [2024-04-24 05:26:43.334385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.334530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.334557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.161 qpair failed and we were unable to recover it. 00:31:06.161 [2024-04-24 05:26:43.334712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.334941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.334985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.161 qpair failed and we were unable to recover it. 
00:31:06.161 [2024-04-24 05:26:43.335140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.335322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.335349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.161 qpair failed and we were unable to recover it. 00:31:06.161 [2024-04-24 05:26:43.335474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.335644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.335672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.161 qpair failed and we were unable to recover it. 00:31:06.161 [2024-04-24 05:26:43.335798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.335928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.335955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.161 qpair failed and we were unable to recover it. 00:31:06.161 [2024-04-24 05:26:43.336093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.336244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.336272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.161 qpair failed and we were unable to recover it. 
00:31:06.161 [2024-04-24 05:26:43.336409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.336536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.161 [2024-04-24 05:26:43.336564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.162 qpair failed and we were unable to recover it. 00:31:06.162 [2024-04-24 05:26:43.336717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.162 [2024-04-24 05:26:43.336910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.162 [2024-04-24 05:26:43.336955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.162 qpair failed and we were unable to recover it. 00:31:06.162 [2024-04-24 05:26:43.337134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.162 [2024-04-24 05:26:43.337306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.162 [2024-04-24 05:26:43.337334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.162 qpair failed and we were unable to recover it. 00:31:06.162 [2024-04-24 05:26:43.337489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.162 [2024-04-24 05:26:43.337695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.162 [2024-04-24 05:26:43.337723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.162 qpair failed and we were unable to recover it. 
00:31:06.162 [2024-04-24 05:26:43.337904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.162 [2024-04-24 05:26:43.338102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.162 [2024-04-24 05:26:43.338129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.162 qpair failed and we were unable to recover it. 00:31:06.162 [2024-04-24 05:26:43.338279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.162 [2024-04-24 05:26:43.338396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.162 [2024-04-24 05:26:43.338423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.162 qpair failed and we were unable to recover it. 00:31:06.162 [2024-04-24 05:26:43.338580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.162 [2024-04-24 05:26:43.338754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.162 [2024-04-24 05:26:43.338799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.162 qpair failed and we were unable to recover it. 00:31:06.162 [2024-04-24 05:26:43.338999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.162 [2024-04-24 05:26:43.339217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.162 [2024-04-24 05:26:43.339264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.162 qpair failed and we were unable to recover it. 
00:31:06.162 [2024-04-24 05:26:43.339424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.162 [2024-04-24 05:26:43.339578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.162 [2024-04-24 05:26:43.339607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.162 qpair failed and we were unable to recover it. 00:31:06.162 [2024-04-24 05:26:43.339771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.162 [2024-04-24 05:26:43.339959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.162 [2024-04-24 05:26:43.340004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420 00:31:06.162 qpair failed and we were unable to recover it. 00:31:06.162 [2024-04-24 05:26:43.340216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.162 [2024-04-24 05:26:43.340409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.162 [2024-04-24 05:26:43.340439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.162 qpair failed and we were unable to recover it. 00:31:06.162 [2024-04-24 05:26:43.340622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.162 [2024-04-24 05:26:43.340785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.162 [2024-04-24 05:26:43.340812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.162 qpair failed and we were unable to recover it. 
00:31:06.162 [2024-04-24 05:26:43.340985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.162 [2024-04-24 05:26:43.341222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.162 [2024-04-24 05:26:43.341275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.162 qpair failed and we were unable to recover it. 00:31:06.162 [2024-04-24 05:26:43.341556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.162 [2024-04-24 05:26:43.341748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.162 [2024-04-24 05:26:43.341776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.162 qpair failed and we were unable to recover it. 00:31:06.162 [2024-04-24 05:26:43.341934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.162 [2024-04-24 05:26:43.342078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.162 [2024-04-24 05:26:43.342107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.162 qpair failed and we were unable to recover it. 00:31:06.162 [2024-04-24 05:26:43.342271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.162 [2024-04-24 05:26:43.342425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.162 [2024-04-24 05:26:43.342451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.162 qpair failed and we were unable to recover it. 
00:31:06.162 [2024-04-24 05:26:43.342603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.162 [2024-04-24 05:26:43.342795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.162 [2024-04-24 05:26:43.342822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.162 qpair failed and we were unable to recover it.
00:31:06.162 [2024-04-24 05:26:43.343023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.162 [2024-04-24 05:26:43.343167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.162 [2024-04-24 05:26:43.343197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.162 qpair failed and we were unable to recover it.
00:31:06.162 [2024-04-24 05:26:43.343337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.162 [2024-04-24 05:26:43.343554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.162 [2024-04-24 05:26:43.343583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.162 qpair failed and we were unable to recover it.
00:31:06.162 [2024-04-24 05:26:43.343740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.162 [2024-04-24 05:26:43.343868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.162 [2024-04-24 05:26:43.343896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.162 qpair failed and we were unable to recover it.
00:31:06.162 [2024-04-24 05:26:43.344063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.162 [2024-04-24 05:26:43.344234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.162 [2024-04-24 05:26:43.344264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.162 qpair failed and we were unable to recover it.
00:31:06.162 [2024-04-24 05:26:43.344427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.162 [2024-04-24 05:26:43.344568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.162 [2024-04-24 05:26:43.344612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.162 qpair failed and we were unable to recover it.
00:31:06.162 [2024-04-24 05:26:43.344776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.162 [2024-04-24 05:26:43.344917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.162 [2024-04-24 05:26:43.344944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.162 qpair failed and we were unable to recover it.
00:31:06.162 [2024-04-24 05:26:43.345095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.162 [2024-04-24 05:26:43.345271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.162 [2024-04-24 05:26:43.345300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.162 qpair failed and we were unable to recover it.
00:31:06.162 [2024-04-24 05:26:43.345435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.162 [2024-04-24 05:26:43.345634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.162 [2024-04-24 05:26:43.345662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.162 qpair failed and we were unable to recover it.
00:31:06.162 [2024-04-24 05:26:43.345790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.162 [2024-04-24 05:26:43.345943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.162 [2024-04-24 05:26:43.345971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.162 qpair failed and we were unable to recover it.
00:31:06.162 [2024-04-24 05:26:43.346148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.162 [2024-04-24 05:26:43.346317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.162 [2024-04-24 05:26:43.346347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.162 qpair failed and we were unable to recover it.
00:31:06.162 [2024-04-24 05:26:43.346546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.162 [2024-04-24 05:26:43.346695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.162 [2024-04-24 05:26:43.346723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.162 qpair failed and we were unable to recover it.
00:31:06.162 [2024-04-24 05:26:43.346848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.162 [2024-04-24 05:26:43.346973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.162 [2024-04-24 05:26:43.347017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.162 qpair failed and we were unable to recover it.
00:31:06.163 [2024-04-24 05:26:43.347179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.347311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.347341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.163 qpair failed and we were unable to recover it.
00:31:06.163 [2024-04-24 05:26:43.347505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.347679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.347711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.163 qpair failed and we were unable to recover it.
00:31:06.163 [2024-04-24 05:26:43.347890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.348045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.348074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.163 qpair failed and we were unable to recover it.
00:31:06.163 [2024-04-24 05:26:43.348206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.348392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.348421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.163 qpair failed and we were unable to recover it.
00:31:06.163 [2024-04-24 05:26:43.348587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.348741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.348769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.163 qpair failed and we were unable to recover it.
00:31:06.163 [2024-04-24 05:26:43.348968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.349157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.349186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.163 qpair failed and we were unable to recover it.
00:31:06.163 [2024-04-24 05:26:43.349320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.349511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.349540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.163 qpair failed and we were unable to recover it.
00:31:06.163 [2024-04-24 05:26:43.349684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.349832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.349861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.163 qpair failed and we were unable to recover it.
00:31:06.163 [2024-04-24 05:26:43.350053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.350203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.350230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.163 qpair failed and we were unable to recover it.
00:31:06.163 [2024-04-24 05:26:43.350405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.350556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.350583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.163 qpair failed and we were unable to recover it.
00:31:06.163 [2024-04-24 05:26:43.350743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.350898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.350941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.163 qpair failed and we were unable to recover it.
00:31:06.163 [2024-04-24 05:26:43.351081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.351249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.351290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.163 qpair failed and we were unable to recover it.
00:31:06.163 [2024-04-24 05:26:43.351471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.351647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.351675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.163 qpair failed and we were unable to recover it.
00:31:06.163 [2024-04-24 05:26:43.351860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.352172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.352224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.163 qpair failed and we were unable to recover it.
00:31:06.163 [2024-04-24 05:26:43.352431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.352621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.352657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.163 qpair failed and we were unable to recover it.
00:31:06.163 [2024-04-24 05:26:43.352827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.353029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.353057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.163 qpair failed and we were unable to recover it.
00:31:06.163 [2024-04-24 05:26:43.353248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.353395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.353423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.163 qpair failed and we were unable to recover it.
00:31:06.163 [2024-04-24 05:26:43.353590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.353745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.353789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.163 qpair failed and we were unable to recover it.
00:31:06.163 [2024-04-24 05:26:43.353922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.354077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.354106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.163 qpair failed and we were unable to recover it.
00:31:06.163 [2024-04-24 05:26:43.354272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.354404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.354433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.163 qpair failed and we were unable to recover it.
00:31:06.163 [2024-04-24 05:26:43.354608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.354741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.354769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.163 qpair failed and we were unable to recover it.
00:31:06.163 [2024-04-24 05:26:43.354920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.355088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.355117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.163 qpair failed and we were unable to recover it.
00:31:06.163 [2024-04-24 05:26:43.355288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.355428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.355457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.163 qpair failed and we were unable to recover it.
00:31:06.163 [2024-04-24 05:26:43.355648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.355798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.355824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.163 qpair failed and we were unable to recover it.
00:31:06.163 [2024-04-24 05:26:43.355951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.356150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.356179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.163 qpair failed and we were unable to recover it.
00:31:06.163 [2024-04-24 05:26:43.356325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.356493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.356522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.163 qpair failed and we were unable to recover it.
00:31:06.163 [2024-04-24 05:26:43.356673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.356803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.356829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.163 qpair failed and we were unable to recover it.
00:31:06.163 [2024-04-24 05:26:43.357017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.357167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.357194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.163 qpair failed and we were unable to recover it.
00:31:06.163 [2024-04-24 05:26:43.357352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.357519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.163 [2024-04-24 05:26:43.357548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.163 qpair failed and we were unable to recover it.
00:31:06.163 [2024-04-24 05:26:43.357717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.357842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.357868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.164 qpair failed and we were unable to recover it.
00:31:06.164 [2024-04-24 05:26:43.358021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.358195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.358224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.164 qpair failed and we were unable to recover it.
00:31:06.164 [2024-04-24 05:26:43.358366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.358526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.358555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.164 qpair failed and we were unable to recover it.
00:31:06.164 [2024-04-24 05:26:43.358704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.358880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.358924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.164 qpair failed and we were unable to recover it.
00:31:06.164 [2024-04-24 05:26:43.359127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.359284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.359311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.164 qpair failed and we were unable to recover it.
00:31:06.164 [2024-04-24 05:26:43.359465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.359607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.359658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.164 qpair failed and we were unable to recover it.
00:31:06.164 [2024-04-24 05:26:43.359814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.360008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.360037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.164 qpair failed and we were unable to recover it.
00:31:06.164 [2024-04-24 05:26:43.360197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.360362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.360391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.164 qpair failed and we were unable to recover it.
00:31:06.164 [2024-04-24 05:26:43.360527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.360701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.360728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.164 qpair failed and we were unable to recover it.
00:31:06.164 [2024-04-24 05:26:43.360908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.361113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.361155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.164 qpair failed and we were unable to recover it.
00:31:06.164 [2024-04-24 05:26:43.361281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.361426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.361452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.164 qpair failed and we were unable to recover it.
00:31:06.164 [2024-04-24 05:26:43.361580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.361728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.361755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.164 qpair failed and we were unable to recover it.
00:31:06.164 [2024-04-24 05:26:43.361886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.362011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.362036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.164 qpair failed and we were unable to recover it.
00:31:06.164 [2024-04-24 05:26:43.362183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.362338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.362365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.164 qpair failed and we were unable to recover it.
00:31:06.164 [2024-04-24 05:26:43.362514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.362673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.362703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.164 qpair failed and we were unable to recover it.
00:31:06.164 [2024-04-24 05:26:43.362856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.362971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.362998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.164 qpair failed and we were unable to recover it.
00:31:06.164 [2024-04-24 05:26:43.363140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.363294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.363338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.164 qpair failed and we were unable to recover it.
00:31:06.164 [2024-04-24 05:26:43.363475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.363614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.363648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.164 qpair failed and we were unable to recover it.
00:31:06.164 [2024-04-24 05:26:43.363819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.364009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.364038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.164 qpair failed and we were unable to recover it.
00:31:06.164 [2024-04-24 05:26:43.364228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.364433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.364459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.164 qpair failed and we were unable to recover it.
00:31:06.164 [2024-04-24 05:26:43.364579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.364700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.364727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.164 qpair failed and we were unable to recover it.
00:31:06.164 [2024-04-24 05:26:43.364853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.365006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.365047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.164 qpair failed and we were unable to recover it.
00:31:06.164 [2024-04-24 05:26:43.365236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.365455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.365484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.164 qpair failed and we were unable to recover it.
00:31:06.164 [2024-04-24 05:26:43.365683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.365832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.365863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.164 qpair failed and we were unable to recover it.
00:31:06.164 [2024-04-24 05:26:43.366018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.366142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.366185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.164 qpair failed and we were unable to recover it.
00:31:06.164 [2024-04-24 05:26:43.366384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.366500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.366526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.164 qpair failed and we were unable to recover it.
00:31:06.164 [2024-04-24 05:26:43.366679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.366851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.366879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.164 qpair failed and we were unable to recover it.
00:31:06.164 [2024-04-24 05:26:43.367055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.367208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.367253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.164 qpair failed and we were unable to recover it.
00:31:06.164 [2024-04-24 05:26:43.367388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.367551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.164 [2024-04-24 05:26:43.367580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.164 qpair failed and we were unable to recover it.
00:31:06.164 [2024-04-24 05:26:43.367728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.165 [2024-04-24 05:26:43.367899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.165 [2024-04-24 05:26:43.367925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.165 qpair failed and we were unable to recover it.
00:31:06.165 [2024-04-24 05:26:43.368050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.165 [2024-04-24 05:26:43.368200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.165 [2024-04-24 05:26:43.368226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.165 qpair failed and we were unable to recover it.
00:31:06.165 [2024-04-24 05:26:43.368378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.165 [2024-04-24 05:26:43.368548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.165 [2024-04-24 05:26:43.368578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.165 qpair failed and we were unable to recover it.
00:31:06.165 [2024-04-24 05:26:43.368749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.165 [2024-04-24 05:26:43.368872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.165 [2024-04-24 05:26:43.368898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.165 qpair failed and we were unable to recover it.
00:31:06.165 [2024-04-24 05:26:43.369075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.165 [2024-04-24 05:26:43.369195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.165 [2024-04-24 05:26:43.369227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.165 qpair failed and we were unable to recover it.
00:31:06.165 [2024-04-24 05:26:43.369413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.165 [2024-04-24 05:26:43.369612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.165 [2024-04-24 05:26:43.369649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.165 qpair failed and we were unable to recover it.
00:31:06.165 [2024-04-24 05:26:43.369818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.165 [2024-04-24 05:26:43.369998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.165 [2024-04-24 05:26:43.370025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.165 qpair failed and we were unable to recover it.
00:31:06.165 [2024-04-24 05:26:43.370190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.165 [2024-04-24 05:26:43.370318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.165 [2024-04-24 05:26:43.370346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.165 qpair failed and we were unable to recover it.
00:31:06.165 [2024-04-24 05:26:43.370518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.165 [2024-04-24 05:26:43.370664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.165 [2024-04-24 05:26:43.370710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.165 qpair failed and we were unable to recover it.
00:31:06.165 [2024-04-24 05:26:43.370868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.165 [2024-04-24 05:26:43.371013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.165 [2024-04-24 05:26:43.371039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.165 qpair failed and we were unable to recover it.
00:31:06.165 [2024-04-24 05:26:43.371199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.165 [2024-04-24 05:26:43.371395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.165 [2024-04-24 05:26:43.371437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.165 qpair failed and we were unable to recover it.
00:31:06.165 [2024-04-24 05:26:43.371590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.165 [2024-04-24 05:26:43.371740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.165 [2024-04-24 05:26:43.371767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.165 qpair failed and we were unable to recover it.
00:31:06.165 [2024-04-24 05:26:43.371920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.165 [2024-04-24 05:26:43.372114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.165 [2024-04-24 05:26:43.372140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.165 qpair failed and we were unable to recover it.
00:31:06.165 [2024-04-24 05:26:43.372253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.165 [2024-04-24 05:26:43.372377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.165 [2024-04-24 05:26:43.372403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.165 qpair failed and we were unable to recover it.
00:31:06.165 [2024-04-24 05:26:43.372567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.165 [2024-04-24 05:26:43.372719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.165 [2024-04-24 05:26:43.372748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.165 qpair failed and we were unable to recover it.
00:31:06.165 [2024-04-24 05:26:43.372916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.165 [2024-04-24 05:26:43.373077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.165 [2024-04-24 05:26:43.373105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.165 qpair failed and we were unable to recover it.
00:31:06.165 [2024-04-24 05:26:43.373299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.165 [2024-04-24 05:26:43.373451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.165 [2024-04-24 05:26:43.373478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.165 qpair failed and we were unable to recover it.
00:31:06.165 [2024-04-24 05:26:43.373601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.165 [2024-04-24 05:26:43.373780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.165 [2024-04-24 05:26:43.373809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.165 qpair failed and we were unable to recover it. 00:31:06.165 [2024-04-24 05:26:43.373965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.165 [2024-04-24 05:26:43.374132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.165 [2024-04-24 05:26:43.374162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.165 qpair failed and we were unable to recover it. 00:31:06.165 [2024-04-24 05:26:43.374311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.165 [2024-04-24 05:26:43.374458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.165 [2024-04-24 05:26:43.374484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.165 qpair failed and we were unable to recover it. 00:31:06.165 [2024-04-24 05:26:43.374656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.165 [2024-04-24 05:26:43.374831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.165 [2024-04-24 05:26:43.374860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.165 qpair failed and we were unable to recover it. 
00:31:06.165 [2024-04-24 05:26:43.375036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.165 [2024-04-24 05:26:43.375176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.165 [2024-04-24 05:26:43.375202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.165 qpair failed and we were unable to recover it. 00:31:06.165 [2024-04-24 05:26:43.375356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.165 [2024-04-24 05:26:43.375476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.165 [2024-04-24 05:26:43.375503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.165 qpair failed and we were unable to recover it. 00:31:06.165 [2024-04-24 05:26:43.375674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.165 [2024-04-24 05:26:43.375825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.165 [2024-04-24 05:26:43.375851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.165 qpair failed and we were unable to recover it. 00:31:06.165 [2024-04-24 05:26:43.375977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.165 [2024-04-24 05:26:43.376106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.165 [2024-04-24 05:26:43.376132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.165 qpair failed and we were unable to recover it. 
00:31:06.166 [2024-04-24 05:26:43.376257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.376408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.376434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.166 qpair failed and we were unable to recover it. 00:31:06.166 [2024-04-24 05:26:43.376582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.376740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.376766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.166 qpair failed and we were unable to recover it. 00:31:06.166 [2024-04-24 05:26:43.376894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.377042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.377070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.166 qpair failed and we were unable to recover it. 00:31:06.166 [2024-04-24 05:26:43.377232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.377381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.377407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.166 qpair failed and we were unable to recover it. 
00:31:06.166 [2024-04-24 05:26:43.377583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.377758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.377785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.166 qpair failed and we were unable to recover it. 00:31:06.166 [2024-04-24 05:26:43.377945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.378150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.378179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.166 qpair failed and we were unable to recover it. 00:31:06.166 [2024-04-24 05:26:43.378330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.378504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.378530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.166 qpair failed and we were unable to recover it. 00:31:06.166 [2024-04-24 05:26:43.378680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.378803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.378834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.166 qpair failed and we were unable to recover it. 
00:31:06.166 [2024-04-24 05:26:43.378959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.379135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.379161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.166 qpair failed and we were unable to recover it. 00:31:06.166 [2024-04-24 05:26:43.379317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.379496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.379539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.166 qpair failed and we were unable to recover it. 00:31:06.166 [2024-04-24 05:26:43.379689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.379856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.379885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.166 qpair failed and we were unable to recover it. 00:31:06.166 [2024-04-24 05:26:43.380087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.380230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.380256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.166 qpair failed and we were unable to recover it. 
00:31:06.166 [2024-04-24 05:26:43.380374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.380496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.380533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.166 qpair failed and we were unable to recover it. 00:31:06.166 [2024-04-24 05:26:43.380679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.380816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.380845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.166 qpair failed and we were unable to recover it. 00:31:06.166 [2024-04-24 05:26:43.380988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.381182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.381211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.166 qpair failed and we were unable to recover it. 00:31:06.166 [2024-04-24 05:26:43.381364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.381546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.381575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.166 qpair failed and we were unable to recover it. 
00:31:06.166 [2024-04-24 05:26:43.381726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.381881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.381907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.166 qpair failed and we were unable to recover it. 00:31:06.166 [2024-04-24 05:26:43.382063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.382178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.382205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.166 qpair failed and we were unable to recover it. 00:31:06.166 [2024-04-24 05:26:43.382360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.382531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.382557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.166 qpair failed and we were unable to recover it. 00:31:06.166 [2024-04-24 05:26:43.382719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.382845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.382871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.166 qpair failed and we were unable to recover it. 
00:31:06.166 [2024-04-24 05:26:43.383048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.383248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.383278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.166 qpair failed and we were unable to recover it. 00:31:06.166 [2024-04-24 05:26:43.383447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.383595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.383621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.166 qpair failed and we were unable to recover it. 00:31:06.166 [2024-04-24 05:26:43.383775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.383954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.383980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.166 qpair failed and we were unable to recover it. 00:31:06.166 [2024-04-24 05:26:43.384136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.384285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.384312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.166 qpair failed and we were unable to recover it. 
00:31:06.166 [2024-04-24 05:26:43.384434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.384587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.384634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.166 qpair failed and we were unable to recover it. 00:31:06.166 [2024-04-24 05:26:43.384798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.385000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.385026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.166 qpair failed and we were unable to recover it. 00:31:06.166 [2024-04-24 05:26:43.385175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.385329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.385356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.166 qpair failed and we were unable to recover it. 00:31:06.166 [2024-04-24 05:26:43.385476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.385609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.385648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.166 qpair failed and we were unable to recover it. 
00:31:06.166 [2024-04-24 05:26:43.385824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.385980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.166 [2024-04-24 05:26:43.386022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.167 qpair failed and we were unable to recover it. 00:31:06.167 [2024-04-24 05:26:43.386211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.167 [2024-04-24 05:26:43.386346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.167 [2024-04-24 05:26:43.386375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.167 qpair failed and we were unable to recover it. 00:31:06.167 [2024-04-24 05:26:43.386547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.167 [2024-04-24 05:26:43.386712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.167 [2024-04-24 05:26:43.386745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.167 qpair failed and we were unable to recover it. 00:31:06.167 [2024-04-24 05:26:43.386871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.167 [2024-04-24 05:26:43.387023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.167 [2024-04-24 05:26:43.387049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.167 qpair failed and we were unable to recover it. 
00:31:06.167 [2024-04-24 05:26:43.387170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.167 [2024-04-24 05:26:43.387293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.167 [2024-04-24 05:26:43.387319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.167 qpair failed and we were unable to recover it. 00:31:06.167 [2024-04-24 05:26:43.387470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.167 [2024-04-24 05:26:43.387603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.167 [2024-04-24 05:26:43.387653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.167 qpair failed and we were unable to recover it. 00:31:06.167 [2024-04-24 05:26:43.387804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.167 [2024-04-24 05:26:43.388019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.167 [2024-04-24 05:26:43.388051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.167 qpair failed and we were unable to recover it. 00:31:06.167 [2024-04-24 05:26:43.388188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.167 [2024-04-24 05:26:43.388352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.167 [2024-04-24 05:26:43.388385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.167 qpair failed and we were unable to recover it. 
00:31:06.167 [2024-04-24 05:26:43.388579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.167 [2024-04-24 05:26:43.388725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.167 [2024-04-24 05:26:43.388752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.167 qpair failed and we were unable to recover it. 00:31:06.167 [2024-04-24 05:26:43.388906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.167 [2024-04-24 05:26:43.389062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.167 [2024-04-24 05:26:43.389090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.167 qpair failed and we were unable to recover it. 00:31:06.167 [2024-04-24 05:26:43.389241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.167 [2024-04-24 05:26:43.389435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.167 [2024-04-24 05:26:43.389473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.167 qpair failed and we were unable to recover it. 00:31:06.167 [2024-04-24 05:26:43.389644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.445 [2024-04-24 05:26:43.389818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.445 [2024-04-24 05:26:43.389845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.445 qpair failed and we were unable to recover it. 
00:31:06.445 [2024-04-24 05:26:43.389978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.445 [2024-04-24 05:26:43.390099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.445 [2024-04-24 05:26:43.390125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.445 qpair failed and we were unable to recover it. 00:31:06.445 [2024-04-24 05:26:43.390281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.445 [2024-04-24 05:26:43.390412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.445 [2024-04-24 05:26:43.390438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.445 qpair failed and we were unable to recover it. 00:31:06.445 [2024-04-24 05:26:43.390563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.445 [2024-04-24 05:26:43.390706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.445 [2024-04-24 05:26:43.390750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.445 qpair failed and we were unable to recover it. 00:31:06.445 [2024-04-24 05:26:43.390919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.445 [2024-04-24 05:26:43.391099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.445 [2024-04-24 05:26:43.391126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.445 qpair failed and we were unable to recover it. 
00:31:06.445 [2024-04-24 05:26:43.391251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.445 [2024-04-24 05:26:43.391402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.445 [2024-04-24 05:26:43.391428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.445 qpair failed and we were unable to recover it. 00:31:06.446 [2024-04-24 05:26:43.391570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.391764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.391790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.446 qpair failed and we were unable to recover it. 00:31:06.446 [2024-04-24 05:26:43.391917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.392037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.392063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.446 qpair failed and we were unable to recover it. 00:31:06.446 [2024-04-24 05:26:43.392205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.392372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.392401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.446 qpair failed and we were unable to recover it. 
00:31:06.446 [2024-04-24 05:26:43.392549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.392704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.392753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.446 qpair failed and we were unable to recover it. 00:31:06.446 [2024-04-24 05:26:43.392935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.393099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.393126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.446 qpair failed and we were unable to recover it. 00:31:06.446 [2024-04-24 05:26:43.393273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.393401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.393429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.446 qpair failed and we were unable to recover it. 00:31:06.446 [2024-04-24 05:26:43.393561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.393705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.393735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.446 qpair failed and we were unable to recover it. 
00:31:06.446 [2024-04-24 05:26:43.393883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.394054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.394079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.446 qpair failed and we were unable to recover it. 00:31:06.446 [2024-04-24 05:26:43.394195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.394336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.394363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.446 qpair failed and we were unable to recover it. 00:31:06.446 [2024-04-24 05:26:43.394493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.394612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.394649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.446 qpair failed and we were unable to recover it. 00:31:06.446 [2024-04-24 05:26:43.394787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.394963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.394993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.446 qpair failed and we were unable to recover it. 
00:31:06.446 [2024-04-24 05:26:43.395176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.395343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.395387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.446 qpair failed and we were unable to recover it. 00:31:06.446 [2024-04-24 05:26:43.395516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.395678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.395705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.446 qpair failed and we were unable to recover it. 00:31:06.446 [2024-04-24 05:26:43.395859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.395983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.396009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.446 qpair failed and we were unable to recover it. 00:31:06.446 [2024-04-24 05:26:43.396162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.396331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.396360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.446 qpair failed and we were unable to recover it. 
00:31:06.446 [2024-04-24 05:26:43.396507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.396658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.396685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.446 qpair failed and we were unable to recover it. 00:31:06.446 [2024-04-24 05:26:43.396833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.396991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.397019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.446 qpair failed and we were unable to recover it. 00:31:06.446 [2024-04-24 05:26:43.397190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.397364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.397390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.446 qpair failed and we were unable to recover it. 00:31:06.446 [2024-04-24 05:26:43.397561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.397744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.397770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.446 qpair failed and we were unable to recover it. 
00:31:06.446 [2024-04-24 05:26:43.397942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.398106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.398135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.446 qpair failed and we were unable to recover it. 00:31:06.446 [2024-04-24 05:26:43.398301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.398472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.398501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.446 qpair failed and we were unable to recover it. 00:31:06.446 [2024-04-24 05:26:43.398670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.398818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.398844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.446 qpair failed and we were unable to recover it. 00:31:06.446 [2024-04-24 05:26:43.398999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.399120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.399147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.446 qpair failed and we were unable to recover it. 
00:31:06.446 [2024-04-24 05:26:43.399316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.399485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.399514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.446 qpair failed and we were unable to recover it. 00:31:06.446 [2024-04-24 05:26:43.399686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.399814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.399856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.446 qpair failed and we were unable to recover it. 00:31:06.446 [2024-04-24 05:26:43.399996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.400155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.400184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.446 qpair failed and we were unable to recover it. 00:31:06.446 [2024-04-24 05:26:43.400333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.400488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.400515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.446 qpair failed and we were unable to recover it. 
00:31:06.446 [2024-04-24 05:26:43.400693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.400867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.400893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.446 qpair failed and we were unable to recover it. 00:31:06.446 [2024-04-24 05:26:43.401017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.401191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.446 [2024-04-24 05:26:43.401220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.446 qpair failed and we were unable to recover it. 00:31:06.447 [2024-04-24 05:26:43.401412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.401639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.401676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.447 qpair failed and we were unable to recover it. 00:31:06.447 [2024-04-24 05:26:43.401799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.401974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.402000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.447 qpair failed and we were unable to recover it. 
00:31:06.447 [2024-04-24 05:26:43.402151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.402275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.402301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.447 qpair failed and we were unable to recover it. 00:31:06.447 [2024-04-24 05:26:43.402427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.402575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.402601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.447 qpair failed and we were unable to recover it. 00:31:06.447 [2024-04-24 05:26:43.402740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.402897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.402923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.447 qpair failed and we were unable to recover it. 00:31:06.447 [2024-04-24 05:26:43.403070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.403248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.403274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.447 qpair failed and we were unable to recover it. 
00:31:06.447 [2024-04-24 05:26:43.403425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.403574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.403600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.447 qpair failed and we were unable to recover it. 00:31:06.447 [2024-04-24 05:26:43.403754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.403878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.403909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.447 qpair failed and we were unable to recover it. 00:31:06.447 [2024-04-24 05:26:43.404061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.404186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.404212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.447 qpair failed and we were unable to recover it. 00:31:06.447 [2024-04-24 05:26:43.404341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.404483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.404511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.447 qpair failed and we were unable to recover it. 
00:31:06.447 [2024-04-24 05:26:43.404689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.404851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.404878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.447 qpair failed and we were unable to recover it. 00:31:06.447 [2024-04-24 05:26:43.405032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.405202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.405231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.447 qpair failed and we were unable to recover it. 00:31:06.447 [2024-04-24 05:26:43.405377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.405523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.405549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.447 qpair failed and we were unable to recover it. 00:31:06.447 [2024-04-24 05:26:43.405704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.405834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.405860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.447 qpair failed and we were unable to recover it. 
00:31:06.447 [2024-04-24 05:26:43.406014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.406172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.406198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.447 qpair failed and we were unable to recover it. 00:31:06.447 [2024-04-24 05:26:43.406352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.406499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.406541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.447 qpair failed and we were unable to recover it. 00:31:06.447 [2024-04-24 05:26:43.406698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.406828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.406854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.447 qpair failed and we were unable to recover it. 00:31:06.447 [2024-04-24 05:26:43.407051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.407200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.407226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.447 qpair failed and we were unable to recover it. 
00:31:06.447 [2024-04-24 05:26:43.407362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.407474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.407500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.447 qpair failed and we were unable to recover it. 00:31:06.447 [2024-04-24 05:26:43.407655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.407820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.407849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.447 qpair failed and we were unable to recover it. 00:31:06.447 [2024-04-24 05:26:43.408013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.408198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.408250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.447 qpair failed and we were unable to recover it. 00:31:06.447 [2024-04-24 05:26:43.408414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.408617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.408650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.447 qpair failed and we were unable to recover it. 
00:31:06.447 [2024-04-24 05:26:43.408825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.408950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.408978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.447 qpair failed and we were unable to recover it. 00:31:06.447 [2024-04-24 05:26:43.409132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.409293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.409319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.447 qpair failed and we were unable to recover it. 00:31:06.447 [2024-04-24 05:26:43.409470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.409595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.409621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.447 qpair failed and we were unable to recover it. 00:31:06.447 [2024-04-24 05:26:43.409750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.409932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.409974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.447 qpair failed and we were unable to recover it. 
00:31:06.447 [2024-04-24 05:26:43.410110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.410276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.410305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.447 qpair failed and we were unable to recover it. 00:31:06.447 [2024-04-24 05:26:43.410507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.410705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.410733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.447 qpair failed and we were unable to recover it. 00:31:06.447 [2024-04-24 05:26:43.410919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.447 [2024-04-24 05:26:43.413816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.448 [2024-04-24 05:26:43.413862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.448 qpair failed and we were unable to recover it. 00:31:06.448 [2024-04-24 05:26:43.414038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.448 [2024-04-24 05:26:43.414193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.448 [2024-04-24 05:26:43.414218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.448 qpair failed and we were unable to recover it. 
00:31:06.448 [2024-04-24 05:26:43.414373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.448 [2024-04-24 05:26:43.414497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.448 [2024-04-24 05:26:43.414524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.448 qpair failed and we were unable to recover it. 00:31:06.448 [2024-04-24 05:26:43.414674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.448 [2024-04-24 05:26:43.414825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.448 [2024-04-24 05:26:43.414852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.448 qpair failed and we were unable to recover it. 00:31:06.448 [2024-04-24 05:26:43.415030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.448 [2024-04-24 05:26:43.415178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.448 [2024-04-24 05:26:43.415204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.448 qpair failed and we were unable to recover it. 00:31:06.448 [2024-04-24 05:26:43.415391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.448 [2024-04-24 05:26:43.415590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.448 [2024-04-24 05:26:43.415620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.448 qpair failed and we were unable to recover it. 
00:31:06.448 [2024-04-24 05:26:43.415784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.448 [2024-04-24 05:26:43.415935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.448 [2024-04-24 05:26:43.415961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.448 qpair failed and we were unable to recover it. 00:31:06.448 [2024-04-24 05:26:43.416118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.448 [2024-04-24 05:26:43.416281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.448 [2024-04-24 05:26:43.416310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.448 qpair failed and we were unable to recover it. 00:31:06.448 [2024-04-24 05:26:43.416504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.448 [2024-04-24 05:26:43.416680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.448 [2024-04-24 05:26:43.416707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.448 qpair failed and we were unable to recover it. 00:31:06.448 [2024-04-24 05:26:43.416859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.448 [2024-04-24 05:26:43.417052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.448 [2024-04-24 05:26:43.417081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.448 qpair failed and we were unable to recover it. 
00:31:06.448 [2024-04-24 05:26:43.417247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.448 [2024-04-24 05:26:43.417420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.448 [2024-04-24 05:26:43.417450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.448 qpair failed and we were unable to recover it. 00:31:06.448 [2024-04-24 05:26:43.417577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.448 [2024-04-24 05:26:43.417751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.448 [2024-04-24 05:26:43.417778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.448 qpair failed and we were unable to recover it. 00:31:06.448 [2024-04-24 05:26:43.417928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.448 [2024-04-24 05:26:43.418079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.448 [2024-04-24 05:26:43.418105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.448 qpair failed and we were unable to recover it. 00:31:06.448 [2024-04-24 05:26:43.418231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.448 [2024-04-24 05:26:43.418385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.448 [2024-04-24 05:26:43.418412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.448 qpair failed and we were unable to recover it. 
00:31:06.448 [2024-04-24 05:26:43.418591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.448 [2024-04-24 05:26:43.418781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.448 [2024-04-24 05:26:43.418812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.448 qpair failed and we were unable to recover it. 00:31:06.448 [2024-04-24 05:26:43.418989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.448 [2024-04-24 05:26:43.419140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.448 [2024-04-24 05:26:43.419167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.448 qpair failed and we were unable to recover it. 00:31:06.448 [2024-04-24 05:26:43.419321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.448 [2024-04-24 05:26:43.419519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.448 [2024-04-24 05:26:43.419549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.448 qpair failed and we were unable to recover it. 00:31:06.448 [2024-04-24 05:26:43.419750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.448 [2024-04-24 05:26:43.419890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.448 [2024-04-24 05:26:43.419919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.448 qpair failed and we were unable to recover it. 
00:31:06.448 [2024-04-24 05:26:43.420119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.448 [2024-04-24 05:26:43.420287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.448 [2024-04-24 05:26:43.420316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.448 qpair failed and we were unable to recover it.
00:31:06.448 [2024-04-24 05:26:43.420484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.448 [2024-04-24 05:26:43.420640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.448 [2024-04-24 05:26:43.420666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.448 qpair failed and we were unable to recover it.
00:31:06.448 [2024-04-24 05:26:43.420793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.448 [2024-04-24 05:26:43.420921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.448 [2024-04-24 05:26:43.420947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.448 qpair failed and we were unable to recover it.
00:31:06.448 [2024-04-24 05:26:43.421097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.448 [2024-04-24 05:26:43.421247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.448 [2024-04-24 05:26:43.421291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.448 qpair failed and we were unable to recover it.
00:31:06.448 [2024-04-24 05:26:43.421460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.448 [2024-04-24 05:26:43.421607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.448 [2024-04-24 05:26:43.421642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.448 qpair failed and we were unable to recover it.
00:31:06.448 [2024-04-24 05:26:43.421807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.448 [2024-04-24 05:26:43.422013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.448 [2024-04-24 05:26:43.422075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.448 qpair failed and we were unable to recover it.
00:31:06.448 [2024-04-24 05:26:43.422240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.448 [2024-04-24 05:26:43.422465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.448 [2024-04-24 05:26:43.422516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.448 qpair failed and we were unable to recover it.
00:31:06.448 [2024-04-24 05:26:43.422696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.448 [2024-04-24 05:26:43.422818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.448 [2024-04-24 05:26:43.422844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.448 qpair failed and we were unable to recover it.
00:31:06.448 [2024-04-24 05:26:43.423044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.448 [2024-04-24 05:26:43.423196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.448 [2024-04-24 05:26:43.423245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.448 qpair failed and we were unable to recover it.
00:31:06.448 [2024-04-24 05:26:43.423416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.448 [2024-04-24 05:26:43.423610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.448 [2024-04-24 05:26:43.423646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.448 qpair failed and we were unable to recover it.
00:31:06.448 [2024-04-24 05:26:43.423836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.448 [2024-04-24 05:26:43.424027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.448 [2024-04-24 05:26:43.424078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.448 qpair failed and we were unable to recover it.
00:31:06.448 [2024-04-24 05:26:43.424228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.448 [2024-04-24 05:26:43.424380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.424406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.449 qpair failed and we were unable to recover it.
00:31:06.449 [2024-04-24 05:26:43.424557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.424737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.424787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.449 qpair failed and we were unable to recover it.
00:31:06.449 [2024-04-24 05:26:43.424949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.425109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.425138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.449 qpair failed and we were unable to recover it.
00:31:06.449 [2024-04-24 05:26:43.425298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.425478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.425504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.449 qpair failed and we were unable to recover it.
00:31:06.449 [2024-04-24 05:26:43.425658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.425781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.425807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.449 qpair failed and we were unable to recover it.
00:31:06.449 [2024-04-24 05:26:43.425979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.426107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.426133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.449 qpair failed and we were unable to recover it.
00:31:06.449 [2024-04-24 05:26:43.426285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.426427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.426455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.449 qpair failed and we were unable to recover it.
00:31:06.449 [2024-04-24 05:26:43.426632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.426766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.426809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.449 qpair failed and we were unable to recover it.
00:31:06.449 [2024-04-24 05:26:43.426948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.427140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.427169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.449 qpair failed and we were unable to recover it.
00:31:06.449 [2024-04-24 05:26:43.427354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.427505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.427532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.449 qpair failed and we were unable to recover it.
00:31:06.449 [2024-04-24 05:26:43.427684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.427808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.427835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.449 qpair failed and we were unable to recover it.
00:31:06.449 [2024-04-24 05:26:43.427965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.428133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.428166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.449 qpair failed and we were unable to recover it.
00:31:06.449 [2024-04-24 05:26:43.428332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.428493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.428522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.449 qpair failed and we were unable to recover it.
00:31:06.449 [2024-04-24 05:26:43.428689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.428844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.428871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.449 qpair failed and we were unable to recover it.
00:31:06.449 [2024-04-24 05:26:43.429028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.429256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.429283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.449 qpair failed and we were unable to recover it.
00:31:06.449 [2024-04-24 05:26:43.429436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.429585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.429610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.449 qpair failed and we were unable to recover it.
00:31:06.449 [2024-04-24 05:26:43.429766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.429915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.429941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.449 qpair failed and we were unable to recover it.
00:31:06.449 [2024-04-24 05:26:43.430092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.430237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.430263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.449 qpair failed and we were unable to recover it.
00:31:06.449 [2024-04-24 05:26:43.430384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.430532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.430559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.449 qpair failed and we were unable to recover it.
00:31:06.449 [2024-04-24 05:26:43.430694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.430840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.430867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.449 qpair failed and we were unable to recover it.
00:31:06.449 [2024-04-24 05:26:43.430991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.431143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.431171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.449 qpair failed and we were unable to recover it.
00:31:06.449 [2024-04-24 05:26:43.431363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.431506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.431532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.449 qpair failed and we were unable to recover it.
00:31:06.449 [2024-04-24 05:26:43.431663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.431824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.431850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.449 qpair failed and we were unable to recover it.
00:31:06.449 [2024-04-24 05:26:43.431971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.432117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.432145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.449 qpair failed and we were unable to recover it.
00:31:06.449 [2024-04-24 05:26:43.432283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.432465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.432491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.449 qpair failed and we were unable to recover it.
00:31:06.449 [2024-04-24 05:26:43.432645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.432816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.432842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.449 qpair failed and we were unable to recover it.
00:31:06.449 [2024-04-24 05:26:43.432986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.433127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.433154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.449 qpair failed and we were unable to recover it.
00:31:06.449 [2024-04-24 05:26:43.433336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.433511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.433552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.449 qpair failed and we were unable to recover it.
00:31:06.449 [2024-04-24 05:26:43.433700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.433856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.433882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.449 qpair failed and we were unable to recover it.
00:31:06.449 [2024-04-24 05:26:43.434004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.434158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.449 [2024-04-24 05:26:43.434184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.450 qpair failed and we were unable to recover it.
00:31:06.450 [2024-04-24 05:26:43.434309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.450 [2024-04-24 05:26:43.434488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.450 [2024-04-24 05:26:43.434515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.450 qpair failed and we were unable to recover it.
00:31:06.450 [2024-04-24 05:26:43.434661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.450 [2024-04-24 05:26:43.434781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.450 [2024-04-24 05:26:43.434807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.450 qpair failed and we were unable to recover it.
00:31:06.450 [2024-04-24 05:26:43.434937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.450 [2024-04-24 05:26:43.435086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.450 [2024-04-24 05:26:43.435113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.450 qpair failed and we were unable to recover it.
00:31:06.450 [2024-04-24 05:26:43.435243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.450 [2024-04-24 05:26:43.435423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.450 [2024-04-24 05:26:43.435448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.450 qpair failed and we were unable to recover it.
00:31:06.450 [2024-04-24 05:26:43.435597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.450 [2024-04-24 05:26:43.435747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.450 [2024-04-24 05:26:43.435774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.450 qpair failed and we were unable to recover it.
00:31:06.450 [2024-04-24 05:26:43.435985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.450 [2024-04-24 05:26:43.436167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.450 [2024-04-24 05:26:43.436193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.450 qpair failed and we were unable to recover it.
00:31:06.450 [2024-04-24 05:26:43.436360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.450 [2024-04-24 05:26:43.436567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.450 [2024-04-24 05:26:43.436593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.450 qpair failed and we were unable to recover it.
00:31:06.450 [2024-04-24 05:26:43.436768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.450 [2024-04-24 05:26:43.436930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.450 [2024-04-24 05:26:43.436956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.450 qpair failed and we were unable to recover it.
00:31:06.450 [2024-04-24 05:26:43.437086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.450 [2024-04-24 05:26:43.437262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.450 [2024-04-24 05:26:43.437290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.450 qpair failed and we were unable to recover it.
00:31:06.450 [2024-04-24 05:26:43.437454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.450 [2024-04-24 05:26:43.437585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.450 [2024-04-24 05:26:43.437613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.450 qpair failed and we were unable to recover it.
00:31:06.450 [2024-04-24 05:26:43.437807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.450 [2024-04-24 05:26:43.437964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.450 [2024-04-24 05:26:43.437990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.450 qpair failed and we were unable to recover it.
00:31:06.450 [2024-04-24 05:26:43.438177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.450 [2024-04-24 05:26:43.438352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.450 [2024-04-24 05:26:43.438378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.450 qpair failed and we were unable to recover it.
00:31:06.450 [2024-04-24 05:26:43.438535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.450 [2024-04-24 05:26:43.438661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.450 [2024-04-24 05:26:43.438689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.450 qpair failed and we were unable to recover it.
00:31:06.450 [2024-04-24 05:26:43.438812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.450 [2024-04-24 05:26:43.438979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.450 [2024-04-24 05:26:43.439005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.450 qpair failed and we were unable to recover it.
00:31:06.450 [2024-04-24 05:26:43.439191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.450 [2024-04-24 05:26:43.439334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.450 [2024-04-24 05:26:43.439363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.450 qpair failed and we were unable to recover it.
00:31:06.450 [2024-04-24 05:26:43.439560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.450 [2024-04-24 05:26:43.439745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.450 [2024-04-24 05:26:43.439776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.450 qpair failed and we were unable to recover it.
00:31:06.450 [2024-04-24 05:26:43.439915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.450 [2024-04-24 05:26:43.440071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.450 [2024-04-24 05:26:43.440109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.450 qpair failed and we were unable to recover it.
00:31:06.450 [2024-04-24 05:26:43.440250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.450 [2024-04-24 05:26:43.440411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.450 [2024-04-24 05:26:43.440438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.450 qpair failed and we were unable to recover it.
00:31:06.450 [2024-04-24 05:26:43.440610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.450 [2024-04-24 05:26:43.440781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.450 [2024-04-24 05:26:43.440810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.450 qpair failed and we were unable to recover it.
00:31:06.450 [2024-04-24 05:26:43.440955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.450 [2024-04-24 05:26:43.441112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.450 [2024-04-24 05:26:43.441137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.450 qpair failed and we were unable to recover it.
00:31:06.450 [2024-04-24 05:26:43.441287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.450 [2024-04-24 05:26:43.441488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.450 [2024-04-24 05:26:43.441517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.450 qpair failed and we were unable to recover it.
00:31:06.450 [2024-04-24 05:26:43.441691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.450 [2024-04-24 05:26:43.441848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.450 [2024-04-24 05:26:43.441874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.450 qpair failed and we were unable to recover it.
00:31:06.450 [2024-04-24 05:26:43.442032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.450 [2024-04-24 05:26:43.442194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.450 [2024-04-24 05:26:43.442224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.450 qpair failed and we were unable to recover it.
00:31:06.450 [2024-04-24 05:26:43.442391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.450 [2024-04-24 05:26:43.442551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.451 [2024-04-24 05:26:43.442580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.451 qpair failed and we were unable to recover it.
00:31:06.451 [2024-04-24 05:26:43.442748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.451 [2024-04-24 05:26:43.442922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.451 [2024-04-24 05:26:43.442951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.451 qpair failed and we were unable to recover it.
00:31:06.451 [2024-04-24 05:26:43.443104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.451 [2024-04-24 05:26:43.443250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.451 [2024-04-24 05:26:43.443276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.451 qpair failed and we were unable to recover it.
00:31:06.451 [2024-04-24 05:26:43.443394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.451 [2024-04-24 05:26:43.443545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.451 [2024-04-24 05:26:43.443571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.451 qpair failed and we were unable to recover it.
00:31:06.451 [2024-04-24 05:26:43.443739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.451 [2024-04-24 05:26:43.443945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.451 [2024-04-24 05:26:43.443971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.451 qpair failed and we were unable to recover it.
00:31:06.451 [2024-04-24 05:26:43.444125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.451 [2024-04-24 05:26:43.444251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.451 [2024-04-24 05:26:43.444293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.451 qpair failed and we were unable to recover it.
00:31:06.451 [2024-04-24 05:26:43.444442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.451 [2024-04-24 05:26:43.444639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.451 [2024-04-24 05:26:43.444684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.451 qpair failed and we were unable to recover it.
00:31:06.451 [2024-04-24 05:26:43.444800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.451 [2024-04-24 05:26:43.444933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.451 [2024-04-24 05:26:43.444962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.451 qpair failed and we were unable to recover it.
00:31:06.451 [2024-04-24 05:26:43.445132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.451 [2024-04-24 05:26:43.445284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.451 [2024-04-24 05:26:43.445312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.451 qpair failed and we were unable to recover it.
00:31:06.451 [2024-04-24 05:26:43.445439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.451 [2024-04-24 05:26:43.445589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.451 [2024-04-24 05:26:43.445619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.451 qpair failed and we were unable to recover it.
00:31:06.451 [2024-04-24 05:26:43.445753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.451 [2024-04-24 05:26:43.445884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.451 [2024-04-24 05:26:43.445909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.451 qpair failed and we were unable to recover it.
00:31:06.451 [2024-04-24 05:26:43.446058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.451 [2024-04-24 05:26:43.446226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.451 [2024-04-24 05:26:43.446257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.451 qpair failed and we were unable to recover it.
00:31:06.451 [2024-04-24 05:26:43.446434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.451 [2024-04-24 05:26:43.446560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.451 [2024-04-24 05:26:43.446586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.451 qpair failed and we were unable to recover it.
00:31:06.451 [2024-04-24 05:26:43.446740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.451 [2024-04-24 05:26:43.446892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.451 [2024-04-24 05:26:43.446919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.451 qpair failed and we were unable to recover it.
00:31:06.451 [2024-04-24 05:26:43.447097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.451 [2024-04-24 05:26:43.447244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.451 [2024-04-24 05:26:43.447270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.451 qpair failed and we were unable to recover it.
00:31:06.451 [2024-04-24 05:26:43.447395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.451 [2024-04-24 05:26:43.447568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.451 [2024-04-24 05:26:43.447594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.451 qpair failed and we were unable to recover it.
00:31:06.451 [2024-04-24 05:26:43.447770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.451 [2024-04-24 05:26:43.447889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.451 [2024-04-24 05:26:43.447915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.451 qpair failed and we were unable to recover it.
00:31:06.451 [2024-04-24 05:26:43.448040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.451 [2024-04-24 05:26:43.448191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.451 [2024-04-24 05:26:43.448234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.451 qpair failed and we were unable to recover it.
00:31:06.451 [2024-04-24 05:26:43.448431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.451 [2024-04-24 05:26:43.448556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.451 [2024-04-24 05:26:43.448582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.451 qpair failed and we were unable to recover it.
00:31:06.451 [2024-04-24 05:26:43.448705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.451 [2024-04-24 05:26:43.448851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.451 [2024-04-24 05:26:43.448877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.451 qpair failed and we were unable to recover it.
00:31:06.451 [2024-04-24 05:26:43.449108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.451 [2024-04-24 05:26:43.449239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.451 [2024-04-24 05:26:43.449267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.451 qpair failed and we were unable to recover it.
00:31:06.451 [2024-04-24 05:26:43.449393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.451 [2024-04-24 05:26:43.449599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.451 [2024-04-24 05:26:43.449726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.451 qpair failed and we were unable to recover it.
00:31:06.451 [2024-04-24 05:26:43.449885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.451 [2024-04-24 05:26:43.450044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.451 [2024-04-24 05:26:43.450073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.451 qpair failed and we were unable to recover it.
00:31:06.451 [2024-04-24 05:26:43.450252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.451 [2024-04-24 05:26:43.450369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.451 [2024-04-24 05:26:43.450402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.451 qpair failed and we were unable to recover it.
00:31:06.451 [2024-04-24 05:26:43.450529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.451 [2024-04-24 05:26:43.450653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.451 [2024-04-24 05:26:43.450681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.451 qpair failed and we were unable to recover it. 00:31:06.451 [2024-04-24 05:26:43.450834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.451 [2024-04-24 05:26:43.450995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.451 [2024-04-24 05:26:43.451025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.451 qpair failed and we were unable to recover it. 00:31:06.451 [2024-04-24 05:26:43.451206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.451 [2024-04-24 05:26:43.451360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.451 [2024-04-24 05:26:43.451403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.451 qpair failed and we were unable to recover it. 00:31:06.451 [2024-04-24 05:26:43.451562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.451 [2024-04-24 05:26:43.451726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.451 [2024-04-24 05:26:43.451756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.451 qpair failed and we were unable to recover it. 
00:31:06.451 [2024-04-24 05:26:43.451946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.451 [2024-04-24 05:26:43.452073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.451 [2024-04-24 05:26:43.452099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.451 qpair failed and we were unable to recover it. 00:31:06.451 [2024-04-24 05:26:43.452228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.451 [2024-04-24 05:26:43.452351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.452377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.452 qpair failed and we were unable to recover it. 00:31:06.452 [2024-04-24 05:26:43.452556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.452746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.452776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.452 qpair failed and we were unable to recover it. 00:31:06.452 [2024-04-24 05:26:43.452944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.453108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.453137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.452 qpair failed and we were unable to recover it. 
00:31:06.452 [2024-04-24 05:26:43.453313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.453462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.453505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.452 qpair failed and we were unable to recover it. 00:31:06.452 [2024-04-24 05:26:43.453696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.453844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.453870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.452 qpair failed and we were unable to recover it. 00:31:06.452 [2024-04-24 05:26:43.453990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.454135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.454160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.452 qpair failed and we were unable to recover it. 00:31:06.452 [2024-04-24 05:26:43.454311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.454431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.454457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.452 qpair failed and we were unable to recover it. 
00:31:06.452 [2024-04-24 05:26:43.454619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.454750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.454776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.452 qpair failed and we were unable to recover it. 00:31:06.452 [2024-04-24 05:26:43.454959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.455160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.455189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.452 qpair failed and we were unable to recover it. 00:31:06.452 [2024-04-24 05:26:43.455365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.455516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.455545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.452 qpair failed and we were unable to recover it. 00:31:06.452 [2024-04-24 05:26:43.455696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.455842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.455868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.452 qpair failed and we were unable to recover it. 
00:31:06.452 [2024-04-24 05:26:43.456024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.456175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.456201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.452 qpair failed and we were unable to recover it. 00:31:06.452 [2024-04-24 05:26:43.456386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.456541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.456567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.452 qpair failed and we were unable to recover it. 00:31:06.452 [2024-04-24 05:26:43.456719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.456903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.456932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.452 qpair failed and we were unable to recover it. 00:31:06.452 [2024-04-24 05:26:43.457073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.457249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.457275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.452 qpair failed and we were unable to recover it. 
00:31:06.452 [2024-04-24 05:26:43.457404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.457586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.457613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.452 qpair failed and we were unable to recover it. 00:31:06.452 [2024-04-24 05:26:43.457743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.457891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.457921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.452 qpair failed and we were unable to recover it. 00:31:06.452 [2024-04-24 05:26:43.458096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.458256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.458285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.452 qpair failed and we were unable to recover it. 00:31:06.452 [2024-04-24 05:26:43.458462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.458608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.458659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.452 qpair failed and we were unable to recover it. 
00:31:06.452 [2024-04-24 05:26:43.458855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.459006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.459034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.452 qpair failed and we were unable to recover it. 00:31:06.452 [2024-04-24 05:26:43.459191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.459315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.459342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.452 qpair failed and we were unable to recover it. 00:31:06.452 [2024-04-24 05:26:43.459494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.459620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.459663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.452 qpair failed and we were unable to recover it. 00:31:06.452 [2024-04-24 05:26:43.459829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.460022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.460048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.452 qpair failed and we were unable to recover it. 
00:31:06.452 [2024-04-24 05:26:43.460179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.460352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.460378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.452 qpair failed and we were unable to recover it. 00:31:06.452 [2024-04-24 05:26:43.460555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.460730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.460757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.452 qpair failed and we were unable to recover it. 00:31:06.452 [2024-04-24 05:26:43.460921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.461114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.461140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.452 qpair failed and we were unable to recover it. 00:31:06.452 [2024-04-24 05:26:43.461303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.461484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.461511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.452 qpair failed and we were unable to recover it. 
00:31:06.452 [2024-04-24 05:26:43.461625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.461773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.461799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.452 qpair failed and we were unable to recover it. 00:31:06.452 [2024-04-24 05:26:43.461920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.462040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.452 [2024-04-24 05:26:43.462065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.452 qpair failed and we were unable to recover it. 00:31:06.453 [2024-04-24 05:26:43.462182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.462357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.462382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.453 qpair failed and we were unable to recover it. 00:31:06.453 [2024-04-24 05:26:43.462566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.462718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.462762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.453 qpair failed and we were unable to recover it. 
00:31:06.453 [2024-04-24 05:26:43.462940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.463087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.463116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.453 qpair failed and we were unable to recover it. 00:31:06.453 [2024-04-24 05:26:43.463238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.463370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.463395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.453 qpair failed and we were unable to recover it. 00:31:06.453 [2024-04-24 05:26:43.463519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.463633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.463660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.453 qpair failed and we were unable to recover it. 00:31:06.453 [2024-04-24 05:26:43.463840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.464014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.464044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.453 qpair failed and we were unable to recover it. 
00:31:06.453 [2024-04-24 05:26:43.464195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.464328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.464358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.453 qpair failed and we were unable to recover it. 00:31:06.453 [2024-04-24 05:26:43.464516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.464695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.464722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.453 qpair failed and we were unable to recover it. 00:31:06.453 [2024-04-24 05:26:43.464876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.465028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.465054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.453 qpair failed and we were unable to recover it. 00:31:06.453 [2024-04-24 05:26:43.465219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.465370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.465396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.453 qpair failed and we were unable to recover it. 
00:31:06.453 [2024-04-24 05:26:43.465541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.465729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.465755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.453 qpair failed and we were unable to recover it. 00:31:06.453 [2024-04-24 05:26:43.465870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.466021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.466047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.453 qpair failed and we were unable to recover it. 00:31:06.453 [2024-04-24 05:26:43.466166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.466332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.466360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.453 qpair failed and we were unable to recover it. 00:31:06.453 [2024-04-24 05:26:43.466561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.466740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.466768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.453 qpair failed and we were unable to recover it. 
00:31:06.453 [2024-04-24 05:26:43.466922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.467049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.467075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.453 qpair failed and we were unable to recover it. 00:31:06.453 [2024-04-24 05:26:43.467257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.467438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.467464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.453 qpair failed and we were unable to recover it. 00:31:06.453 [2024-04-24 05:26:43.467609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.467753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.467781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.453 qpair failed and we were unable to recover it. 00:31:06.453 [2024-04-24 05:26:43.467965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.468141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.468168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.453 qpair failed and we were unable to recover it. 
00:31:06.453 [2024-04-24 05:26:43.468319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.468442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.468468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.453 qpair failed and we were unable to recover it. 00:31:06.453 [2024-04-24 05:26:43.468650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.468789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.468818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.453 qpair failed and we were unable to recover it. 00:31:06.453 [2024-04-24 05:26:43.468956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.469095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.469124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.453 qpair failed and we were unable to recover it. 00:31:06.453 [2024-04-24 05:26:43.469283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.469410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.469439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.453 qpair failed and we were unable to recover it. 
00:31:06.453 [2024-04-24 05:26:43.469588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.469750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.469793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.453 qpair failed and we were unable to recover it. 00:31:06.453 [2024-04-24 05:26:43.469976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.470126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.470152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.453 qpair failed and we were unable to recover it. 00:31:06.453 [2024-04-24 05:26:43.470308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.470425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.470467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.453 qpair failed and we were unable to recover it. 00:31:06.453 [2024-04-24 05:26:43.470651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.470806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.470833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.453 qpair failed and we were unable to recover it. 
00:31:06.453 [2024-04-24 05:26:43.471007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.471133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.471160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.453 qpair failed and we were unable to recover it. 00:31:06.453 [2024-04-24 05:26:43.471310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.471489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.471515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.453 qpair failed and we were unable to recover it. 00:31:06.453 [2024-04-24 05:26:43.471706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.471857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-04-24 05:26:43.471883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.453 qpair failed and we were unable to recover it. 00:31:06.454 [2024-04-24 05:26:43.472024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.472188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.472216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.454 qpair failed and we were unable to recover it. 
00:31:06.454 [2024-04-24 05:26:43.472401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.472556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.472582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.454 qpair failed and we were unable to recover it. 00:31:06.454 [2024-04-24 05:26:43.472710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.472858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.472885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.454 qpair failed and we were unable to recover it. 00:31:06.454 [2024-04-24 05:26:43.473060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.473218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.473248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.454 qpair failed and we were unable to recover it. 00:31:06.454 [2024-04-24 05:26:43.473400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.473549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.473575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.454 qpair failed and we were unable to recover it. 
00:31:06.454 [2024-04-24 05:26:43.473704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.473821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.473847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.454 qpair failed and we were unable to recover it. 00:31:06.454 [2024-04-24 05:26:43.474045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.474234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.474263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.454 qpair failed and we were unable to recover it. 00:31:06.454 [2024-04-24 05:26:43.474404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.474596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.474625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.454 qpair failed and we were unable to recover it. 00:31:06.454 [2024-04-24 05:26:43.474793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.474949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.474975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.454 qpair failed and we were unable to recover it. 
00:31:06.454 [2024-04-24 05:26:43.475126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.475245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.475271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.454 qpair failed and we were unable to recover it. 00:31:06.454 [2024-04-24 05:26:43.475422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.475612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.475655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.454 qpair failed and we were unable to recover it. 00:31:06.454 [2024-04-24 05:26:43.475807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.475947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.475973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.454 qpair failed and we were unable to recover it. 00:31:06.454 [2024-04-24 05:26:43.476147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.476305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.476334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.454 qpair failed and we were unable to recover it. 
00:31:06.454 [2024-04-24 05:26:43.476472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.476644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.476675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.454 qpair failed and we were unable to recover it. 00:31:06.454 [2024-04-24 05:26:43.476872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.477004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.477030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.454 qpair failed and we were unable to recover it. 00:31:06.454 [2024-04-24 05:26:43.477187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.477346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.477373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.454 qpair failed and we were unable to recover it. 00:31:06.454 [2024-04-24 05:26:43.477519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.477707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.477734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.454 qpair failed and we were unable to recover it. 
00:31:06.454 [2024-04-24 05:26:43.477854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.477974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.478001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.454 qpair failed and we were unable to recover it. 00:31:06.454 [2024-04-24 05:26:43.478146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.478298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.478324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.454 qpair failed and we were unable to recover it. 00:31:06.454 [2024-04-24 05:26:43.478499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.478652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.478680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.454 qpair failed and we were unable to recover it. 00:31:06.454 [2024-04-24 05:26:43.478822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.479016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.479044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.454 qpair failed and we were unable to recover it. 
00:31:06.454 [2024-04-24 05:26:43.479243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.479388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.479413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.454 qpair failed and we were unable to recover it. 00:31:06.454 [2024-04-24 05:26:43.479595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.479736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.479765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.454 qpair failed and we were unable to recover it. 00:31:06.454 [2024-04-24 05:26:43.479931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.480053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.480078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.454 qpair failed and we were unable to recover it. 00:31:06.454 [2024-04-24 05:26:43.480253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.480418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.454 [2024-04-24 05:26:43.480451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.454 qpair failed and we were unable to recover it. 
00:31:06.454 [2024-04-24 05:26:43.480642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.480810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.480839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.455 qpair failed and we were unable to recover it. 00:31:06.455 [2024-04-24 05:26:43.481016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.481194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.481221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.455 qpair failed and we were unable to recover it. 00:31:06.455 [2024-04-24 05:26:43.481342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.481515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.481543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.455 qpair failed and we were unable to recover it. 00:31:06.455 [2024-04-24 05:26:43.481743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.481914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.481943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.455 qpair failed and we were unable to recover it. 
00:31:06.455 [2024-04-24 05:26:43.482109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.482283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.482312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.455 qpair failed and we were unable to recover it. 00:31:06.455 [2024-04-24 05:26:43.482446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.482589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.482617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.455 qpair failed and we were unable to recover it. 00:31:06.455 [2024-04-24 05:26:43.482839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.482988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.483014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.455 qpair failed and we were unable to recover it. 00:31:06.455 [2024-04-24 05:26:43.483174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.483325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.483351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.455 qpair failed and we were unable to recover it. 
00:31:06.455 [2024-04-24 05:26:43.483525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.483689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.483719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.455 qpair failed and we were unable to recover it. 00:31:06.455 [2024-04-24 05:26:43.483874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.484072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.484098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.455 qpair failed and we were unable to recover it. 00:31:06.455 [2024-04-24 05:26:43.484262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.484392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.484434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.455 qpair failed and we were unable to recover it. 00:31:06.455 [2024-04-24 05:26:43.484618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.484768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.484794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.455 qpair failed and we were unable to recover it. 
00:31:06.455 [2024-04-24 05:26:43.484948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.485066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.485092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.455 qpair failed and we were unable to recover it. 00:31:06.455 [2024-04-24 05:26:43.485255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.485379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.485406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.455 qpair failed and we were unable to recover it. 00:31:06.455 [2024-04-24 05:26:43.485551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.485668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.485695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.455 qpair failed and we were unable to recover it. 00:31:06.455 [2024-04-24 05:26:43.485817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.485968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.485994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.455 qpair failed and we were unable to recover it. 
00:31:06.455 [2024-04-24 05:26:43.486115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.486235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.486262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.455 qpair failed and we were unable to recover it. 00:31:06.455 [2024-04-24 05:26:43.486414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.486546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.486572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.455 qpair failed and we were unable to recover it. 00:31:06.455 [2024-04-24 05:26:43.486694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.486865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.486896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.455 qpair failed and we were unable to recover it. 00:31:06.455 [2024-04-24 05:26:43.487067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.487192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.487218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.455 qpair failed and we were unable to recover it. 
00:31:06.455 [2024-04-24 05:26:43.487432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.487570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.487599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.455 qpair failed and we were unable to recover it. 00:31:06.455 [2024-04-24 05:26:43.487750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.487924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.487968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.455 qpair failed and we were unable to recover it. 00:31:06.455 [2024-04-24 05:26:43.488140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.488289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.488315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.455 qpair failed and we were unable to recover it. 00:31:06.455 [2024-04-24 05:26:43.488455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.488636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.488678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.455 qpair failed and we were unable to recover it. 
00:31:06.455 [2024-04-24 05:26:43.488839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.488998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.489027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.455 qpair failed and we were unable to recover it. 00:31:06.455 [2024-04-24 05:26:43.489199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.489354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.489397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.455 qpair failed and we were unable to recover it. 00:31:06.455 [2024-04-24 05:26:43.489553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.489713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.489743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.455 qpair failed and we were unable to recover it. 00:31:06.455 [2024-04-24 05:26:43.489889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.490042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.455 [2024-04-24 05:26:43.490069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.456 qpair failed and we were unable to recover it. 
00:31:06.456 [2024-04-24 05:26:43.490220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.456 [2024-04-24 05:26:43.490347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.456 [2024-04-24 05:26:43.490373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.456 qpair failed and we were unable to recover it. 00:31:06.456 [2024-04-24 05:26:43.490561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.456 [2024-04-24 05:26:43.490717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.456 [2024-04-24 05:26:43.490745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.456 qpair failed and we were unable to recover it. 00:31:06.456 [2024-04-24 05:26:43.490894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.456 [2024-04-24 05:26:43.491039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.456 [2024-04-24 05:26:43.491065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.456 qpair failed and we were unable to recover it. 00:31:06.456 [2024-04-24 05:26:43.491216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.456 [2024-04-24 05:26:43.491371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.456 [2024-04-24 05:26:43.491397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.456 qpair failed and we were unable to recover it. 
00:31:06.456 [2024-04-24 05:26:43.491516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.456 [2024-04-24 05:26:43.491732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.456 [2024-04-24 05:26:43.491759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.456 qpair failed and we were unable to recover it. 00:31:06.456 [2024-04-24 05:26:43.491884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.456 [2024-04-24 05:26:43.492041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.456 [2024-04-24 05:26:43.492067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.456 qpair failed and we were unable to recover it. 00:31:06.456 [2024-04-24 05:26:43.492221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.456 [2024-04-24 05:26:43.492421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.456 [2024-04-24 05:26:43.492449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.456 qpair failed and we were unable to recover it. 00:31:06.456 [2024-04-24 05:26:43.492612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.456 [2024-04-24 05:26:43.492779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.456 [2024-04-24 05:26:43.492808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.456 qpair failed and we were unable to recover it. 
00:31:06.456 [2024-04-24 05:26:43.492971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.456 [2024-04-24 05:26:43.493137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.456 [2024-04-24 05:26:43.493166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.456 qpair failed and we were unable to recover it. 00:31:06.456 [2024-04-24 05:26:43.493354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.456 [2024-04-24 05:26:43.493528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.456 [2024-04-24 05:26:43.493554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.456 qpair failed and we were unable to recover it. 00:31:06.456 [2024-04-24 05:26:43.493695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.456 [2024-04-24 05:26:43.493817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.456 [2024-04-24 05:26:43.493843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.456 qpair failed and we were unable to recover it. 00:31:06.456 [2024-04-24 05:26:43.493969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.456 [2024-04-24 05:26:43.494166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.456 [2024-04-24 05:26:43.494195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.456 qpair failed and we were unable to recover it. 
00:31:06.456 [2024-04-24 05:26:43.494338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.456 [2024-04-24 05:26:43.494492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.456 [2024-04-24 05:26:43.494534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.456 qpair failed and we were unable to recover it. 00:31:06.456 [2024-04-24 05:26:43.494722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.456 [2024-04-24 05:26:43.494873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.456 [2024-04-24 05:26:43.494899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.456 qpair failed and we were unable to recover it. 00:31:06.456 [2024-04-24 05:26:43.495018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.456 [2024-04-24 05:26:43.495143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.456 [2024-04-24 05:26:43.495169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.456 qpair failed and we were unable to recover it. 00:31:06.456 [2024-04-24 05:26:43.495311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.456 [2024-04-24 05:26:43.495459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.456 [2024-04-24 05:26:43.495485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.456 qpair failed and we were unable to recover it. 
00:31:06.456 [2024-04-24 05:26:43.495653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.456 [2024-04-24 05:26:43.495818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.456 [2024-04-24 05:26:43.495847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.456 qpair failed and we were unable to recover it. 00:31:06.456 [2024-04-24 05:26:43.496038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.456 [2024-04-24 05:26:43.496257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.456 [2024-04-24 05:26:43.496307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.456 qpair failed and we were unable to recover it. 00:31:06.456 [2024-04-24 05:26:43.496502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.456 [2024-04-24 05:26:43.496670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.456 [2024-04-24 05:26:43.496699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.456 qpair failed and we were unable to recover it. 00:31:06.456 [2024-04-24 05:26:43.496891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.456 [2024-04-24 05:26:43.497036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.456 [2024-04-24 05:26:43.497062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.456 qpair failed and we were unable to recover it. 
00:31:06.456 [2024-04-24 05:26:43.497216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.456 [2024-04-24 05:26:43.497383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.456 [2024-04-24 05:26:43.497412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.456 qpair failed and we were unable to recover it.
00:31:06.456 [2024-04-24 05:26:43.497579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.456 [2024-04-24 05:26:43.497754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.456 [2024-04-24 05:26:43.497781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.456 qpair failed and we were unable to recover it.
00:31:06.456 [2024-04-24 05:26:43.497902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.456 [2024-04-24 05:26:43.498077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.456 [2024-04-24 05:26:43.498107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.456 qpair failed and we were unable to recover it.
00:31:06.456 [2024-04-24 05:26:43.498300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.456 [2024-04-24 05:26:43.498420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.456 [2024-04-24 05:26:43.498447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.456 qpair failed and we were unable to recover it.
00:31:06.456 [2024-04-24 05:26:43.498623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.456 [2024-04-24 05:26:43.498763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.456 [2024-04-24 05:26:43.498789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.456 qpair failed and we were unable to recover it.
00:31:06.456 [2024-04-24 05:26:43.498915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.456 [2024-04-24 05:26:43.499114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.456 [2024-04-24 05:26:43.499142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.456 qpair failed and we were unable to recover it.
00:31:06.456 [2024-04-24 05:26:43.499334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.456 [2024-04-24 05:26:43.499477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.456 [2024-04-24 05:26:43.499505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.456 qpair failed and we were unable to recover it.
00:31:06.456 [2024-04-24 05:26:43.499689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.456 [2024-04-24 05:26:43.499866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.456 [2024-04-24 05:26:43.499895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.456 qpair failed and we were unable to recover it.
00:31:06.456 [2024-04-24 05:26:43.500071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.456 [2024-04-24 05:26:43.500247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.500273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.457 qpair failed and we were unable to recover it.
00:31:06.457 [2024-04-24 05:26:43.500452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.500597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.500623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.457 qpair failed and we were unable to recover it.
00:31:06.457 [2024-04-24 05:26:43.500759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.500909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.500935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.457 qpair failed and we were unable to recover it.
00:31:06.457 [2024-04-24 05:26:43.501086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.501260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.501289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.457 qpair failed and we were unable to recover it.
00:31:06.457 [2024-04-24 05:26:43.501461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.501610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.501651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.457 qpair failed and we were unable to recover it.
00:31:06.457 [2024-04-24 05:26:43.501817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.501973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.501999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.457 qpair failed and we were unable to recover it.
00:31:06.457 [2024-04-24 05:26:43.502128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.502287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.502313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.457 qpair failed and we were unable to recover it.
00:31:06.457 [2024-04-24 05:26:43.502470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.502654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.502684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.457 qpair failed and we were unable to recover it.
00:31:06.457 [2024-04-24 05:26:43.502829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.502981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.503007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.457 qpair failed and we were unable to recover it.
00:31:06.457 [2024-04-24 05:26:43.503207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.503413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.503439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.457 qpair failed and we were unable to recover it.
00:31:06.457 [2024-04-24 05:26:43.503613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.503775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.503802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.457 qpair failed and we were unable to recover it.
00:31:06.457 [2024-04-24 05:26:43.503925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.504079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.504105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.457 qpair failed and we were unable to recover it.
00:31:06.457 [2024-04-24 05:26:43.504291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.504460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.504488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.457 qpair failed and we were unable to recover it.
00:31:06.457 [2024-04-24 05:26:43.504657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.504832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.504858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.457 qpair failed and we were unable to recover it.
00:31:06.457 [2024-04-24 05:26:43.504977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.505142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.505168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.457 qpair failed and we were unable to recover it.
00:31:06.457 [2024-04-24 05:26:43.505324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.505500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.505527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.457 qpair failed and we were unable to recover it.
00:31:06.457 [2024-04-24 05:26:43.505666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.505838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.505867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.457 qpair failed and we were unable to recover it.
00:31:06.457 [2024-04-24 05:26:43.506040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.506210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.506258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.457 qpair failed and we were unable to recover it.
00:31:06.457 [2024-04-24 05:26:43.506421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.506575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.506604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.457 qpair failed and we were unable to recover it.
00:31:06.457 [2024-04-24 05:26:43.506772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.506923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.506949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.457 qpair failed and we were unable to recover it.
00:31:06.457 [2024-04-24 05:26:43.507103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.507251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.507277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.457 qpair failed and we were unable to recover it.
00:31:06.457 [2024-04-24 05:26:43.507494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.507652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.507681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.457 qpair failed and we were unable to recover it.
00:31:06.457 [2024-04-24 05:26:43.507831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.508005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.508047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.457 qpair failed and we were unable to recover it.
00:31:06.457 [2024-04-24 05:26:43.508221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.508341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.508367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.457 qpair failed and we were unable to recover it.
00:31:06.457 [2024-04-24 05:26:43.508554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.508708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.508735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.457 qpair failed and we were unable to recover it.
00:31:06.457 [2024-04-24 05:26:43.508869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.508985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.457 [2024-04-24 05:26:43.509027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.457 qpair failed and we were unable to recover it.
00:31:06.457 [2024-04-24 05:26:43.509174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.509331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.509357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.458 qpair failed and we were unable to recover it.
00:31:06.458 [2024-04-24 05:26:43.509511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.509667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.509693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.458 qpair failed and we were unable to recover it.
00:31:06.458 [2024-04-24 05:26:43.509844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.510033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.510061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.458 qpair failed and we were unable to recover it.
00:31:06.458 [2024-04-24 05:26:43.510230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.510397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.510422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.458 qpair failed and we were unable to recover it.
00:31:06.458 [2024-04-24 05:26:43.510572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.510721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.510748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.458 qpair failed and we were unable to recover it.
00:31:06.458 [2024-04-24 05:26:43.510927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.511050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.511077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.458 qpair failed and we were unable to recover it.
00:31:06.458 [2024-04-24 05:26:43.511193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.511341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.511367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.458 qpair failed and we were unable to recover it.
00:31:06.458 [2024-04-24 05:26:43.511567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.511726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.511755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.458 qpair failed and we were unable to recover it.
00:31:06.458 [2024-04-24 05:26:43.511921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.512058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.512088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.458 qpair failed and we were unable to recover it.
00:31:06.458 [2024-04-24 05:26:43.512243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.512387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.512413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.458 qpair failed and we were unable to recover it.
00:31:06.458 [2024-04-24 05:26:43.512587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.512769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.512795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.458 qpair failed and we were unable to recover it.
00:31:06.458 [2024-04-24 05:26:43.512919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.513056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.513084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.458 qpair failed and we were unable to recover it.
00:31:06.458 [2024-04-24 05:26:43.513221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.513372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.513399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.458 qpair failed and we were unable to recover it.
00:31:06.458 [2024-04-24 05:26:43.513548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.513732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.513759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.458 qpair failed and we were unable to recover it.
00:31:06.458 [2024-04-24 05:26:43.513991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.514167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.514196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.458 qpair failed and we were unable to recover it.
00:31:06.458 [2024-04-24 05:26:43.514369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.514528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.514554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.458 qpair failed and we were unable to recover it.
00:31:06.458 [2024-04-24 05:26:43.514727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.514853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.514897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.458 qpair failed and we were unable to recover it.
00:31:06.458 [2024-04-24 05:26:43.515088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.515254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.515285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.458 qpair failed and we were unable to recover it.
00:31:06.458 [2024-04-24 05:26:43.515446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.515594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.515622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.458 qpair failed and we were unable to recover it.
00:31:06.458 [2024-04-24 05:26:43.515800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.515964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.515994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.458 qpair failed and we were unable to recover it.
00:31:06.458 [2024-04-24 05:26:43.516168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.516322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.516349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.458 qpair failed and we were unable to recover it.
00:31:06.458 [2024-04-24 05:26:43.516500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.516640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.516667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.458 qpair failed and we were unable to recover it.
00:31:06.458 [2024-04-24 05:26:43.516842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.517047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.517094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.458 qpair failed and we were unable to recover it.
00:31:06.458 [2024-04-24 05:26:43.517285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.517455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.517484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.458 qpair failed and we were unable to recover it.
00:31:06.458 [2024-04-24 05:26:43.517634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.517814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.517840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.458 qpair failed and we were unable to recover it.
00:31:06.458 [2024-04-24 05:26:43.517993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.518119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.518145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.458 qpair failed and we were unable to recover it.
00:31:06.458 [2024-04-24 05:26:43.518373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.518568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.518597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.458 qpair failed and we were unable to recover it.
00:31:06.458 [2024-04-24 05:26:43.518774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.518899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.458 [2024-04-24 05:26:43.518926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.458 qpair failed and we were unable to recover it.
00:31:06.459 [2024-04-24 05:26:43.519106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.459 [2024-04-24 05:26:43.519271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.459 [2024-04-24 05:26:43.519300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.459 qpair failed and we were unable to recover it.
00:31:06.459 [2024-04-24 05:26:43.519462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.459 [2024-04-24 05:26:43.519609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.459 [2024-04-24 05:26:43.519652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.459 qpair failed and we were unable to recover it.
00:31:06.459 [2024-04-24 05:26:43.519832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.459 [2024-04-24 05:26:43.519954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.459 [2024-04-24 05:26:43.519998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.459 qpair failed and we were unable to recover it.
00:31:06.459 [2024-04-24 05:26:43.520160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.459 [2024-04-24 05:26:43.520346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.459 [2024-04-24 05:26:43.520393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.459 qpair failed and we were unable to recover it.
00:31:06.459 [2024-04-24 05:26:43.520536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.459 [2024-04-24 05:26:43.520673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.459 [2024-04-24 05:26:43.520703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.459 qpair failed and we were unable to recover it.
00:31:06.459 [2024-04-24 05:26:43.520878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.459 [2024-04-24 05:26:43.521055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.459 [2024-04-24 05:26:43.521080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.459 qpair failed and we were unable to recover it.
00:31:06.459 [2024-04-24 05:26:43.521228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.459 [2024-04-24 05:26:43.521458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.459 [2024-04-24 05:26:43.521485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.459 qpair failed and we were unable to recover it.
00:31:06.459 [2024-04-24 05:26:43.521614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.459 [2024-04-24 05:26:43.521779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.459 [2024-04-24 05:26:43.521805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.459 qpair failed and we were unable to recover it.
00:31:06.459 [2024-04-24 05:26:43.521980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.459 [2024-04-24 05:26:43.522121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.459 [2024-04-24 05:26:43.522147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.459 qpair failed and we were unable to recover it.
00:31:06.459 [2024-04-24 05:26:43.522302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.459 [2024-04-24 05:26:43.522456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.459 [2024-04-24 05:26:43.522482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.459 qpair failed and we were unable to recover it.
00:31:06.459 [2024-04-24 05:26:43.522631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.459 [2024-04-24 05:26:43.522829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.459 [2024-04-24 05:26:43.522858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.459 qpair failed and we were unable to recover it. 00:31:06.459 [2024-04-24 05:26:43.523009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.459 [2024-04-24 05:26:43.523167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.459 [2024-04-24 05:26:43.523209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.459 qpair failed and we were unable to recover it. 00:31:06.459 [2024-04-24 05:26:43.523352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.459 [2024-04-24 05:26:43.523549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.459 [2024-04-24 05:26:43.523577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.459 qpair failed and we were unable to recover it. 00:31:06.459 [2024-04-24 05:26:43.523790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.459 [2024-04-24 05:26:43.523940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.459 [2024-04-24 05:26:43.523966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.459 qpair failed and we were unable to recover it. 
00:31:06.459 [2024-04-24 05:26:43.524117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.459 [2024-04-24 05:26:43.524238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.459 [2024-04-24 05:26:43.524264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.459 qpair failed and we were unable to recover it. 00:31:06.459 [2024-04-24 05:26:43.524418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.459 [2024-04-24 05:26:43.524602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.459 [2024-04-24 05:26:43.524636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.459 qpair failed and we were unable to recover it. 00:31:06.459 [2024-04-24 05:26:43.524793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.459 [2024-04-24 05:26:43.524917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.459 [2024-04-24 05:26:43.524943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.459 qpair failed and we were unable to recover it. 00:31:06.459 [2024-04-24 05:26:43.525095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.459 [2024-04-24 05:26:43.525249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.459 [2024-04-24 05:26:43.525275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.459 qpair failed and we were unable to recover it. 
00:31:06.459 [2024-04-24 05:26:43.525450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.459 [2024-04-24 05:26:43.525659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.459 [2024-04-24 05:26:43.525688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.459 qpair failed and we were unable to recover it. 00:31:06.459 [2024-04-24 05:26:43.525836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.459 [2024-04-24 05:26:43.525992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.459 [2024-04-24 05:26:43.526018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.459 qpair failed and we were unable to recover it. 00:31:06.459 [2024-04-24 05:26:43.526162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.459 [2024-04-24 05:26:43.526309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.459 [2024-04-24 05:26:43.526335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.459 qpair failed and we were unable to recover it. 00:31:06.459 [2024-04-24 05:26:43.526456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.459 [2024-04-24 05:26:43.526578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.459 [2024-04-24 05:26:43.526604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.459 qpair failed and we were unable to recover it. 
00:31:06.459 [2024-04-24 05:26:43.526774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.459 [2024-04-24 05:26:43.526927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.459 [2024-04-24 05:26:43.526970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.459 qpair failed and we were unable to recover it. 00:31:06.459 [2024-04-24 05:26:43.527172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.459 [2024-04-24 05:26:43.527294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.459 [2024-04-24 05:26:43.527337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.459 qpair failed and we were unable to recover it. 00:31:06.459 [2024-04-24 05:26:43.527528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.459 [2024-04-24 05:26:43.527680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.527706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.460 qpair failed and we were unable to recover it. 00:31:06.460 [2024-04-24 05:26:43.527857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.527986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.528012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.460 qpair failed and we were unable to recover it. 
00:31:06.460 [2024-04-24 05:26:43.528135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.528291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.528316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.460 qpair failed and we were unable to recover it. 00:31:06.460 [2024-04-24 05:26:43.528489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.528651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.528680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.460 qpair failed and we were unable to recover it. 00:31:06.460 [2024-04-24 05:26:43.528857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.529032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.529061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.460 qpair failed and we were unable to recover it. 00:31:06.460 [2024-04-24 05:26:43.529233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.529404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.529430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.460 qpair failed and we were unable to recover it. 
00:31:06.460 [2024-04-24 05:26:43.529579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.529734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.529762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.460 qpair failed and we were unable to recover it. 00:31:06.460 [2024-04-24 05:26:43.529942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.530177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.530223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.460 qpair failed and we were unable to recover it. 00:31:06.460 [2024-04-24 05:26:43.530399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.530529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.530555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.460 qpair failed and we were unable to recover it. 00:31:06.460 [2024-04-24 05:26:43.530697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.530873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.530902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.460 qpair failed and we were unable to recover it. 
00:31:06.460 [2024-04-24 05:26:43.531035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.531178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.531203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.460 qpair failed and we were unable to recover it. 00:31:06.460 [2024-04-24 05:26:43.531378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.531510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.531537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.460 qpair failed and we were unable to recover it. 00:31:06.460 [2024-04-24 05:26:43.531735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.531870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.531899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.460 qpair failed and we were unable to recover it. 00:31:06.460 [2024-04-24 05:26:43.532066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.532256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.532286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.460 qpair failed and we were unable to recover it. 
00:31:06.460 [2024-04-24 05:26:43.532438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.532589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.532616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.460 qpair failed and we were unable to recover it. 00:31:06.460 [2024-04-24 05:26:43.532809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.532974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.533000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.460 qpair failed and we were unable to recover it. 00:31:06.460 [2024-04-24 05:26:43.533177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.533337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.533372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.460 qpair failed and we were unable to recover it. 00:31:06.460 [2024-04-24 05:26:43.533615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.533764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.533790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.460 qpair failed and we were unable to recover it. 
00:31:06.460 [2024-04-24 05:26:43.533965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.534159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.534185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.460 qpair failed and we were unable to recover it. 00:31:06.460 [2024-04-24 05:26:43.534341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.534566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.534592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.460 qpair failed and we were unable to recover it. 00:31:06.460 [2024-04-24 05:26:43.534750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.534900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.534927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.460 qpair failed and we were unable to recover it. 00:31:06.460 [2024-04-24 05:26:43.535104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.535285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.535314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.460 qpair failed and we were unable to recover it. 
00:31:06.460 [2024-04-24 05:26:43.535511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.535683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.535713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.460 qpair failed and we were unable to recover it. 00:31:06.460 [2024-04-24 05:26:43.535878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.536030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.536056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.460 qpair failed and we were unable to recover it. 00:31:06.460 [2024-04-24 05:26:43.536212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.536365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.536391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.460 qpair failed and we were unable to recover it. 00:31:06.460 [2024-04-24 05:26:43.536574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.536723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.536749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.460 qpair failed and we were unable to recover it. 
00:31:06.460 [2024-04-24 05:26:43.536870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.537024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.537066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.460 qpair failed and we were unable to recover it. 00:31:06.460 [2024-04-24 05:26:43.537269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.537402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.537432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.460 qpair failed and we were unable to recover it. 00:31:06.460 [2024-04-24 05:26:43.537567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.537726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.537759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.460 qpair failed and we were unable to recover it. 00:31:06.460 [2024-04-24 05:26:43.537929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.460 [2024-04-24 05:26:43.538103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.538129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.461 qpair failed and we were unable to recover it. 
00:31:06.461 [2024-04-24 05:26:43.538248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.538381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.538407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.461 qpair failed and we were unable to recover it. 00:31:06.461 [2024-04-24 05:26:43.538550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.538760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.538789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.461 qpair failed and we were unable to recover it. 00:31:06.461 [2024-04-24 05:26:43.538937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.539084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.539110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.461 qpair failed and we were unable to recover it. 00:31:06.461 [2024-04-24 05:26:43.539265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.539411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.539437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.461 qpair failed and we were unable to recover it. 
00:31:06.461 [2024-04-24 05:26:43.539593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.539753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.539779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.461 qpair failed and we were unable to recover it. 00:31:06.461 [2024-04-24 05:26:43.539911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.540084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.540110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.461 qpair failed and we were unable to recover it. 00:31:06.461 [2024-04-24 05:26:43.540261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.540402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.540429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.461 qpair failed and we were unable to recover it. 00:31:06.461 [2024-04-24 05:26:43.540604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.540785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.540811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.461 qpair failed and we were unable to recover it. 
00:31:06.461 [2024-04-24 05:26:43.540988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.541181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.541210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.461 qpair failed and we were unable to recover it. 00:31:06.461 [2024-04-24 05:26:43.541390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.541572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.541598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.461 qpair failed and we were unable to recover it. 00:31:06.461 [2024-04-24 05:26:43.541721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.541875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.541916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.461 qpair failed and we were unable to recover it. 00:31:06.461 [2024-04-24 05:26:43.542064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.542240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.542265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.461 qpair failed and we were unable to recover it. 
00:31:06.461 [2024-04-24 05:26:43.542412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.542581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.542610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.461 qpair failed and we were unable to recover it. 00:31:06.461 [2024-04-24 05:26:43.542864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.543058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.543099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.461 qpair failed and we were unable to recover it. 00:31:06.461 [2024-04-24 05:26:43.543251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.543399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.543425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.461 qpair failed and we were unable to recover it. 00:31:06.461 [2024-04-24 05:26:43.543549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.543697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.543724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.461 qpair failed and we were unable to recover it. 
00:31:06.461 [2024-04-24 05:26:43.543958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.544154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.544183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.461 qpair failed and we were unable to recover it. 00:31:06.461 [2024-04-24 05:26:43.544379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.544533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.544560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.461 qpair failed and we were unable to recover it. 00:31:06.461 [2024-04-24 05:26:43.544733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.544887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.544913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.461 qpair failed and we were unable to recover it. 00:31:06.461 [2024-04-24 05:26:43.545047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.545165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.545191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.461 qpair failed and we were unable to recover it. 
00:31:06.461 [2024-04-24 05:26:43.545315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.545444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.545471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.461 qpair failed and we were unable to recover it. 00:31:06.461 [2024-04-24 05:26:43.545659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.545807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.545833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.461 qpair failed and we were unable to recover it. 00:31:06.461 [2024-04-24 05:26:43.546003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.546160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.546189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.461 qpair failed and we were unable to recover it. 00:31:06.461 [2024-04-24 05:26:43.546358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.546478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.546504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.461 qpair failed and we were unable to recover it. 
00:31:06.461 [2024-04-24 05:26:43.546657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.546810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.546837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.461 qpair failed and we were unable to recover it. 00:31:06.461 [2024-04-24 05:26:43.547010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.547205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.547233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.461 qpair failed and we were unable to recover it. 00:31:06.461 [2024-04-24 05:26:43.547381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.547565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.547607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.461 qpair failed and we were unable to recover it. 00:31:06.461 [2024-04-24 05:26:43.547774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.547917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.461 [2024-04-24 05:26:43.547947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.461 qpair failed and we were unable to recover it. 
00:31:06.462 [2024-04-24 05:26:43.548126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.548275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.548301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.462 qpair failed and we were unable to recover it.
00:31:06.462 [2024-04-24 05:26:43.548532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.548681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.548711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.462 qpair failed and we were unable to recover it.
00:31:06.462 [2024-04-24 05:26:43.548913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.549093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.549119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.462 qpair failed and we were unable to recover it.
00:31:06.462 [2024-04-24 05:26:43.549247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.549400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.549426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.462 qpair failed and we were unable to recover it.
00:31:06.462 [2024-04-24 05:26:43.549577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.549709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.549735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.462 qpair failed and we were unable to recover it.
00:31:06.462 [2024-04-24 05:26:43.549886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.550055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.550085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.462 qpair failed and we were unable to recover it.
00:31:06.462 [2024-04-24 05:26:43.550252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.550426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.550455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.462 qpair failed and we were unable to recover it.
00:31:06.462 [2024-04-24 05:26:43.550626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.550863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.550889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.462 qpair failed and we were unable to recover it.
00:31:06.462 [2024-04-24 05:26:43.551018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.551191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.551219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.462 qpair failed and we were unable to recover it.
00:31:06.462 [2024-04-24 05:26:43.551390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.551514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.551541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.462 qpair failed and we were unable to recover it.
00:31:06.462 [2024-04-24 05:26:43.551723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.551851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.551877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.462 qpair failed and we were unable to recover it.
00:31:06.462 [2024-04-24 05:26:43.552014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.552154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.552181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.462 qpair failed and we were unable to recover it.
00:31:06.462 [2024-04-24 05:26:43.552329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.552477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.552502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.462 qpair failed and we were unable to recover it.
00:31:06.462 [2024-04-24 05:26:43.552635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.552763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.552791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.462 qpair failed and we were unable to recover it.
00:31:06.462 [2024-04-24 05:26:43.552998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.553162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.553191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.462 qpair failed and we were unable to recover it.
00:31:06.462 [2024-04-24 05:26:43.553351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.553547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.553573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.462 qpair failed and we were unable to recover it.
00:31:06.462 [2024-04-24 05:26:43.553745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.553894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.553920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.462 qpair failed and we were unable to recover it.
00:31:06.462 [2024-04-24 05:26:43.554037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.554157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.554183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.462 qpair failed and we were unable to recover it.
00:31:06.462 [2024-04-24 05:26:43.554335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.554487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.554512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.462 qpair failed and we were unable to recover it.
00:31:06.462 [2024-04-24 05:26:43.554669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.554827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.554854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.462 qpair failed and we were unable to recover it.
00:31:06.462 [2024-04-24 05:26:43.555048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.555184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.555212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.462 qpair failed and we were unable to recover it.
00:31:06.462 [2024-04-24 05:26:43.555392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.555571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.555601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.462 qpair failed and we were unable to recover it.
00:31:06.462 [2024-04-24 05:26:43.555733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.555903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.555929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.462 qpair failed and we were unable to recover it.
00:31:06.462 [2024-04-24 05:26:43.556050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.556168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.556195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.462 qpair failed and we were unable to recover it.
00:31:06.462 [2024-04-24 05:26:43.556354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.556594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.556622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.462 qpair failed and we were unable to recover it.
00:31:06.462 [2024-04-24 05:26:43.556802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.556953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.556995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.462 qpair failed and we were unable to recover it.
00:31:06.462 [2024-04-24 05:26:43.557159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.557322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.557348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.462 qpair failed and we were unable to recover it.
00:31:06.462 [2024-04-24 05:26:43.557524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.557703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.557730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.462 qpair failed and we were unable to recover it.
00:31:06.462 [2024-04-24 05:26:43.557878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.558024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.462 [2024-04-24 05:26:43.558049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.462 qpair failed and we were unable to recover it.
00:31:06.463 [2024-04-24 05:26:43.558204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.558365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.558394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.463 qpair failed and we were unable to recover it.
00:31:06.463 [2024-04-24 05:26:43.558556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.558722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.558753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.463 qpair failed and we were unable to recover it.
00:31:06.463 [2024-04-24 05:26:43.558927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.559106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.559136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.463 qpair failed and we were unable to recover it.
00:31:06.463 [2024-04-24 05:26:43.559291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.559440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.559467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.463 qpair failed and we were unable to recover it.
00:31:06.463 [2024-04-24 05:26:43.559616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.559816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.559846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.463 qpair failed and we were unable to recover it.
00:31:06.463 [2024-04-24 05:26:43.559988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.560118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.560144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.463 qpair failed and we were unable to recover it.
00:31:06.463 [2024-04-24 05:26:43.560288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.560469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.560498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.463 qpair failed and we were unable to recover it.
00:31:06.463 [2024-04-24 05:26:43.560677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.560838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.560867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.463 qpair failed and we were unable to recover it.
00:31:06.463 [2024-04-24 05:26:43.561038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.561186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.561227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.463 qpair failed and we were unable to recover it.
00:31:06.463 [2024-04-24 05:26:43.561393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.561557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.561587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.463 qpair failed and we were unable to recover it.
00:31:06.463 [2024-04-24 05:26:43.561788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.561953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.561979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.463 qpair failed and we were unable to recover it.
00:31:06.463 [2024-04-24 05:26:43.562104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.562237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.562264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.463 qpair failed and we were unable to recover it.
00:31:06.463 [2024-04-24 05:26:43.562421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.562620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.562659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.463 qpair failed and we were unable to recover it.
00:31:06.463 [2024-04-24 05:26:43.562797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.562942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.562968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.463 qpair failed and we were unable to recover it.
00:31:06.463 [2024-04-24 05:26:43.563117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.563262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.563288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.463 qpair failed and we were unable to recover it.
00:31:06.463 [2024-04-24 05:26:43.563472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.563620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.563667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.463 qpair failed and we were unable to recover it.
00:31:06.463 [2024-04-24 05:26:43.563831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.564034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.564062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.463 qpair failed and we were unable to recover it.
00:31:06.463 [2024-04-24 05:26:43.564229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.564370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.564415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.463 qpair failed and we were unable to recover it.
00:31:06.463 [2024-04-24 05:26:43.564617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.564760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.564789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.463 qpair failed and we were unable to recover it.
00:31:06.463 [2024-04-24 05:26:43.564966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.565118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.565144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.463 qpair failed and we were unable to recover it.
00:31:06.463 [2024-04-24 05:26:43.565291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.565441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.565467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.463 qpair failed and we were unable to recover it.
00:31:06.463 [2024-04-24 05:26:43.565582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.565742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.565769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.463 qpair failed and we were unable to recover it.
00:31:06.463 [2024-04-24 05:26:43.565916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.566088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.566117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.463 qpair failed and we were unable to recover it.
00:31:06.463 [2024-04-24 05:26:43.566294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.566471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.566497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.463 qpair failed and we were unable to recover it.
00:31:06.463 [2024-04-24 05:26:43.566653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.566804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.566830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.463 qpair failed and we were unable to recover it.
00:31:06.463 [2024-04-24 05:26:43.566957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.567078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.567104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.463 qpair failed and we were unable to recover it.
00:31:06.463 [2024-04-24 05:26:43.567254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.567420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.567448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.463 qpair failed and we were unable to recover it.
00:31:06.463 [2024-04-24 05:26:43.567611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.567759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.567788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.463 qpair failed and we were unable to recover it.
00:31:06.463 [2024-04-24 05:26:43.567926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.463 [2024-04-24 05:26:43.568121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.464 [2024-04-24 05:26:43.568149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.464 qpair failed and we were unable to recover it.
00:31:06.464 [2024-04-24 05:26:43.568319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.464 [2024-04-24 05:26:43.568450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.464 [2024-04-24 05:26:43.568476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.464 qpair failed and we were unable to recover it.
00:31:06.464 [2024-04-24 05:26:43.568619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.464 [2024-04-24 05:26:43.568797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.464 [2024-04-24 05:26:43.568823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.464 qpair failed and we were unable to recover it.
00:31:06.464 [2024-04-24 05:26:43.568980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.464 [2024-04-24 05:26:43.569156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.464 [2024-04-24 05:26:43.569186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.464 qpair failed and we were unable to recover it.
00:31:06.464 [2024-04-24 05:26:43.569382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.464 [2024-04-24 05:26:43.569544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.464 [2024-04-24 05:26:43.569573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.464 qpair failed and we were unable to recover it.
00:31:06.464 [2024-04-24 05:26:43.569805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.464 [2024-04-24 05:26:43.569956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.464 [2024-04-24 05:26:43.569982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.464 qpair failed and we were unable to recover it.
00:31:06.464 [2024-04-24 05:26:43.570136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.464 [2024-04-24 05:26:43.570282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.464 [2024-04-24 05:26:43.570308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.464 qpair failed and we were unable to recover it.
00:31:06.464 [2024-04-24 05:26:43.570458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.464 [2024-04-24 05:26:43.570586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.464 [2024-04-24 05:26:43.570612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.464 qpair failed and we were unable to recover it.
00:31:06.464 [2024-04-24 05:26:43.570795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.464 [2024-04-24 05:26:43.570993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.464 [2024-04-24 05:26:43.571022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.464 qpair failed and we were unable to recover it.
00:31:06.464 [2024-04-24 05:26:43.571204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.464 [2024-04-24 05:26:43.571431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.464 [2024-04-24 05:26:43.571457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.464 qpair failed and we were unable to recover it.
00:31:06.464 [2024-04-24 05:26:43.571623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.464 [2024-04-24 05:26:43.571774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.464 [2024-04-24 05:26:43.571800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.464 qpair failed and we were unable to recover it.
00:31:06.464 [2024-04-24 05:26:43.571923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.464 [2024-04-24 05:26:43.572086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.464 [2024-04-24 05:26:43.572112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.464 qpair failed and we were unable to recover it.
00:31:06.464 [2024-04-24 05:26:43.572237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.464 [2024-04-24 05:26:43.572412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.464 [2024-04-24 05:26:43.572456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.464 qpair failed and we were unable to recover it.
00:31:06.464 [2024-04-24 05:26:43.572601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.464 [2024-04-24 05:26:43.572733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.464 [2024-04-24 05:26:43.572760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.464 qpair failed and we were unable to recover it.
00:31:06.464 [2024-04-24 05:26:43.572945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.464 [2024-04-24 05:26:43.573108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.464 [2024-04-24 05:26:43.573138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.464 qpair failed and we were unable to recover it.
00:31:06.464 [2024-04-24 05:26:43.573286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.464 [2024-04-24 05:26:43.573457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.464 [2024-04-24 05:26:43.573486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.464 qpair failed and we were unable to recover it. 00:31:06.464 [2024-04-24 05:26:43.573649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.464 [2024-04-24 05:26:43.573767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.464 [2024-04-24 05:26:43.573792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.464 qpair failed and we were unable to recover it. 00:31:06.464 [2024-04-24 05:26:43.573956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.464 [2024-04-24 05:26:43.574109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.464 [2024-04-24 05:26:43.574134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.464 qpair failed and we were unable to recover it. 00:31:06.464 [2024-04-24 05:26:43.574346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.464 [2024-04-24 05:26:43.574549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.464 [2024-04-24 05:26:43.574574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.464 qpair failed and we were unable to recover it. 
00:31:06.464 [2024-04-24 05:26:43.574721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.464 [2024-04-24 05:26:43.574849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.464 [2024-04-24 05:26:43.574876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.464 qpair failed and we were unable to recover it. 00:31:06.464 [2024-04-24 05:26:43.575108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.464 [2024-04-24 05:26:43.575354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.464 [2024-04-24 05:26:43.575380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.464 qpair failed and we were unable to recover it. 00:31:06.464 [2024-04-24 05:26:43.575527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.464 [2024-04-24 05:26:43.575691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.464 [2024-04-24 05:26:43.575720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.464 qpair failed and we were unable to recover it. 00:31:06.464 [2024-04-24 05:26:43.575898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.464 [2024-04-24 05:26:43.576072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.464 [2024-04-24 05:26:43.576098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.464 qpair failed and we were unable to recover it. 
00:31:06.464 [2024-04-24 05:26:43.576249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.464 [2024-04-24 05:26:43.576390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.464 [2024-04-24 05:26:43.576420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.464 qpair failed and we were unable to recover it. 00:31:06.464 [2024-04-24 05:26:43.576580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.464 [2024-04-24 05:26:43.576762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.464 [2024-04-24 05:26:43.576788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.464 qpair failed and we were unable to recover it. 00:31:06.464 [2024-04-24 05:26:43.576938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.464 [2024-04-24 05:26:43.577090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.577120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.465 qpair failed and we were unable to recover it. 00:31:06.465 [2024-04-24 05:26:43.577276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.577421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.577446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.465 qpair failed and we were unable to recover it. 
00:31:06.465 [2024-04-24 05:26:43.577625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.577766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.577795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.465 qpair failed and we were unable to recover it. 00:31:06.465 [2024-04-24 05:26:43.577948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.578074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.578100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.465 qpair failed and we were unable to recover it. 00:31:06.465 [2024-04-24 05:26:43.578224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.578373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.578398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.465 qpair failed and we were unable to recover it. 00:31:06.465 [2024-04-24 05:26:43.578660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.578804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.578835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.465 qpair failed and we were unable to recover it. 
00:31:06.465 [2024-04-24 05:26:43.578979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.579123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.579149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.465 qpair failed and we were unable to recover it. 00:31:06.465 [2024-04-24 05:26:43.579333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.579534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.579560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.465 qpair failed and we were unable to recover it. 00:31:06.465 [2024-04-24 05:26:43.579711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.579865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.579891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.465 qpair failed and we were unable to recover it. 00:31:06.465 [2024-04-24 05:26:43.580043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.580194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.580237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.465 qpair failed and we were unable to recover it. 
00:31:06.465 [2024-04-24 05:26:43.580375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.580517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.580545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.465 qpair failed and we were unable to recover it. 00:31:06.465 [2024-04-24 05:26:43.580750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.580930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.580964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.465 qpair failed and we were unable to recover it. 00:31:06.465 [2024-04-24 05:26:43.581151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.581299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.581326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.465 qpair failed and we were unable to recover it. 00:31:06.465 [2024-04-24 05:26:43.581484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.581615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.581646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.465 qpair failed and we were unable to recover it. 
00:31:06.465 [2024-04-24 05:26:43.581880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.582081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.582107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.465 qpair failed and we were unable to recover it. 00:31:06.465 [2024-04-24 05:26:43.582260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.582424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.582452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.465 qpair failed and we were unable to recover it. 00:31:06.465 [2024-04-24 05:26:43.582616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.582799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.582825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.465 qpair failed and we were unable to recover it. 00:31:06.465 [2024-04-24 05:26:43.582981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.583133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.583159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.465 qpair failed and we were unable to recover it. 
00:31:06.465 [2024-04-24 05:26:43.583390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.583574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.583615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.465 qpair failed and we were unable to recover it. 00:31:06.465 [2024-04-24 05:26:43.583765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.583899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.583927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.465 qpair failed and we were unable to recover it. 00:31:06.465 [2024-04-24 05:26:43.584117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.584276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.584305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.465 qpair failed and we were unable to recover it. 00:31:06.465 [2024-04-24 05:26:43.584478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.584625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.584675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.465 qpair failed and we were unable to recover it. 
00:31:06.465 [2024-04-24 05:26:43.584820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.584946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.584971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.465 qpair failed and we were unable to recover it. 00:31:06.465 [2024-04-24 05:26:43.585148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.585276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.585316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.465 qpair failed and we were unable to recover it. 00:31:06.465 [2024-04-24 05:26:43.585464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.585619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.585651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.465 qpair failed and we were unable to recover it. 00:31:06.465 [2024-04-24 05:26:43.585779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.585924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.585950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.465 qpair failed and we were unable to recover it. 
00:31:06.465 [2024-04-24 05:26:43.586081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.586259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.586287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.465 qpair failed and we were unable to recover it. 00:31:06.465 [2024-04-24 05:26:43.586487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.465 [2024-04-24 05:26:43.586609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.586662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.466 qpair failed and we were unable to recover it. 00:31:06.466 [2024-04-24 05:26:43.586833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.586996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.587025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.466 qpair failed and we were unable to recover it. 00:31:06.466 [2024-04-24 05:26:43.587170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.587312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.587356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.466 qpair failed and we were unable to recover it. 
00:31:06.466 [2024-04-24 05:26:43.587510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.587662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.587689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.466 qpair failed and we were unable to recover it. 00:31:06.466 [2024-04-24 05:26:43.587941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.588111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.588140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.466 qpair failed and we were unable to recover it. 00:31:06.466 [2024-04-24 05:26:43.588307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.588448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.588476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.466 qpair failed and we were unable to recover it. 00:31:06.466 [2024-04-24 05:26:43.588637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.588791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.588817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.466 qpair failed and we were unable to recover it. 
00:31:06.466 [2024-04-24 05:26:43.588996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.589114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.589140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.466 qpair failed and we were unable to recover it. 00:31:06.466 [2024-04-24 05:26:43.589291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.589415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.589441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.466 qpair failed and we were unable to recover it. 00:31:06.466 [2024-04-24 05:26:43.589614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.589814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.589843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.466 qpair failed and we were unable to recover it. 00:31:06.466 [2024-04-24 05:26:43.590004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.590182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.590208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.466 qpair failed and we were unable to recover it. 
00:31:06.466 [2024-04-24 05:26:43.590351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.590501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.590529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.466 qpair failed and we were unable to recover it. 00:31:06.466 [2024-04-24 05:26:43.590727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.590885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.590912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.466 qpair failed and we were unable to recover it. 00:31:06.466 [2024-04-24 05:26:43.591067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.591222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.591264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.466 qpair failed and we were unable to recover it. 00:31:06.466 [2024-04-24 05:26:43.591447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.591660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.591687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.466 qpair failed and we were unable to recover it. 
00:31:06.466 [2024-04-24 05:26:43.591818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.591967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.591993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.466 qpair failed and we were unable to recover it. 00:31:06.466 [2024-04-24 05:26:43.592162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.592329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.592358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.466 qpair failed and we were unable to recover it. 00:31:06.466 [2024-04-24 05:26:43.592493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.592663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.592693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.466 qpair failed and we were unable to recover it. 00:31:06.466 [2024-04-24 05:26:43.592864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.593058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.593087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.466 qpair failed and we were unable to recover it. 
00:31:06.466 [2024-04-24 05:26:43.593250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.593428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.593454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.466 qpair failed and we were unable to recover it. 00:31:06.466 [2024-04-24 05:26:43.593581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.593768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.593795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.466 qpair failed and we were unable to recover it. 00:31:06.466 [2024-04-24 05:26:43.593934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.594087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.594130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.466 qpair failed and we were unable to recover it. 00:31:06.466 [2024-04-24 05:26:43.594273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.594471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.594500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.466 qpair failed and we were unable to recover it. 
00:31:06.466 [2024-04-24 05:26:43.594665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.594799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.594825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.466 qpair failed and we were unable to recover it. 00:31:06.466 [2024-04-24 05:26:43.594988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.595136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.466 [2024-04-24 05:26:43.595169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.466 qpair failed and we were unable to recover it. 00:31:06.466 [2024-04-24 05:26:43.595345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.467 [2024-04-24 05:26:43.595468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.467 [2024-04-24 05:26:43.595494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.467 qpair failed and we were unable to recover it. 00:31:06.467 [2024-04-24 05:26:43.595687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.467 [2024-04-24 05:26:43.595833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.467 [2024-04-24 05:26:43.595863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.467 qpair failed and we were unable to recover it. 
00:31:06.467 [2024-04-24 05:26:43.596015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.596193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.596219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.467 qpair failed and we were unable to recover it.
00:31:06.467 [2024-04-24 05:26:43.596368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.596506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.596534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.467 qpair failed and we were unable to recover it.
00:31:06.467 [2024-04-24 05:26:43.596701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.596865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.596893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.467 qpair failed and we were unable to recover it.
00:31:06.467 [2024-04-24 05:26:43.597064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.597216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.597259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.467 qpair failed and we were unable to recover it.
00:31:06.467 [2024-04-24 05:26:43.597407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.597584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.597611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.467 qpair failed and we were unable to recover it.
00:31:06.467 [2024-04-24 05:26:43.597739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.597859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.597886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.467 qpair failed and we were unable to recover it.
00:31:06.467 [2024-04-24 05:26:43.598041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.598167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.598193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.467 qpair failed and we were unable to recover it.
00:31:06.467 [2024-04-24 05:26:43.598308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.598488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.598517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.467 qpair failed and we were unable to recover it.
00:31:06.467 [2024-04-24 05:26:43.598714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.598850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.598876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.467 qpair failed and we were unable to recover it.
00:31:06.467 [2024-04-24 05:26:43.599026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.599150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.599177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.467 qpair failed and we were unable to recover it.
00:31:06.467 [2024-04-24 05:26:43.599339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.599486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.599512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.467 qpair failed and we were unable to recover it.
00:31:06.467 [2024-04-24 05:26:43.599704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.599868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.599898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.467 qpair failed and we were unable to recover it.
00:31:06.467 [2024-04-24 05:26:43.600076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.600234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.600260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.467 qpair failed and we were unable to recover it.
00:31:06.467 [2024-04-24 05:26:43.600412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.600593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.600622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.467 qpair failed and we were unable to recover it.
00:31:06.467 [2024-04-24 05:26:43.600794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.600991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.601019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.467 qpair failed and we were unable to recover it.
00:31:06.467 [2024-04-24 05:26:43.601220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.601348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.601376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.467 qpair failed and we were unable to recover it.
00:31:06.467 [2024-04-24 05:26:43.601531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.601655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.601683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.467 qpair failed and we were unable to recover it.
00:31:06.467 [2024-04-24 05:26:43.601810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.601947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.601973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.467 qpair failed and we were unable to recover it.
00:31:06.467 [2024-04-24 05:26:43.602125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.602246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.602288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.467 qpair failed and we were unable to recover it.
00:31:06.467 [2024-04-24 05:26:43.602450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.602644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.602677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.467 qpair failed and we were unable to recover it.
00:31:06.467 [2024-04-24 05:26:43.602809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.602943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.602971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.467 qpair failed and we were unable to recover it.
00:31:06.467 [2024-04-24 05:26:43.603121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.603277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.603303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.467 qpair failed and we were unable to recover it.
00:31:06.467 [2024-04-24 05:26:43.603451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.603646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.603673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.467 qpair failed and we were unable to recover it.
00:31:06.467 [2024-04-24 05:26:43.603824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.603963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.603989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.467 qpair failed and we were unable to recover it.
00:31:06.467 [2024-04-24 05:26:43.604143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.604295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.604321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.467 qpair failed and we were unable to recover it.
00:31:06.467 [2024-04-24 05:26:43.604471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.604663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.604705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.467 qpair failed and we were unable to recover it.
00:31:06.467 [2024-04-24 05:26:43.604826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.604994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.605037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.467 qpair failed and we were unable to recover it.
00:31:06.467 [2024-04-24 05:26:43.605169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.467 [2024-04-24 05:26:43.605322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.605348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.468 qpair failed and we were unable to recover it.
00:31:06.468 [2024-04-24 05:26:43.605472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.605640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.605669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.468 qpair failed and we were unable to recover it.
00:31:06.468 [2024-04-24 05:26:43.605834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.606004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.606030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.468 qpair failed and we were unable to recover it.
00:31:06.468 [2024-04-24 05:26:43.606176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.606324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.606349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.468 qpair failed and we were unable to recover it.
00:31:06.468 [2024-04-24 05:26:43.606497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.606674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.606705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.468 qpair failed and we were unable to recover it.
00:31:06.468 [2024-04-24 05:26:43.606873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.607037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.607066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.468 qpair failed and we were unable to recover it.
00:31:06.468 [2024-04-24 05:26:43.607237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.607414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.607439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.468 qpair failed and we were unable to recover it.
00:31:06.468 [2024-04-24 05:26:43.607593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.607746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.607772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.468 qpair failed and we were unable to recover it.
00:31:06.468 [2024-04-24 05:26:43.607921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.608076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.608102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.468 qpair failed and we were unable to recover it.
00:31:06.468 [2024-04-24 05:26:43.608281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.608464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.608490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.468 qpair failed and we were unable to recover it.
00:31:06.468 [2024-04-24 05:26:43.608619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.608749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.608792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.468 qpair failed and we were unable to recover it.
00:31:06.468 [2024-04-24 05:26:43.608924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.609120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.609149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.468 qpair failed and we were unable to recover it.
00:31:06.468 [2024-04-24 05:26:43.609299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.609418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.609445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.468 qpair failed and we were unable to recover it.
00:31:06.468 [2024-04-24 05:26:43.609592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.609797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.609824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.468 qpair failed and we were unable to recover it.
00:31:06.468 [2024-04-24 05:26:43.609949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.610068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.610094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.468 qpair failed and we were unable to recover it.
00:31:06.468 [2024-04-24 05:26:43.610242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.610430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.610459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.468 qpair failed and we were unable to recover it.
00:31:06.468 [2024-04-24 05:26:43.610621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.610768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.610797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.468 qpair failed and we were unable to recover it.
00:31:06.468 [2024-04-24 05:26:43.610987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.611169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.611195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.468 qpair failed and we were unable to recover it.
00:31:06.468 [2024-04-24 05:26:43.611311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.611454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.611480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.468 qpair failed and we were unable to recover it.
00:31:06.468 [2024-04-24 05:26:43.611652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.611817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.611844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.468 qpair failed and we were unable to recover it.
00:31:06.468 [2024-04-24 05:26:43.611958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.612134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.612160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.468 qpair failed and we were unable to recover it.
00:31:06.468 [2024-04-24 05:26:43.612355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.612483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.612514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.468 qpair failed and we were unable to recover it.
00:31:06.468 [2024-04-24 05:26:43.612689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.612899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.612925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.468 qpair failed and we were unable to recover it.
00:31:06.468 [2024-04-24 05:26:43.613051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.613216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.613245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.468 qpair failed and we were unable to recover it.
00:31:06.468 [2024-04-24 05:26:43.613440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.613603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.613638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.468 qpair failed and we were unable to recover it.
00:31:06.468 [2024-04-24 05:26:43.613818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.613976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.614002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.468 qpair failed and we were unable to recover it.
00:31:06.468 [2024-04-24 05:26:43.614167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.614346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.614373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.468 qpair failed and we were unable to recover it.
00:31:06.468 [2024-04-24 05:26:43.614549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.614719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.614747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.468 qpair failed and we were unable to recover it.
00:31:06.468 [2024-04-24 05:26:43.614937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.615097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.615126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.468 qpair failed and we were unable to recover it.
00:31:06.468 [2024-04-24 05:26:43.615296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.468 [2024-04-24 05:26:43.615466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.615495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.469 qpair failed and we were unable to recover it.
00:31:06.469 [2024-04-24 05:26:43.615660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.615815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.615842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.469 qpair failed and we were unable to recover it.
00:31:06.469 [2024-04-24 05:26:43.616003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.616145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.616172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.469 qpair failed and we were unable to recover it.
00:31:06.469 [2024-04-24 05:26:43.616311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.616481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.616509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.469 qpair failed and we were unable to recover it.
00:31:06.469 [2024-04-24 05:26:43.616677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.616828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.616856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.469 qpair failed and we were unable to recover it.
00:31:06.469 [2024-04-24 05:26:43.617043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.617210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.617239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.469 qpair failed and we were unable to recover it.
00:31:06.469 [2024-04-24 05:26:43.617445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.617578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.617604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.469 qpair failed and we were unable to recover it.
00:31:06.469 [2024-04-24 05:26:43.617753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.617897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.617923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.469 qpair failed and we were unable to recover it.
00:31:06.469 [2024-04-24 05:26:43.618129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.618298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.618328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.469 qpair failed and we were unable to recover it.
00:31:06.469 [2024-04-24 05:26:43.618471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.618622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.618659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.469 qpair failed and we were unable to recover it.
00:31:06.469 [2024-04-24 05:26:43.618807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.618931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.618959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.469 qpair failed and we were unable to recover it.
00:31:06.469 [2024-04-24 05:26:43.619147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.619325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.619354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.469 qpair failed and we were unable to recover it.
00:31:06.469 [2024-04-24 05:26:43.619521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.619695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.619725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.469 qpair failed and we were unable to recover it.
00:31:06.469 [2024-04-24 05:26:43.619931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.620088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.620114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.469 qpair failed and we were unable to recover it.
00:31:06.469 [2024-04-24 05:26:43.620293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.620415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.620441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.469 qpair failed and we were unable to recover it.
00:31:06.469 [2024-04-24 05:26:43.620650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.620819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.620845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.469 qpair failed and we were unable to recover it.
00:31:06.469 [2024-04-24 05:26:43.621022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.621186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.621211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.469 qpair failed and we were unable to recover it.
00:31:06.469 [2024-04-24 05:26:43.621364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.621514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.621557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.469 qpair failed and we were unable to recover it.
00:31:06.469 [2024-04-24 05:26:43.621718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.621839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.621867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.469 qpair failed and we were unable to recover it.
00:31:06.469 [2024-04-24 05:26:43.622020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.622217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.622246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.469 qpair failed and we were unable to recover it.
00:31:06.469 [2024-04-24 05:26:43.622419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.622569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.622596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.469 qpair failed and we were unable to recover it.
00:31:06.469 [2024-04-24 05:26:43.622732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.622873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.622900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.469 qpair failed and we were unable to recover it.
00:31:06.469 [2024-04-24 05:26:43.623020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.623146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.623171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.469 qpair failed and we were unable to recover it.
00:31:06.469 [2024-04-24 05:26:43.623351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.623528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.623554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.469 qpair failed and we were unable to recover it.
00:31:06.469 [2024-04-24 05:26:43.623709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.623879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.623905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.469 qpair failed and we were unable to recover it.
00:31:06.469 [2024-04-24 05:26:43.624061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.624242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.469 [2024-04-24 05:26:43.624268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.470 qpair failed and we were unable to recover it.
00:31:06.470 [2024-04-24 05:26:43.624440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.470 [2024-04-24 05:26:43.624641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.470 [2024-04-24 05:26:43.624670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.470 qpair failed and we were unable to recover it.
00:31:06.470 [2024-04-24 05:26:43.624837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.470 [2024-04-24 05:26:43.624975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.470 [2024-04-24 05:26:43.625004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.470 qpair failed and we were unable to recover it.
00:31:06.470 [2024-04-24 05:26:43.625152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.470 [2024-04-24 05:26:43.625304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.470 [2024-04-24 05:26:43.625330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.470 qpair failed and we were unable to recover it.
00:31:06.470 [2024-04-24 05:26:43.625442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.470 [2024-04-24 05:26:43.625560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.470 [2024-04-24 05:26:43.625587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.470 qpair failed and we were unable to recover it.
00:31:06.470 [2024-04-24 05:26:43.625720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.470 [2024-04-24 05:26:43.625872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.470 [2024-04-24 05:26:43.625898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.470 qpair failed and we were unable to recover it.
00:31:06.470 [2024-04-24 05:26:43.626079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.470 [2024-04-24 05:26:43.626246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.470 [2024-04-24 05:26:43.626276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.470 qpair failed and we were unable to recover it.
00:31:06.470 [2024-04-24 05:26:43.626436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.470 [2024-04-24 05:26:43.626604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.470 [2024-04-24 05:26:43.626644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.470 qpair failed and we were unable to recover it. 00:31:06.470 [2024-04-24 05:26:43.626795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.470 [2024-04-24 05:26:43.626950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.470 [2024-04-24 05:26:43.626976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.470 qpair failed and we were unable to recover it. 00:31:06.470 [2024-04-24 05:26:43.627123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.470 [2024-04-24 05:26:43.627272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.470 [2024-04-24 05:26:43.627298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.470 qpair failed and we were unable to recover it. 00:31:06.470 [2024-04-24 05:26:43.627421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.470 [2024-04-24 05:26:43.627545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.470 [2024-04-24 05:26:43.627573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.470 qpair failed and we were unable to recover it. 
00:31:06.470 [2024-04-24 05:26:43.627762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.470 [2024-04-24 05:26:43.627910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.470 [2024-04-24 05:26:43.627955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.470 qpair failed and we were unable to recover it. 00:31:06.470 [2024-04-24 05:26:43.628129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.470 [2024-04-24 05:26:43.628259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.470 [2024-04-24 05:26:43.628302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.470 qpair failed and we were unable to recover it. 00:31:06.470 [2024-04-24 05:26:43.628459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.470 [2024-04-24 05:26:43.628610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.470 [2024-04-24 05:26:43.628642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.470 qpair failed and we were unable to recover it. 00:31:06.470 [2024-04-24 05:26:43.628778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.470 [2024-04-24 05:26:43.628931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.470 [2024-04-24 05:26:43.628973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.470 qpair failed and we were unable to recover it. 
00:31:06.470 [2024-04-24 05:26:43.629123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.470 [2024-04-24 05:26:43.629270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.470 [2024-04-24 05:26:43.629296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.470 qpair failed and we were unable to recover it. 00:31:06.470 [2024-04-24 05:26:43.629506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.470 [2024-04-24 05:26:43.629659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.470 [2024-04-24 05:26:43.629687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.470 qpair failed and we were unable to recover it. 00:31:06.470 [2024-04-24 05:26:43.629834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.470 [2024-04-24 05:26:43.630035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.470 [2024-04-24 05:26:43.630064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.470 qpair failed and we were unable to recover it. 00:31:06.470 [2024-04-24 05:26:43.630213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.470 [2024-04-24 05:26:43.630337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.470 [2024-04-24 05:26:43.630367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.470 qpair failed and we were unable to recover it. 
00:31:06.470 [2024-04-24 05:26:43.630499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.470 [2024-04-24 05:26:43.630633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.470 [2024-04-24 05:26:43.630665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.470 qpair failed and we were unable to recover it. 00:31:06.470 [2024-04-24 05:26:43.630818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.470 [2024-04-24 05:26:43.630993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.470 [2024-04-24 05:26:43.631023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.470 qpair failed and we were unable to recover it. 00:31:06.470 [2024-04-24 05:26:43.631217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.470 [2024-04-24 05:26:43.631338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.470 [2024-04-24 05:26:43.631364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.470 qpair failed and we were unable to recover it. 00:31:06.470 [2024-04-24 05:26:43.631520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.470 [2024-04-24 05:26:43.631645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.470 [2024-04-24 05:26:43.631673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.470 qpair failed and we were unable to recover it. 
00:31:06.470 [2024-04-24 05:26:43.631876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.470 [2024-04-24 05:26:43.632020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.470 [2024-04-24 05:26:43.632062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.470 qpair failed and we were unable to recover it. 00:31:06.470 [2024-04-24 05:26:43.632218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.470 [2024-04-24 05:26:43.632368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.470 [2024-04-24 05:26:43.632394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.470 qpair failed and we were unable to recover it. 00:31:06.470 [2024-04-24 05:26:43.632576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.470 [2024-04-24 05:26:43.632725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.470 [2024-04-24 05:26:43.632751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.470 qpair failed and we were unable to recover it. 00:31:06.470 [2024-04-24 05:26:43.632868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.470 [2024-04-24 05:26:43.632993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.470 [2024-04-24 05:26:43.633020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.470 qpair failed and we were unable to recover it. 
00:31:06.470 [2024-04-24 05:26:43.633197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.470 [2024-04-24 05:26:43.633397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.470 [2024-04-24 05:26:43.633426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.470 qpair failed and we were unable to recover it. 00:31:06.470 [2024-04-24 05:26:43.633586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.470 [2024-04-24 05:26:43.633747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.470 [2024-04-24 05:26:43.633780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.470 qpair failed and we were unable to recover it. 00:31:06.470 [2024-04-24 05:26:43.633949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.470 [2024-04-24 05:26:43.634134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.470 [2024-04-24 05:26:43.634176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.470 qpair failed and we were unable to recover it. 00:31:06.471 [2024-04-24 05:26:43.634328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.634503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.634545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.471 qpair failed and we were unable to recover it. 
00:31:06.471 [2024-04-24 05:26:43.634735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.634877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.634905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.471 qpair failed and we were unable to recover it. 00:31:06.471 [2024-04-24 05:26:43.635098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.635242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.635271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.471 qpair failed and we were unable to recover it. 00:31:06.471 [2024-04-24 05:26:43.635459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.635659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.635688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.471 qpair failed and we were unable to recover it. 00:31:06.471 [2024-04-24 05:26:43.635864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.636055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.636083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.471 qpair failed and we were unable to recover it. 
00:31:06.471 [2024-04-24 05:26:43.636232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.636382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.636407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.471 qpair failed and we were unable to recover it. 00:31:06.471 [2024-04-24 05:26:43.636572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.636750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.636792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.471 qpair failed and we were unable to recover it. 00:31:06.471 [2024-04-24 05:26:43.636939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.637128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.637157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.471 qpair failed and we were unable to recover it. 00:31:06.471 [2024-04-24 05:26:43.637323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.637485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.637514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.471 qpair failed and we were unable to recover it. 
00:31:06.471 [2024-04-24 05:26:43.637686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.637798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.637824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.471 qpair failed and we were unable to recover it. 00:31:06.471 [2024-04-24 05:26:43.637966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.638136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.638164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.471 qpair failed and we were unable to recover it. 00:31:06.471 [2024-04-24 05:26:43.638295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.638490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.638519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.471 qpair failed and we were unable to recover it. 00:31:06.471 [2024-04-24 05:26:43.638693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.638845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.638888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.471 qpair failed and we were unable to recover it. 
00:31:06.471 [2024-04-24 05:26:43.639036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.639174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.639202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.471 qpair failed and we were unable to recover it. 00:31:06.471 [2024-04-24 05:26:43.639347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.639536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.639565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.471 qpair failed and we were unable to recover it. 00:31:06.471 [2024-04-24 05:26:43.639722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.639872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.639898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.471 qpair failed and we were unable to recover it. 00:31:06.471 [2024-04-24 05:26:43.640105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.640312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.640363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.471 qpair failed and we were unable to recover it. 
00:31:06.471 [2024-04-24 05:26:43.640556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.640722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.640752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.471 qpair failed and we were unable to recover it. 00:31:06.471 [2024-04-24 05:26:43.640935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.641046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.641072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.471 qpair failed and we were unable to recover it. 00:31:06.471 [2024-04-24 05:26:43.641283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.641474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.641502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.471 qpair failed and we were unable to recover it. 00:31:06.471 [2024-04-24 05:26:43.641675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.641886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.641912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.471 qpair failed and we were unable to recover it. 
00:31:06.471 [2024-04-24 05:26:43.642064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.642192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.642223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.471 qpair failed and we were unable to recover it. 00:31:06.471 [2024-04-24 05:26:43.642376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.642560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.642589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.471 qpair failed and we were unable to recover it. 00:31:06.471 [2024-04-24 05:26:43.642776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.642905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.642941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.471 qpair failed and we were unable to recover it. 00:31:06.471 [2024-04-24 05:26:43.643120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.643326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.643365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.471 qpair failed and we were unable to recover it. 
00:31:06.471 [2024-04-24 05:26:43.643537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.643718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.643744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.471 qpair failed and we were unable to recover it. 00:31:06.471 [2024-04-24 05:26:43.643874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.644011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.644039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.471 qpair failed and we were unable to recover it. 00:31:06.471 [2024-04-24 05:26:43.644208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.644399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.644427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.471 qpair failed and we were unable to recover it. 00:31:06.471 [2024-04-24 05:26:43.644562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.644719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.471 [2024-04-24 05:26:43.644746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.471 qpair failed and we were unable to recover it. 
00:31:06.471 [2024-04-24 05:26:43.644872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.645052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.645081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.472 qpair failed and we were unable to recover it. 00:31:06.472 [2024-04-24 05:26:43.645237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.645416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.645458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.472 qpair failed and we were unable to recover it. 00:31:06.472 [2024-04-24 05:26:43.645593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.645792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.645820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.472 qpair failed and we were unable to recover it. 00:31:06.472 [2024-04-24 05:26:43.645990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.646110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.646135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.472 qpair failed and we were unable to recover it. 
00:31:06.472 [2024-04-24 05:26:43.646296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.646489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.646517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.472 qpair failed and we were unable to recover it. 00:31:06.472 [2024-04-24 05:26:43.646675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.646867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.646898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.472 qpair failed and we were unable to recover it. 00:31:06.472 [2024-04-24 05:26:43.647091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.647258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.647286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.472 qpair failed and we were unable to recover it. 00:31:06.472 [2024-04-24 05:26:43.647462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.647607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.647654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.472 qpair failed and we were unable to recover it. 
00:31:06.472 [2024-04-24 05:26:43.647821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.647969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.647996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.472 qpair failed and we were unable to recover it. 00:31:06.472 [2024-04-24 05:26:43.648121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.648289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.648317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.472 qpair failed and we were unable to recover it. 00:31:06.472 [2024-04-24 05:26:43.648481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.648643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.648687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.472 qpair failed and we were unable to recover it. 00:31:06.472 [2024-04-24 05:26:43.648829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.649022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.649050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.472 qpair failed and we were unable to recover it. 
00:31:06.472 [2024-04-24 05:26:43.649217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.649386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.649415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.472 qpair failed and we were unable to recover it. 00:31:06.472 [2024-04-24 05:26:43.649614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.649787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.649816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.472 qpair failed and we were unable to recover it. 00:31:06.472 [2024-04-24 05:26:43.650006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.650256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.650285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.472 qpair failed and we were unable to recover it. 00:31:06.472 [2024-04-24 05:26:43.650427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.650594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.650623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.472 qpair failed and we were unable to recover it. 
00:31:06.472 [2024-04-24 05:26:43.650786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.650912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.650939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.472 qpair failed and we were unable to recover it. 00:31:06.472 [2024-04-24 05:26:43.651142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.651375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.651428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.472 qpair failed and we were unable to recover it. 00:31:06.472 [2024-04-24 05:26:43.651595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.651787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.651816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.472 qpair failed and we were unable to recover it. 00:31:06.472 [2024-04-24 05:26:43.651976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.652144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.652172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.472 qpair failed and we were unable to recover it. 
00:31:06.472 [2024-04-24 05:26:43.652331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.652524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.652557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.472 qpair failed and we were unable to recover it. 00:31:06.472 [2024-04-24 05:26:43.652761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.652941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.652969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.472 qpair failed and we were unable to recover it. 00:31:06.472 [2024-04-24 05:26:43.653141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.653291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.653333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.472 qpair failed and we were unable to recover it. 00:31:06.472 [2024-04-24 05:26:43.653501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.653677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.653706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.472 qpair failed and we were unable to recover it. 
00:31:06.472 [2024-04-24 05:26:43.653864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.654027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.654055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.472 qpair failed and we were unable to recover it. 00:31:06.472 [2024-04-24 05:26:43.654249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.654402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.654428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.472 qpair failed and we were unable to recover it. 00:31:06.472 [2024-04-24 05:26:43.654624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.654808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.654834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.472 qpair failed and we were unable to recover it. 00:31:06.472 [2024-04-24 05:26:43.654994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.655220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.655268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.472 qpair failed and we were unable to recover it. 
00:31:06.472 [2024-04-24 05:26:43.655463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.655589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.472 [2024-04-24 05:26:43.655640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.472 qpair failed and we were unable to recover it. 00:31:06.473 [2024-04-24 05:26:43.655828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.656002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.656028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.473 qpair failed and we were unable to recover it. 00:31:06.473 [2024-04-24 05:26:43.656174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.656325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.656367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.473 qpair failed and we were unable to recover it. 00:31:06.473 [2024-04-24 05:26:43.656580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.656752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.656782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.473 qpair failed and we were unable to recover it. 
00:31:06.473 [2024-04-24 05:26:43.656971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.657202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.657257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.473 qpair failed and we were unable to recover it. 00:31:06.473 [2024-04-24 05:26:43.657420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.657584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.657613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.473 qpair failed and we were unable to recover it. 00:31:06.473 [2024-04-24 05:26:43.657828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.658042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.658089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.473 qpair failed and we were unable to recover it. 00:31:06.473 [2024-04-24 05:26:43.658258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.658467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.658517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.473 qpair failed and we were unable to recover it. 
00:31:06.473 [2024-04-24 05:26:43.658682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.658858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.658887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.473 qpair failed and we were unable to recover it. 00:31:06.473 [2024-04-24 05:26:43.659061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.659213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.659256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.473 qpair failed and we were unable to recover it. 00:31:06.473 [2024-04-24 05:26:43.659428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.659589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.659618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.473 qpair failed and we were unable to recover it. 00:31:06.473 [2024-04-24 05:26:43.659845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.660126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.660178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.473 qpair failed and we were unable to recover it. 
00:31:06.473 [2024-04-24 05:26:43.660351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.660543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.660571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.473 qpair failed and we were unable to recover it. 00:31:06.473 [2024-04-24 05:26:43.660759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.660890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.660929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.473 qpair failed and we were unable to recover it. 00:31:06.473 [2024-04-24 05:26:43.661098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.661263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.661292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.473 qpair failed and we were unable to recover it. 00:31:06.473 [2024-04-24 05:26:43.661460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.661639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.661693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.473 qpair failed and we were unable to recover it. 
00:31:06.473 [2024-04-24 05:26:43.661861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.662036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.662064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.473 qpair failed and we were unable to recover it. 00:31:06.473 [2024-04-24 05:26:43.662255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.662481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.662533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.473 qpair failed and we were unable to recover it. 00:31:06.473 [2024-04-24 05:26:43.662736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.662932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.662960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.473 qpair failed and we were unable to recover it. 00:31:06.473 [2024-04-24 05:26:43.663136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.663260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.663286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.473 qpair failed and we were unable to recover it. 
00:31:06.473 [2024-04-24 05:26:43.663432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.663634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.663663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.473 qpair failed and we were unable to recover it. 00:31:06.473 [2024-04-24 05:26:43.663799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.663945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.663971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.473 qpair failed and we were unable to recover it. 00:31:06.473 [2024-04-24 05:26:43.664138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.664335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.664388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.473 qpair failed and we were unable to recover it. 00:31:06.473 [2024-04-24 05:26:43.664563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.664743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.664769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.473 qpair failed and we were unable to recover it. 
00:31:06.473 [2024-04-24 05:26:43.664895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.665056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.665082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.473 qpair failed and we were unable to recover it. 00:31:06.473 [2024-04-24 05:26:43.665234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.665380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.665405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.473 qpair failed and we were unable to recover it. 00:31:06.473 [2024-04-24 05:26:43.665571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.473 [2024-04-24 05:26:43.665766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.665792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.474 qpair failed and we were unable to recover it. 00:31:06.474 [2024-04-24 05:26:43.665953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.666222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.666273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.474 qpair failed and we were unable to recover it. 
00:31:06.474 [2024-04-24 05:26:43.666437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.666592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.666617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.474 qpair failed and we were unable to recover it. 00:31:06.474 [2024-04-24 05:26:43.666786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.666970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.666995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.474 qpair failed and we were unable to recover it. 00:31:06.474 [2024-04-24 05:26:43.667126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.667246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.667273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.474 qpair failed and we were unable to recover it. 00:31:06.474 [2024-04-24 05:26:43.667401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.667551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.667579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.474 qpair failed and we were unable to recover it. 
00:31:06.474 [2024-04-24 05:26:43.667784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.667979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.668007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.474 qpair failed and we were unable to recover it. 00:31:06.474 [2024-04-24 05:26:43.668181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.668336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.668363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.474 qpair failed and we were unable to recover it. 00:31:06.474 [2024-04-24 05:26:43.668569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.668772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.668799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.474 qpair failed and we were unable to recover it. 00:31:06.474 [2024-04-24 05:26:43.668971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.669262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.669313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.474 qpair failed and we were unable to recover it. 
00:31:06.474 [2024-04-24 05:26:43.669484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.669654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.669683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.474 qpair failed and we were unable to recover it. 00:31:06.474 [2024-04-24 05:26:43.669848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.670039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.670067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.474 qpair failed and we were unable to recover it. 00:31:06.474 [2024-04-24 05:26:43.670234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.670396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.670424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.474 qpair failed and we were unable to recover it. 00:31:06.474 [2024-04-24 05:26:43.670596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.670750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.670795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.474 qpair failed and we were unable to recover it. 
00:31:06.474 [2024-04-24 05:26:43.670967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.671110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.671135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.474 qpair failed and we were unable to recover it. 00:31:06.474 [2024-04-24 05:26:43.671291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.671443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.671469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.474 qpair failed and we were unable to recover it. 00:31:06.474 [2024-04-24 05:26:43.671692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.671821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.671847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.474 qpair failed and we were unable to recover it. 00:31:06.474 [2024-04-24 05:26:43.672019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.672180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.672213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.474 qpair failed and we were unable to recover it. 
00:31:06.474 [2024-04-24 05:26:43.672383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.672524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.672552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.474 qpair failed and we were unable to recover it. 00:31:06.474 [2024-04-24 05:26:43.672746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.672902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.672928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.474 qpair failed and we were unable to recover it. 00:31:06.474 [2024-04-24 05:26:43.673102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.673263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.673292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.474 qpair failed and we were unable to recover it. 00:31:06.474 [2024-04-24 05:26:43.673457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.673623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.673659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.474 qpair failed and we were unable to recover it. 
00:31:06.474 [2024-04-24 05:26:43.673854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.674060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.674108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.474 qpair failed and we were unable to recover it. 00:31:06.474 [2024-04-24 05:26:43.674248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.674440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.674466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.474 qpair failed and we were unable to recover it. 00:31:06.474 [2024-04-24 05:26:43.674659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.674829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.674858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.474 qpair failed and we were unable to recover it. 00:31:06.474 [2024-04-24 05:26:43.675010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.675186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.675223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.474 qpair failed and we were unable to recover it. 
00:31:06.474 [2024-04-24 05:26:43.675388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.675531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.675559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.474 qpair failed and we were unable to recover it. 00:31:06.474 [2024-04-24 05:26:43.675761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.675989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.676041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.474 qpair failed and we were unable to recover it. 00:31:06.474 [2024-04-24 05:26:43.676192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.676374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.474 [2024-04-24 05:26:43.676417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.474 qpair failed and we were unable to recover it. 00:31:06.474 [2024-04-24 05:26:43.676556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.475 [2024-04-24 05:26:43.676695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.475 [2024-04-24 05:26:43.676724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.475 qpair failed and we were unable to recover it. 
00:31:06.475 [2024-04-24 05:26:43.676889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.677079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.677125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.475 qpair failed and we were unable to recover it.
00:31:06.475 [2024-04-24 05:26:43.677325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.677517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.677545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.475 qpair failed and we were unable to recover it.
00:31:06.475 [2024-04-24 05:26:43.677705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.677832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.677861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.475 qpair failed and we were unable to recover it.
00:31:06.475 [2024-04-24 05:26:43.678018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.678251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.678305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.475 qpair failed and we were unable to recover it.
00:31:06.475 [2024-04-24 05:26:43.678502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.678684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.678714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.475 qpair failed and we were unable to recover it.
00:31:06.475 [2024-04-24 05:26:43.678846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.679042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.679110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.475 qpair failed and we were unable to recover it.
00:31:06.475 [2024-04-24 05:26:43.679277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.679470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.679498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.475 qpair failed and we were unable to recover it.
00:31:06.475 [2024-04-24 05:26:43.679668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.679797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.679822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.475 qpair failed and we were unable to recover it.
00:31:06.475 [2024-04-24 05:26:43.680005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.680169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.680197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.475 qpair failed and we were unable to recover it.
00:31:06.475 [2024-04-24 05:26:43.680361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.680527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.680554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.475 qpair failed and we were unable to recover it.
00:31:06.475 [2024-04-24 05:26:43.680709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.680857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.680883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.475 qpair failed and we were unable to recover it.
00:31:06.475 [2024-04-24 05:26:43.681061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.681288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.681344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.475 qpair failed and we were unable to recover it.
00:31:06.475 [2024-04-24 05:26:43.681544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.681736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.681765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.475 qpair failed and we were unable to recover it.
00:31:06.475 [2024-04-24 05:26:43.681962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.682131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.682180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.475 qpair failed and we were unable to recover it.
00:31:06.475 [2024-04-24 05:26:43.682349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.682527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.682552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.475 qpair failed and we were unable to recover it.
00:31:06.475 [2024-04-24 05:26:43.682735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.682975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.683034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.475 qpair failed and we were unable to recover it.
00:31:06.475 [2024-04-24 05:26:43.683209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.683363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.683388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.475 qpair failed and we were unable to recover it.
00:31:06.475 [2024-04-24 05:26:43.683542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.683699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.683742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.475 qpair failed and we were unable to recover it.
00:31:06.475 [2024-04-24 05:26:43.683890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.684037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.684065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.475 qpair failed and we were unable to recover it.
00:31:06.475 [2024-04-24 05:26:43.684247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.684402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.684428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.475 qpair failed and we were unable to recover it.
00:31:06.475 [2024-04-24 05:26:43.684621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.684823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.684852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.475 qpair failed and we were unable to recover it.
00:31:06.475 [2024-04-24 05:26:43.685037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.685224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.685271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.475 qpair failed and we were unable to recover it.
00:31:06.475 [2024-04-24 05:26:43.685440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.685640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.685669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.475 qpair failed and we were unable to recover it.
00:31:06.475 [2024-04-24 05:26:43.685840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.686009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.686037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.475 qpair failed and we were unable to recover it.
00:31:06.475 [2024-04-24 05:26:43.686162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.686298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.686328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.475 qpair failed and we were unable to recover it.
00:31:06.475 [2024-04-24 05:26:43.686521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.686670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.686698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.475 qpair failed and we were unable to recover it.
00:31:06.475 [2024-04-24 05:26:43.686853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.687013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.475 [2024-04-24 05:26:43.687042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.475 qpair failed and we were unable to recover it.
00:31:06.476 [2024-04-24 05:26:43.687208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.476 [2024-04-24 05:26:43.687357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.476 [2024-04-24 05:26:43.687387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.476 qpair failed and we were unable to recover it.
00:31:06.476 [2024-04-24 05:26:43.687588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.476 [2024-04-24 05:26:43.687791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.476 [2024-04-24 05:26:43.687820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.476 qpair failed and we were unable to recover it.
00:31:06.476 [2024-04-24 05:26:43.687991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.476 [2024-04-24 05:26:43.688124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.476 [2024-04-24 05:26:43.688154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.476 qpair failed and we were unable to recover it.
00:31:06.476 [2024-04-24 05:26:43.688322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.476 [2024-04-24 05:26:43.688514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.476 [2024-04-24 05:26:43.688542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.476 qpair failed and we were unable to recover it.
00:31:06.476 [2024-04-24 05:26:43.688737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.476 [2024-04-24 05:26:43.688884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.476 [2024-04-24 05:26:43.688926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.476 qpair failed and we were unable to recover it.
00:31:06.476 [2024-04-24 05:26:43.689116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.476 [2024-04-24 05:26:43.689307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.476 [2024-04-24 05:26:43.689354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.476 qpair failed and we were unable to recover it.
00:31:06.476 [2024-04-24 05:26:43.689489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.476 [2024-04-24 05:26:43.689667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.476 [2024-04-24 05:26:43.689695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.476 qpair failed and we were unable to recover it.
00:31:06.476 [2024-04-24 05:26:43.689873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.476 [2024-04-24 05:26:43.690114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.476 [2024-04-24 05:26:43.690171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.476 qpair failed and we were unable to recover it.
00:31:06.476 [2024-04-24 05:26:43.690360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.476 [2024-04-24 05:26:43.690524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.476 [2024-04-24 05:26:43.690553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.476 qpair failed and we were unable to recover it.
00:31:06.476 [2024-04-24 05:26:43.690752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.476 [2024-04-24 05:26:43.690910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.476 [2024-04-24 05:26:43.690952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.476 qpair failed and we were unable to recover it.
00:31:06.476 [2024-04-24 05:26:43.691125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.476 [2024-04-24 05:26:43.691299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.476 [2024-04-24 05:26:43.691325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.476 qpair failed and we were unable to recover it.
00:31:06.476 [2024-04-24 05:26:43.691479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.476 [2024-04-24 05:26:43.691626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.476 [2024-04-24 05:26:43.691681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.476 qpair failed and we were unable to recover it.
00:31:06.476 [2024-04-24 05:26:43.691886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.476 [2024-04-24 05:26:43.692057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.476 [2024-04-24 05:26:43.692100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.476 qpair failed and we were unable to recover it.
00:31:06.476 [2024-04-24 05:26:43.692315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.476 [2024-04-24 05:26:43.692480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.476 [2024-04-24 05:26:43.692520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.476 qpair failed and we were unable to recover it.
00:31:06.476 [2024-04-24 05:26:43.692685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.476 [2024-04-24 05:26:43.692870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.476 [2024-04-24 05:26:43.692896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.476 qpair failed and we were unable to recover it.
00:31:06.476 [2024-04-24 05:26:43.693044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.476 [2024-04-24 05:26:43.693219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.476 [2024-04-24 05:26:43.693247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.476 qpair failed and we were unable to recover it.
00:31:06.476 [2024-04-24 05:26:43.693442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.476 [2024-04-24 05:26:43.693595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.476 [2024-04-24 05:26:43.693622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.476 qpair failed and we were unable to recover it.
00:31:06.476 [2024-04-24 05:26:43.693791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.476 [2024-04-24 05:26:43.693943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.476 [2024-04-24 05:26:43.693981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.476 qpair failed and we were unable to recover it.
00:31:06.476 [2024-04-24 05:26:43.694134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.476 [2024-04-24 05:26:43.694268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.476 [2024-04-24 05:26:43.694296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.476 qpair failed and we were unable to recover it.
00:31:06.476 [2024-04-24 05:26:43.694455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.476 [2024-04-24 05:26:43.694606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.476 [2024-04-24 05:26:43.694642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.476 qpair failed and we were unable to recover it.
00:31:06.476 [2024-04-24 05:26:43.694772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.476 [2024-04-24 05:26:43.694922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.476 [2024-04-24 05:26:43.694950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.476 qpair failed and we were unable to recover it.
00:31:06.476 [2024-04-24 05:26:43.695144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.476 [2024-04-24 05:26:43.695313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.476 [2024-04-24 05:26:43.695362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.476 qpair failed and we were unable to recover it.
00:31:06.476 [2024-04-24 05:26:43.695534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.476 [2024-04-24 05:26:43.695686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.476 [2024-04-24 05:26:43.695741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.476 qpair failed and we were unable to recover it.
00:31:06.754 [2024-04-24 05:26:43.695918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.754 [2024-04-24 05:26:43.696120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.754 [2024-04-24 05:26:43.696145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.754 qpair failed and we were unable to recover it.
00:31:06.754 [2024-04-24 05:26:43.696292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.754 [2024-04-24 05:26:43.696435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.754 [2024-04-24 05:26:43.696462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.754 qpair failed and we were unable to recover it.
00:31:06.754 [2024-04-24 05:26:43.696579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.754 [2024-04-24 05:26:43.696712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.754 [2024-04-24 05:26:43.696738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.754 qpair failed and we were unable to recover it.
00:31:06.754 [2024-04-24 05:26:43.696878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.754 [2024-04-24 05:26:43.697044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.754 [2024-04-24 05:26:43.697072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.754 qpair failed and we were unable to recover it.
00:31:06.754 [2024-04-24 05:26:43.697245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.754 [2024-04-24 05:26:43.697361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.754 [2024-04-24 05:26:43.697386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.754 qpair failed and we were unable to recover it.
00:31:06.754 [2024-04-24 05:26:43.697538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.754 [2024-04-24 05:26:43.697709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.754 [2024-04-24 05:26:43.697740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.754 qpair failed and we were unable to recover it.
00:31:06.754 [2024-04-24 05:26:43.697929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.754 [2024-04-24 05:26:43.698098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.754 [2024-04-24 05:26:43.698126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.754 qpair failed and we were unable to recover it.
00:31:06.754 [2024-04-24 05:26:43.698289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.754 [2024-04-24 05:26:43.698452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.754 [2024-04-24 05:26:43.698480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.754 qpair failed and we were unable to recover it.
00:31:06.754 [2024-04-24 05:26:43.698637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.754 [2024-04-24 05:26:43.698784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.754 [2024-04-24 05:26:43.698810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.754 qpair failed and we were unable to recover it.
00:31:06.754 [2024-04-24 05:26:43.698987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.754 [2024-04-24 05:26:43.699169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.754 [2024-04-24 05:26:43.699195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.754 qpair failed and we were unable to recover it.
00:31:06.754 [2024-04-24 05:26:43.699344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.754 [2024-04-24 05:26:43.699496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.754 [2024-04-24 05:26:43.699522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.754 qpair failed and we were unable to recover it.
00:31:06.754 [2024-04-24 05:26:43.699653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.754 [2024-04-24 05:26:43.699808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.754 [2024-04-24 05:26:43.699850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.754 qpair failed and we were unable to recover it.
00:31:06.754 [2024-04-24 05:26:43.700005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.754 [2024-04-24 05:26:43.700130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.754 [2024-04-24 05:26:43.700157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.754 qpair failed and we were unable to recover it.
00:31:06.754 [2024-04-24 05:26:43.700281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.754 [2024-04-24 05:26:43.700469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.754 [2024-04-24 05:26:43.700497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.754 qpair failed and we were unable to recover it.
00:31:06.754 [2024-04-24 05:26:43.700660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.754 [2024-04-24 05:26:43.700830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.754 [2024-04-24 05:26:43.700858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.754 qpair failed and we were unable to recover it.
00:31:06.754 [2024-04-24 05:26:43.700994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.754 [2024-04-24 05:26:43.701163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.754 [2024-04-24 05:26:43.701191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.754 qpair failed and we were unable to recover it.
00:31:06.754 [2024-04-24 05:26:43.701356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.755 [2024-04-24 05:26:43.701549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.755 [2024-04-24 05:26:43.701577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.755 qpair failed and we were unable to recover it.
00:31:06.755 [2024-04-24 05:26:43.701751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.755 [2024-04-24 05:26:43.701882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.755 [2024-04-24 05:26:43.701907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.755 qpair failed and we were unable to recover it.
00:31:06.755 [2024-04-24 05:26:43.702077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.755 [2024-04-24 05:26:43.702265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.755 [2024-04-24 05:26:43.702292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.755 qpair failed and we were unable to recover it.
00:31:06.755 [2024-04-24 05:26:43.702460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.755 [2024-04-24 05:26:43.702634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.755 [2024-04-24 05:26:43.702663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.755 qpair failed and we were unable to recover it.
00:31:06.755 [2024-04-24 05:26:43.702866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.755 [2024-04-24 05:26:43.702992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.755 [2024-04-24 05:26:43.703036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.755 qpair failed and we were unable to recover it.
00:31:06.755 [2024-04-24 05:26:43.703195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.755 [2024-04-24 05:26:43.703385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.755 [2024-04-24 05:26:43.703413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.755 qpair failed and we were unable to recover it.
00:31:06.755 [2024-04-24 05:26:43.703616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.755 [2024-04-24 05:26:43.703772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.755 [2024-04-24 05:26:43.703798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.755 qpair failed and we were unable to recover it. 00:31:06.755 [2024-04-24 05:26:43.703975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.755 [2024-04-24 05:26:43.704148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.755 [2024-04-24 05:26:43.704190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.755 qpair failed and we were unable to recover it. 00:31:06.755 [2024-04-24 05:26:43.704348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.755 [2024-04-24 05:26:43.704538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.755 [2024-04-24 05:26:43.704567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.755 qpair failed and we were unable to recover it. 00:31:06.755 [2024-04-24 05:26:43.704736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.755 [2024-04-24 05:26:43.704910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.755 [2024-04-24 05:26:43.704936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.755 qpair failed and we were unable to recover it. 
00:31:06.755 [2024-04-24 05:26:43.705112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.755 [2024-04-24 05:26:43.705298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.755 [2024-04-24 05:26:43.705352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.755 qpair failed and we were unable to recover it. 00:31:06.755 [2024-04-24 05:26:43.705513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.755 [2024-04-24 05:26:43.705706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.755 [2024-04-24 05:26:43.705735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.755 qpair failed and we were unable to recover it. 00:31:06.755 [2024-04-24 05:26:43.705924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.755 [2024-04-24 05:26:43.706076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.755 [2024-04-24 05:26:43.706101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.755 qpair failed and we were unable to recover it. 00:31:06.755 [2024-04-24 05:26:43.706233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.755 [2024-04-24 05:26:43.706389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.755 [2024-04-24 05:26:43.706415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.755 qpair failed and we were unable to recover it. 
00:31:06.755 [2024-04-24 05:26:43.706610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.755 [2024-04-24 05:26:43.706793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.755 [2024-04-24 05:26:43.706819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.755 qpair failed and we were unable to recover it. 00:31:06.755 [2024-04-24 05:26:43.707021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.755 [2024-04-24 05:26:43.707279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.755 [2024-04-24 05:26:43.707350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.755 qpair failed and we were unable to recover it. 00:31:06.755 [2024-04-24 05:26:43.707556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.755 [2024-04-24 05:26:43.707752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.755 [2024-04-24 05:26:43.707781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.755 qpair failed and we were unable to recover it. 00:31:06.755 [2024-04-24 05:26:43.707915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.755 [2024-04-24 05:26:43.708061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.755 [2024-04-24 05:26:43.708089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.755 qpair failed and we were unable to recover it. 
00:31:06.755 [2024-04-24 05:26:43.708245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.755 [2024-04-24 05:26:43.708408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.755 [2024-04-24 05:26:43.708436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.755 qpair failed and we were unable to recover it. 00:31:06.755 [2024-04-24 05:26:43.708609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.755 [2024-04-24 05:26:43.708762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.755 [2024-04-24 05:26:43.708805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.755 qpair failed and we were unable to recover it. 00:31:06.755 [2024-04-24 05:26:43.708997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.755 [2024-04-24 05:26:43.709134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.755 [2024-04-24 05:26:43.709163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.755 qpair failed and we were unable to recover it. 00:31:06.755 [2024-04-24 05:26:43.709351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.755 [2024-04-24 05:26:43.709527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.755 [2024-04-24 05:26:43.709553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.755 qpair failed and we were unable to recover it. 
00:31:06.755 [2024-04-24 05:26:43.709713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.755 [2024-04-24 05:26:43.709838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.755 [2024-04-24 05:26:43.709880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.755 qpair failed and we were unable to recover it. 00:31:06.755 [2024-04-24 05:26:43.710019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.755 [2024-04-24 05:26:43.710189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.755 [2024-04-24 05:26:43.710217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.755 qpair failed and we were unable to recover it. 00:31:06.755 [2024-04-24 05:26:43.710382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.755 [2024-04-24 05:26:43.710535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.755 [2024-04-24 05:26:43.710564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.755 qpair failed and we were unable to recover it. 00:31:06.755 [2024-04-24 05:26:43.710712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.755 [2024-04-24 05:26:43.710866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.755 [2024-04-24 05:26:43.710892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.755 qpair failed and we were unable to recover it. 
00:31:06.755 [2024-04-24 05:26:43.711046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.755 [2024-04-24 05:26:43.711245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.755 [2024-04-24 05:26:43.711274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.755 qpair failed and we were unable to recover it. 00:31:06.755 [2024-04-24 05:26:43.711441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.755 [2024-04-24 05:26:43.711606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.755 [2024-04-24 05:26:43.711643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.755 qpair failed and we were unable to recover it. 00:31:06.755 [2024-04-24 05:26:43.711791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.755 [2024-04-24 05:26:43.711963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.755 [2024-04-24 05:26:43.712005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.755 qpair failed and we were unable to recover it. 00:31:06.755 [2024-04-24 05:26:43.712165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.712418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.712469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.756 qpair failed and we were unable to recover it. 
00:31:06.756 [2024-04-24 05:26:43.712615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.712787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.712816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.756 qpair failed and we were unable to recover it. 00:31:06.756 [2024-04-24 05:26:43.712966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.713137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.713181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.756 qpair failed and we were unable to recover it. 00:31:06.756 [2024-04-24 05:26:43.713384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.713538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.713564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.756 qpair failed and we were unable to recover it. 00:31:06.756 [2024-04-24 05:26:43.713708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.713888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.713921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.756 qpair failed and we were unable to recover it. 
00:31:06.756 [2024-04-24 05:26:43.714090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.714242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.714284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.756 qpair failed and we were unable to recover it. 00:31:06.756 [2024-04-24 05:26:43.714468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.714660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.714688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.756 qpair failed and we were unable to recover it. 00:31:06.756 [2024-04-24 05:26:43.714894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.715037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.715062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.756 qpair failed and we were unable to recover it. 00:31:06.756 [2024-04-24 05:26:43.715215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.715389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.715415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.756 qpair failed and we were unable to recover it. 
00:31:06.756 [2024-04-24 05:26:43.715534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.715716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.715745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.756 qpair failed and we were unable to recover it. 00:31:06.756 [2024-04-24 05:26:43.715935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.716191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.716246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.756 qpair failed and we were unable to recover it. 00:31:06.756 [2024-04-24 05:26:43.716453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.716620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.716655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.756 qpair failed and we were unable to recover it. 00:31:06.756 [2024-04-24 05:26:43.716803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.716997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.717025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.756 qpair failed and we were unable to recover it. 
00:31:06.756 [2024-04-24 05:26:43.717221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.717387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.717441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.756 qpair failed and we were unable to recover it. 00:31:06.756 [2024-04-24 05:26:43.717620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.717769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.717812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.756 qpair failed and we were unable to recover it. 00:31:06.756 [2024-04-24 05:26:43.717981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.718158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.718184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.756 qpair failed and we were unable to recover it. 00:31:06.756 [2024-04-24 05:26:43.718313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.718517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.718546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.756 qpair failed and we were unable to recover it. 
00:31:06.756 [2024-04-24 05:26:43.718745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.718983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.719034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.756 qpair failed and we were unable to recover it. 00:31:06.756 [2024-04-24 05:26:43.719234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.719475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.719525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.756 qpair failed and we were unable to recover it. 00:31:06.756 [2024-04-24 05:26:43.719684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.719874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.719902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.756 qpair failed and we were unable to recover it. 00:31:06.756 [2024-04-24 05:26:43.720068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.720215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.720240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.756 qpair failed and we were unable to recover it. 
00:31:06.756 [2024-04-24 05:26:43.720387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.720553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.720581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.756 qpair failed and we were unable to recover it. 00:31:06.756 [2024-04-24 05:26:43.720794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.720945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.720970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.756 qpair failed and we were unable to recover it. 00:31:06.756 [2024-04-24 05:26:43.721100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.721242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.721284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.756 qpair failed and we were unable to recover it. 00:31:06.756 [2024-04-24 05:26:43.721447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.721602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.721636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.756 qpair failed and we were unable to recover it. 
00:31:06.756 [2024-04-24 05:26:43.721835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.722007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.722035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.756 qpair failed and we were unable to recover it. 00:31:06.756 [2024-04-24 05:26:43.722206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.722432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.722489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.756 qpair failed and we were unable to recover it. 00:31:06.756 [2024-04-24 05:26:43.722655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.722860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.722886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.756 qpair failed and we were unable to recover it. 00:31:06.756 [2024-04-24 05:26:43.723068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.723319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.756 [2024-04-24 05:26:43.723370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.756 qpair failed and we were unable to recover it. 
00:31:06.757 [2024-04-24 05:26:43.723551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.723726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.723752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.757 qpair failed and we were unable to recover it. 00:31:06.757 [2024-04-24 05:26:43.723903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.724077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.724106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.757 qpair failed and we were unable to recover it. 00:31:06.757 [2024-04-24 05:26:43.724273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.724441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.724469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.757 qpair failed and we were unable to recover it. 00:31:06.757 [2024-04-24 05:26:43.724672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.724831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.724860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.757 qpair failed and we were unable to recover it. 
00:31:06.757 [2024-04-24 05:26:43.725000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.725172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.725200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.757 qpair failed and we were unable to recover it. 00:31:06.757 [2024-04-24 05:26:43.725405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.725551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.725592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.757 qpair failed and we were unable to recover it. 00:31:06.757 [2024-04-24 05:26:43.725785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.725953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.725983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.757 qpair failed and we were unable to recover it. 00:31:06.757 [2024-04-24 05:26:43.726177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.726363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.726391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.757 qpair failed and we were unable to recover it. 
00:31:06.757 [2024-04-24 05:26:43.726556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.726726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.726755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.757 qpair failed and we were unable to recover it. 00:31:06.757 [2024-04-24 05:26:43.726929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.727080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.727123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.757 qpair failed and we were unable to recover it. 00:31:06.757 [2024-04-24 05:26:43.727292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.727483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.727512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.757 qpair failed and we were unable to recover it. 00:31:06.757 [2024-04-24 05:26:43.727700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.727858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.727887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.757 qpair failed and we were unable to recover it. 
00:31:06.757 [2024-04-24 05:26:43.728042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.728219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.728244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.757 qpair failed and we were unable to recover it. 00:31:06.757 [2024-04-24 05:26:43.728447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.728624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.728656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.757 qpair failed and we were unable to recover it. 00:31:06.757 [2024-04-24 05:26:43.728822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.728978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.729006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.757 qpair failed and we were unable to recover it. 00:31:06.757 [2024-04-24 05:26:43.729207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.729384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.729412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.757 qpair failed and we were unable to recover it. 
00:31:06.757 [2024-04-24 05:26:43.729582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.729756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.729783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.757 qpair failed and we were unable to recover it. 00:31:06.757 [2024-04-24 05:26:43.729942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.730083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.730112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.757 qpair failed and we were unable to recover it. 00:31:06.757 [2024-04-24 05:26:43.730311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.730464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.730489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.757 qpair failed and we were unable to recover it. 00:31:06.757 [2024-04-24 05:26:43.730641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.730846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.730874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.757 qpair failed and we were unable to recover it. 
00:31:06.757 [2024-04-24 05:26:43.731042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.731231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.731259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.757 qpair failed and we were unable to recover it. 00:31:06.757 [2024-04-24 05:26:43.731446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.731616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.731653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.757 qpair failed and we were unable to recover it. 00:31:06.757 [2024-04-24 05:26:43.731830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.732073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.732128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.757 qpair failed and we were unable to recover it. 00:31:06.757 [2024-04-24 05:26:43.732300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.732460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.732488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.757 qpair failed and we were unable to recover it. 
00:31:06.757 [2024-04-24 05:26:43.732666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.732824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.732850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.757 qpair failed and we were unable to recover it. 00:31:06.757 [2024-04-24 05:26:43.732993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.733112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.733138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.757 qpair failed and we were unable to recover it. 00:31:06.757 [2024-04-24 05:26:43.733310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.733475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.733508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.757 qpair failed and we were unable to recover it. 00:31:06.757 [2024-04-24 05:26:43.733684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.733848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.733873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.757 qpair failed and we were unable to recover it. 
00:31:06.757 [2024-04-24 05:26:43.734066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.734229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.757 [2024-04-24 05:26:43.734258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.757 qpair failed and we were unable to recover it. 00:31:06.758 [2024-04-24 05:26:43.734421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.734612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.734647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.758 qpair failed and we were unable to recover it. 00:31:06.758 [2024-04-24 05:26:43.734810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.734957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.735000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.758 qpair failed and we were unable to recover it. 00:31:06.758 [2024-04-24 05:26:43.735213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.735391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.735434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.758 qpair failed and we were unable to recover it. 
00:31:06.758 [2024-04-24 05:26:43.735598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.735747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.735776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.758 qpair failed and we were unable to recover it. 00:31:06.758 [2024-04-24 05:26:43.735955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.736100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.736126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.758 qpair failed and we were unable to recover it. 00:31:06.758 [2024-04-24 05:26:43.736331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.736502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.736529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.758 qpair failed and we were unable to recover it. 00:31:06.758 [2024-04-24 05:26:43.736698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.736863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.736891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.758 qpair failed and we were unable to recover it. 
00:31:06.758 [2024-04-24 05:26:43.737036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.737182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.737208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.758 qpair failed and we were unable to recover it. 00:31:06.758 [2024-04-24 05:26:43.737332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.737510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.737538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.758 qpair failed and we were unable to recover it. 00:31:06.758 [2024-04-24 05:26:43.737673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.737814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.737844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.758 qpair failed and we were unable to recover it. 00:31:06.758 [2024-04-24 05:26:43.738030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.738154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.738180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.758 qpair failed and we were unable to recover it. 
00:31:06.758 [2024-04-24 05:26:43.738335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.738506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.738549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.758 qpair failed and we were unable to recover it. 00:31:06.758 [2024-04-24 05:26:43.738725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.738879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.738904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.758 qpair failed and we were unable to recover it. 00:31:06.758 [2024-04-24 05:26:43.739076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.739251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.739312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.758 qpair failed and we were unable to recover it. 00:31:06.758 [2024-04-24 05:26:43.739511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.739641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.739667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.758 qpair failed and we were unable to recover it. 
00:31:06.758 [2024-04-24 05:26:43.739823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.740031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.740056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.758 qpair failed and we were unable to recover it. 00:31:06.758 [2024-04-24 05:26:43.740233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.740400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.740428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.758 qpair failed and we were unable to recover it. 00:31:06.758 [2024-04-24 05:26:43.740591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.740765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.740795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.758 qpair failed and we were unable to recover it. 00:31:06.758 [2024-04-24 05:26:43.740961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.741154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.741182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.758 qpair failed and we were unable to recover it. 
00:31:06.758 [2024-04-24 05:26:43.741380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.741501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.741526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.758 qpair failed and we were unable to recover it. 00:31:06.758 [2024-04-24 05:26:43.741656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.741829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.741870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.758 qpair failed and we were unable to recover it. 00:31:06.758 [2024-04-24 05:26:43.742044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.742200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.742228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.758 qpair failed and we were unable to recover it. 00:31:06.758 [2024-04-24 05:26:43.742403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.742527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.742552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.758 qpair failed and we were unable to recover it. 
00:31:06.758 [2024-04-24 05:26:43.742701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.742882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.742911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.758 qpair failed and we were unable to recover it. 00:31:06.758 [2024-04-24 05:26:43.743049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.743222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.743248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.758 qpair failed and we were unable to recover it. 00:31:06.758 [2024-04-24 05:26:43.743412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.743560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.758 [2024-04-24 05:26:43.743586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.759 qpair failed and we were unable to recover it. 00:31:06.759 [2024-04-24 05:26:43.743746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.743940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.743969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.759 qpair failed and we were unable to recover it. 
00:31:06.759 [2024-04-24 05:26:43.744149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.744297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.744323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.759 qpair failed and we were unable to recover it. 00:31:06.759 [2024-04-24 05:26:43.744472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.744620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.744651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.759 qpair failed and we were unable to recover it. 00:31:06.759 [2024-04-24 05:26:43.744850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.745021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.745049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.759 qpair failed and we were unable to recover it. 00:31:06.759 [2024-04-24 05:26:43.745222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.745455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.745507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.759 qpair failed and we were unable to recover it. 
00:31:06.759 [2024-04-24 05:26:43.745669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.745819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.745845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.759 qpair failed and we were unable to recover it. 00:31:06.759 [2024-04-24 05:26:43.745993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.746182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.746211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.759 qpair failed and we were unable to recover it. 00:31:06.759 [2024-04-24 05:26:43.746403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.746593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.746622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.759 qpair failed and we were unable to recover it. 00:31:06.759 [2024-04-24 05:26:43.746806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.746954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.746996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.759 qpair failed and we were unable to recover it. 
00:31:06.759 [2024-04-24 05:26:43.747188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.747360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.747389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.759 qpair failed and we were unable to recover it. 00:31:06.759 [2024-04-24 05:26:43.747564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.747724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.747753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.759 qpair failed and we were unable to recover it. 00:31:06.759 [2024-04-24 05:26:43.747922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.748079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.748105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.759 qpair failed and we were unable to recover it. 00:31:06.759 [2024-04-24 05:26:43.748257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.748431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.748466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.759 qpair failed and we were unable to recover it. 
00:31:06.759 [2024-04-24 05:26:43.748636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.748777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.748806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.759 qpair failed and we were unable to recover it. 00:31:06.759 [2024-04-24 05:26:43.748982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.749108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.749138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.759 qpair failed and we were unable to recover it. 00:31:06.759 [2024-04-24 05:26:43.749323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.749506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.749538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.759 qpair failed and we were unable to recover it. 00:31:06.759 [2024-04-24 05:26:43.749742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.749915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.749943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.759 qpair failed and we were unable to recover it. 
00:31:06.759 [2024-04-24 05:26:43.750095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.750219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.750246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.759 qpair failed and we were unable to recover it. 00:31:06.759 [2024-04-24 05:26:43.750419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.751352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.751386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.759 qpair failed and we were unable to recover it. 00:31:06.759 [2024-04-24 05:26:43.751581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.751765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.751794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.759 qpair failed and we were unable to recover it. 00:31:06.759 [2024-04-24 05:26:43.751992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.752248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.752306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.759 qpair failed and we were unable to recover it. 
00:31:06.759 [2024-04-24 05:26:43.752451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.752644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.752674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.759 qpair failed and we were unable to recover it. 00:31:06.759 [2024-04-24 05:26:43.752820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.752992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.753023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.759 qpair failed and we were unable to recover it. 00:31:06.759 [2024-04-24 05:26:43.753217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.753393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.753419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.759 qpair failed and we were unable to recover it. 00:31:06.759 [2024-04-24 05:26:43.753605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.759 [2024-04-24 05:26:43.753769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.760 [2024-04-24 05:26:43.753798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.760 qpair failed and we were unable to recover it. 
00:31:06.760 [2024-04-24 05:26:43.753943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.760 [2024-04-24 05:26:43.754071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.760 [2024-04-24 05:26:43.754098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.760 qpair failed and we were unable to recover it. 00:31:06.760 [2024-04-24 05:26:43.754273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.760 [2024-04-24 05:26:43.754446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.760 [2024-04-24 05:26:43.754474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.760 qpair failed and we were unable to recover it. 00:31:06.760 [2024-04-24 05:26:43.754642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.760 [2024-04-24 05:26:43.754818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.760 [2024-04-24 05:26:43.754847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.760 qpair failed and we were unable to recover it. 00:31:06.760 [2024-04-24 05:26:43.755043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.760 [2024-04-24 05:26:43.755186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.760 [2024-04-24 05:26:43.755212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.760 qpair failed and we were unable to recover it. 
00:31:06.760 [2024-04-24 05:26:43.755404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.760 [2024-04-24 05:26:43.755564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.760 [2024-04-24 05:26:43.755592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.760 qpair failed and we were unable to recover it. 00:31:06.760 [2024-04-24 05:26:43.755800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.760 [2024-04-24 05:26:43.755943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.760 [2024-04-24 05:26:43.755968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.760 qpair failed and we were unable to recover it. 00:31:06.760 [2024-04-24 05:26:43.756722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.760 [2024-04-24 05:26:43.756899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.760 [2024-04-24 05:26:43.756928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.760 qpair failed and we were unable to recover it. 00:31:06.760 [2024-04-24 05:26:43.757121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.760 [2024-04-24 05:26:43.757285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.760 [2024-04-24 05:26:43.757314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.760 qpair failed and we were unable to recover it. 
00:31:06.760 [2024-04-24 05:26:43.757515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.760 [2024-04-24 05:26:43.757724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.760 [2024-04-24 05:26:43.757751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.760 qpair failed and we were unable to recover it. 00:31:06.760 [2024-04-24 05:26:43.757904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.760 [2024-04-24 05:26:43.758094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.760 [2024-04-24 05:26:43.758122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.760 qpair failed and we were unable to recover it. 00:31:06.760 [2024-04-24 05:26:43.758281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.760 [2024-04-24 05:26:43.758410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.760 [2024-04-24 05:26:43.758439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.760 qpair failed and we were unable to recover it. 00:31:06.760 [2024-04-24 05:26:43.758606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.760 [2024-04-24 05:26:43.758787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.760 [2024-04-24 05:26:43.758817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.760 qpair failed and we were unable to recover it. 
00:31:06.760 [2024-04-24 05:26:43.759016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.760 [2024-04-24 05:26:43.759787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.760 [2024-04-24 05:26:43.759817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.760 qpair failed and we were unable to recover it.
00:31:06.760 [2024-04-24 05:26:43.760003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.760 [2024-04-24 05:26:43.760270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.760 [2024-04-24 05:26:43.760324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.760 qpair failed and we were unable to recover it.
00:31:06.760 [2024-04-24 05:26:43.760492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.760 [2024-04-24 05:26:43.760657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.760 [2024-04-24 05:26:43.760686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.760 qpair failed and we were unable to recover it.
00:31:06.760 [2024-04-24 05:26:43.760838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.760 [2024-04-24 05:26:43.761016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.760 [2024-04-24 05:26:43.761057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.760 qpair failed and we were unable to recover it.
00:31:06.760 [2024-04-24 05:26:43.761198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.760 [2024-04-24 05:26:43.761401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.760 [2024-04-24 05:26:43.761427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.760 qpair failed and we were unable to recover it.
00:31:06.760 [2024-04-24 05:26:43.761656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.760 [2024-04-24 05:26:43.761840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.760 [2024-04-24 05:26:43.761868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.760 qpair failed and we were unable to recover it.
00:31:06.760 [2024-04-24 05:26:43.762047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.760 [2024-04-24 05:26:43.762191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.760 [2024-04-24 05:26:43.762234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.760 qpair failed and we were unable to recover it.
00:31:06.760 [2024-04-24 05:26:43.762436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.760 [2024-04-24 05:26:43.762639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.760 [2024-04-24 05:26:43.762670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.760 qpair failed and we were unable to recover it.
00:31:06.760 [2024-04-24 05:26:43.762828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.760 [2024-04-24 05:26:43.763021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.760 [2024-04-24 05:26:43.763079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.760 qpair failed and we were unable to recover it.
00:31:06.760 [2024-04-24 05:26:43.763282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.760 [2024-04-24 05:26:43.763445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.760 [2024-04-24 05:26:43.763473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.760 qpair failed and we were unable to recover it.
00:31:06.760 [2024-04-24 05:26:43.763641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.761 [2024-04-24 05:26:43.763787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.761 [2024-04-24 05:26:43.763818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.761 qpair failed and we were unable to recover it.
00:31:06.761 [2024-04-24 05:26:43.763958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.761 [2024-04-24 05:26:43.764160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.761 [2024-04-24 05:26:43.764198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.761 qpair failed and we were unable to recover it.
00:31:06.761 [2024-04-24 05:26:43.764433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.761 [2024-04-24 05:26:43.764596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.761 [2024-04-24 05:26:43.764625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.761 qpair failed and we were unable to recover it.
00:31:06.761 [2024-04-24 05:26:43.764787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.761 [2024-04-24 05:26:43.764952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.761 [2024-04-24 05:26:43.764990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.761 qpair failed and we were unable to recover it.
00:31:06.761 [2024-04-24 05:26:43.765211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.761 [2024-04-24 05:26:43.765420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.761 [2024-04-24 05:26:43.765451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.761 qpair failed and we were unable to recover it.
00:31:06.761 [2024-04-24 05:26:43.765618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.761 [2024-04-24 05:26:43.765805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.761 [2024-04-24 05:26:43.765835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.761 qpair failed and we were unable to recover it.
00:31:06.761 [2024-04-24 05:26:43.766021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.761 [2024-04-24 05:26:43.766288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.761 [2024-04-24 05:26:43.766341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.761 qpair failed and we were unable to recover it.
00:31:06.761 [2024-04-24 05:26:43.766488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.761 [2024-04-24 05:26:43.766637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.761 [2024-04-24 05:26:43.766669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.761 qpair failed and we were unable to recover it.
00:31:06.761 [2024-04-24 05:26:43.766809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.761 [2024-04-24 05:26:43.766970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.761 [2024-04-24 05:26:43.766998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.761 qpair failed and we were unable to recover it.
00:31:06.761 [2024-04-24 05:26:43.767160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.761 [2024-04-24 05:26:43.767333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.761 [2024-04-24 05:26:43.767373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.761 qpair failed and we were unable to recover it.
00:31:06.761 [2024-04-24 05:26:43.767596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.761 [2024-04-24 05:26:43.767805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.761 [2024-04-24 05:26:43.767842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.761 qpair failed and we were unable to recover it.
00:31:06.761 [2024-04-24 05:26:43.768021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.761 [2024-04-24 05:26:43.768292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.761 [2024-04-24 05:26:43.768335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.761 qpair failed and we were unable to recover it.
00:31:06.761 [2024-04-24 05:26:43.768563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.761 [2024-04-24 05:26:43.768750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.761 [2024-04-24 05:26:43.768777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.761 qpair failed and we were unable to recover it.
00:31:06.761 [2024-04-24 05:26:43.768937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.761 [2024-04-24 05:26:43.769166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.761 [2024-04-24 05:26:43.769229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.761 qpair failed and we were unable to recover it.
00:31:06.761 [2024-04-24 05:26:43.769465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.761 [2024-04-24 05:26:43.769645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.761 [2024-04-24 05:26:43.769675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.761 qpair failed and we were unable to recover it.
00:31:06.761 [2024-04-24 05:26:43.769847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.761 [2024-04-24 05:26:43.769991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.761 [2024-04-24 05:26:43.770029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.761 qpair failed and we were unable to recover it.
00:31:06.761 [2024-04-24 05:26:43.770250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.761 [2024-04-24 05:26:43.770457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.761 [2024-04-24 05:26:43.770485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.761 qpair failed and we were unable to recover it.
00:31:06.761 [2024-04-24 05:26:43.770645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.761 [2024-04-24 05:26:43.770788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.761 [2024-04-24 05:26:43.770814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.761 qpair failed and we were unable to recover it.
00:31:06.761 [2024-04-24 05:26:43.770964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.761 [2024-04-24 05:26:43.771133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.761 [2024-04-24 05:26:43.771171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.761 qpair failed and we were unable to recover it.
00:31:06.761 [2024-04-24 05:26:43.771365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.761 [2024-04-24 05:26:43.771583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.761 [2024-04-24 05:26:43.771613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.761 qpair failed and we were unable to recover it.
00:31:06.761 [2024-04-24 05:26:43.771823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.761 [2024-04-24 05:26:43.771960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.761 [2024-04-24 05:26:43.772005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.761 qpair failed and we were unable to recover it.
00:31:06.761 [2024-04-24 05:26:43.772205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.772365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.772406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.762 qpair failed and we were unable to recover it.
00:31:06.762 [2024-04-24 05:26:43.772577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.772703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.772731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.762 qpair failed and we were unable to recover it.
00:31:06.762 [2024-04-24 05:26:43.772891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.773044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.773075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.762 qpair failed and we were unable to recover it.
00:31:06.762 [2024-04-24 05:26:43.773266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.773467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.773495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.762 qpair failed and we were unable to recover it.
00:31:06.762 [2024-04-24 05:26:43.773651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.773831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.773859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.762 qpair failed and we were unable to recover it.
00:31:06.762 [2024-04-24 05:26:43.774033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.774199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.774233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.762 qpair failed and we were unable to recover it.
00:31:06.762 [2024-04-24 05:26:43.774375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.774564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.774593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.762 qpair failed and we were unable to recover it.
00:31:06.762 [2024-04-24 05:26:43.774756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.774910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.774936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.762 qpair failed and we were unable to recover it.
00:31:06.762 [2024-04-24 05:26:43.775113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.775292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.775334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.762 qpair failed and we were unable to recover it.
00:31:06.762 [2024-04-24 05:26:43.775526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.775705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.775732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.762 qpair failed and we were unable to recover it.
00:31:06.762 [2024-04-24 05:26:43.775852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.776025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.776054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.762 qpair failed and we were unable to recover it.
00:31:06.762 [2024-04-24 05:26:43.776228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.777021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.777054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.762 qpair failed and we were unable to recover it.
00:31:06.762 [2024-04-24 05:26:43.777234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.778020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.778052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.762 qpair failed and we were unable to recover it.
00:31:06.762 [2024-04-24 05:26:43.778250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.778395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.778424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.762 qpair failed and we were unable to recover it.
00:31:06.762 [2024-04-24 05:26:43.778619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.778820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.778850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.762 qpair failed and we were unable to recover it.
00:31:06.762 [2024-04-24 05:26:43.779052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.779221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.779255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.762 qpair failed and we were unable to recover it.
00:31:06.762 [2024-04-24 05:26:43.779414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.779682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.779729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.762 qpair failed and we were unable to recover it.
00:31:06.762 [2024-04-24 05:26:43.779939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.780112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.780139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.762 qpair failed and we were unable to recover it.
00:31:06.762 [2024-04-24 05:26:43.780302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.780438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.780467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.762 qpair failed and we were unable to recover it.
00:31:06.762 [2024-04-24 05:26:43.780661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.780803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.780831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.762 qpair failed and we were unable to recover it.
00:31:06.762 [2024-04-24 05:26:43.780974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.781150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.781174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.762 qpair failed and we were unable to recover it.
00:31:06.762 [2024-04-24 05:26:43.781379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.781521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.781550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.762 qpair failed and we were unable to recover it.
00:31:06.762 [2024-04-24 05:26:43.781734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.781854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.781879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.762 qpair failed and we were unable to recover it.
00:31:06.762 [2024-04-24 05:26:43.782009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.782184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.782209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.762 qpair failed and we were unable to recover it.
00:31:06.762 [2024-04-24 05:26:43.782379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.782546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.782570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.762 qpair failed and we were unable to recover it.
00:31:06.762 [2024-04-24 05:26:43.782724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.762 [2024-04-24 05:26:43.782899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.763 [2024-04-24 05:26:43.782942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.763 qpair failed and we were unable to recover it.
00:31:06.763 [2024-04-24 05:26:43.783100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.763 [2024-04-24 05:26:43.783248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.763 [2024-04-24 05:26:43.783273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.763 qpair failed and we were unable to recover it.
00:31:06.763 [2024-04-24 05:26:43.783482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.763 [2024-04-24 05:26:43.783654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.763 [2024-04-24 05:26:43.783681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.763 qpair failed and we were unable to recover it.
00:31:06.763 [2024-04-24 05:26:43.783837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.763 [2024-04-24 05:26:43.784006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.763 [2024-04-24 05:26:43.784035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.763 qpair failed and we were unable to recover it.
00:31:06.763 [2024-04-24 05:26:43.784205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.763 [2024-04-24 05:26:43.784351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.763 [2024-04-24 05:26:43.784375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.763 qpair failed and we were unable to recover it.
00:31:06.763 [2024-04-24 05:26:43.784581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.763 [2024-04-24 05:26:43.784766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.763 [2024-04-24 05:26:43.784793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.763 qpair failed and we were unable to recover it.
00:31:06.763 [2024-04-24 05:26:43.784968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.763 [2024-04-24 05:26:43.785143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.763 [2024-04-24 05:26:43.785172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.763 qpair failed and we were unable to recover it.
00:31:06.763 [2024-04-24 05:26:43.785316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.763 [2024-04-24 05:26:43.785459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.763 [2024-04-24 05:26:43.785484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.763 qpair failed and we were unable to recover it.
00:31:06.763 [2024-04-24 05:26:43.785657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.763 [2024-04-24 05:26:43.785834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.763 [2024-04-24 05:26:43.785862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.763 qpair failed and we were unable to recover it.
00:31:06.763 [2024-04-24 05:26:43.786001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.763 [2024-04-24 05:26:43.786190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.763 [2024-04-24 05:26:43.786217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.763 qpair failed and we were unable to recover it.
00:31:06.763 [2024-04-24 05:26:43.786392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.763 [2024-04-24 05:26:43.786586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.763 [2024-04-24 05:26:43.786613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.763 qpair failed and we were unable to recover it.
00:31:06.763 [2024-04-24 05:26:43.786816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.763 [2024-04-24 05:26:43.786946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.763 [2024-04-24 05:26:43.786970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.763 qpair failed and we were unable to recover it.
00:31:06.763 [2024-04-24 05:26:43.787113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.763 [2024-04-24 05:26:43.787235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.763 [2024-04-24 05:26:43.787259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.763 qpair failed and we were unable to recover it.
00:31:06.763 [2024-04-24 05:26:43.787410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.763 [2024-04-24 05:26:43.787575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.763 [2024-04-24 05:26:43.787602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.763 qpair failed and we were unable to recover it.
00:31:06.763 [2024-04-24 05:26:43.787855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.763 [2024-04-24 05:26:43.787984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.763 [2024-04-24 05:26:43.788009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.763 qpair failed and we were unable to recover it.
00:31:06.763 [2024-04-24 05:26:43.788208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.763 [2024-04-24 05:26:43.788367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.763 [2024-04-24 05:26:43.788394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.763 qpair failed and we were unable to recover it.
00:31:06.763 [2024-04-24 05:26:43.788569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.763 [2024-04-24 05:26:43.788720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.763 [2024-04-24 05:26:43.788746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.763 qpair failed and we were unable to recover it. 00:31:06.763 [2024-04-24 05:26:43.788925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.763 [2024-04-24 05:26:43.789064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.763 [2024-04-24 05:26:43.789091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.763 qpair failed and we were unable to recover it. 00:31:06.763 [2024-04-24 05:26:43.789279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.763 [2024-04-24 05:26:43.789467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.763 [2024-04-24 05:26:43.789494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.763 qpair failed and we were unable to recover it. 00:31:06.763 [2024-04-24 05:26:43.789676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.763 [2024-04-24 05:26:43.789871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.763 [2024-04-24 05:26:43.789898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.763 qpair failed and we were unable to recover it. 
00:31:06.763 [2024-04-24 05:26:43.790037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.763 [2024-04-24 05:26:43.790199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.763 [2024-04-24 05:26:43.790226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.763 qpair failed and we were unable to recover it. 00:31:06.763 [2024-04-24 05:26:43.790402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.763 [2024-04-24 05:26:43.790577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.763 [2024-04-24 05:26:43.790618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.763 qpair failed and we were unable to recover it. 00:31:06.763 [2024-04-24 05:26:43.790800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.763 [2024-04-24 05:26:43.790963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.763 [2024-04-24 05:26:43.790992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.763 qpair failed and we were unable to recover it. 00:31:06.763 [2024-04-24 05:26:43.791186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.763 [2024-04-24 05:26:43.791330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.763 [2024-04-24 05:26:43.791355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.763 qpair failed and we were unable to recover it. 
00:31:06.763 [2024-04-24 05:26:43.791542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.763 [2024-04-24 05:26:43.791688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.763 [2024-04-24 05:26:43.791713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.763 qpair failed and we were unable to recover it. 00:31:06.763 [2024-04-24 05:26:43.791863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.763 [2024-04-24 05:26:43.792026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.763 [2024-04-24 05:26:43.792051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.763 qpair failed and we were unable to recover it. 00:31:06.763 [2024-04-24 05:26:43.792169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.763 [2024-04-24 05:26:43.792342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.763 [2024-04-24 05:26:43.792367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.763 qpair failed and we were unable to recover it. 00:31:06.763 [2024-04-24 05:26:43.792547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.763 [2024-04-24 05:26:43.792692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.763 [2024-04-24 05:26:43.792718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.763 qpair failed and we were unable to recover it. 
00:31:06.763 [2024-04-24 05:26:43.792841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.763 [2024-04-24 05:26:43.792984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.763 [2024-04-24 05:26:43.793009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.763 qpair failed and we were unable to recover it. 00:31:06.763 [2024-04-24 05:26:43.793163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.793311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.793336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.764 qpair failed and we were unable to recover it. 00:31:06.764 [2024-04-24 05:26:43.793485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.793666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.793692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.764 qpair failed and we were unable to recover it. 00:31:06.764 [2024-04-24 05:26:43.793854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.794009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.794034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.764 qpair failed and we were unable to recover it. 
00:31:06.764 [2024-04-24 05:26:43.794160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.794307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.794331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.764 qpair failed and we were unable to recover it. 00:31:06.764 [2024-04-24 05:26:43.794481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.794658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.794685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.764 qpair failed and we were unable to recover it. 00:31:06.764 [2024-04-24 05:26:43.794809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.794930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.794955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.764 qpair failed and we were unable to recover it. 00:31:06.764 [2024-04-24 05:26:43.795110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.795232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.795256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.764 qpair failed and we were unable to recover it. 
00:31:06.764 [2024-04-24 05:26:43.795375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.795516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.795541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.764 qpair failed and we were unable to recover it. 00:31:06.764 [2024-04-24 05:26:43.795687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.795806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.795830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.764 qpair failed and we were unable to recover it. 00:31:06.764 [2024-04-24 05:26:43.795955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.796101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.796126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.764 qpair failed and we were unable to recover it. 00:31:06.764 [2024-04-24 05:26:43.796280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.796425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.796449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.764 qpair failed and we were unable to recover it. 
00:31:06.764 [2024-04-24 05:26:43.796601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.796769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.796794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.764 qpair failed and we were unable to recover it. 00:31:06.764 [2024-04-24 05:26:43.796944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.797067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.797095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.764 qpair failed and we were unable to recover it. 00:31:06.764 [2024-04-24 05:26:43.797261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.797433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.797458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.764 qpair failed and we were unable to recover it. 00:31:06.764 [2024-04-24 05:26:43.797604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.797738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.797763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.764 qpair failed and we were unable to recover it. 
00:31:06.764 [2024-04-24 05:26:43.797918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.798092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.798117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.764 qpair failed and we were unable to recover it. 00:31:06.764 [2024-04-24 05:26:43.798258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.798438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.798462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.764 qpair failed and we were unable to recover it. 00:31:06.764 [2024-04-24 05:26:43.798575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.798716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.798741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.764 qpair failed and we were unable to recover it. 00:31:06.764 [2024-04-24 05:26:43.798892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.799055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.799079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.764 qpair failed and we were unable to recover it. 
00:31:06.764 [2024-04-24 05:26:43.799225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.799400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.799424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.764 qpair failed and we were unable to recover it. 00:31:06.764 [2024-04-24 05:26:43.799566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.799719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.799744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.764 qpair failed and we were unable to recover it. 00:31:06.764 [2024-04-24 05:26:43.799895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.800019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.800043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.764 qpair failed and we were unable to recover it. 00:31:06.764 [2024-04-24 05:26:43.800236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.800352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.800377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.764 qpair failed and we were unable to recover it. 
00:31:06.764 [2024-04-24 05:26:43.800529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.800710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.800736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.764 qpair failed and we were unable to recover it. 00:31:06.764 [2024-04-24 05:26:43.800880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.801010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.801034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.764 qpair failed and we were unable to recover it. 00:31:06.764 [2024-04-24 05:26:43.801183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.764 [2024-04-24 05:26:43.801332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.801358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.765 qpair failed and we were unable to recover it. 00:31:06.765 [2024-04-24 05:26:43.801532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.801652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.801678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.765 qpair failed and we were unable to recover it. 
00:31:06.765 [2024-04-24 05:26:43.801826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.801979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.802003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.765 qpair failed and we were unable to recover it. 00:31:06.765 [2024-04-24 05:26:43.802152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.802302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.802326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.765 qpair failed and we were unable to recover it. 00:31:06.765 [2024-04-24 05:26:43.802472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.802657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.802683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.765 qpair failed and we were unable to recover it. 00:31:06.765 [2024-04-24 05:26:43.802830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.803006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.803030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.765 qpair failed and we were unable to recover it. 
00:31:06.765 [2024-04-24 05:26:43.803155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.803298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.803322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.765 qpair failed and we were unable to recover it. 00:31:06.765 [2024-04-24 05:26:43.803468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.803607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.803637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.765 qpair failed and we were unable to recover it. 00:31:06.765 [2024-04-24 05:26:43.803770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.803945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.803969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.765 qpair failed and we were unable to recover it. 00:31:06.765 [2024-04-24 05:26:43.804089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.804209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.804233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.765 qpair failed and we were unable to recover it. 
00:31:06.765 [2024-04-24 05:26:43.804408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.804553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.804578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.765 qpair failed and we were unable to recover it. 00:31:06.765 [2024-04-24 05:26:43.804727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.804905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.804930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.765 qpair failed and we were unable to recover it. 00:31:06.765 [2024-04-24 05:26:43.805084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.805256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.805280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.765 qpair failed and we were unable to recover it. 00:31:06.765 [2024-04-24 05:26:43.805428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.805575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.805599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.765 qpair failed and we were unable to recover it. 
00:31:06.765 [2024-04-24 05:26:43.805825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.805995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.806019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.765 qpair failed and we were unable to recover it. 00:31:06.765 [2024-04-24 05:26:43.806171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.806296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.806322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.765 qpair failed and we were unable to recover it. 00:31:06.765 [2024-04-24 05:26:43.806444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.806592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.806616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.765 qpair failed and we were unable to recover it. 00:31:06.765 [2024-04-24 05:26:43.806758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.806908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.806934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.765 qpair failed and we were unable to recover it. 
00:31:06.765 [2024-04-24 05:26:43.807086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.807258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.807282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.765 qpair failed and we were unable to recover it. 00:31:06.765 [2024-04-24 05:26:43.807407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.807583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.807607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.765 qpair failed and we were unable to recover it. 00:31:06.765 [2024-04-24 05:26:43.807764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.807882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.807906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.765 qpair failed and we were unable to recover it. 00:31:06.765 [2024-04-24 05:26:43.808088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.808236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.808260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.765 qpair failed and we were unable to recover it. 
00:31:06.765 [2024-04-24 05:26:43.808412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.808560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.808585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.765 qpair failed and we were unable to recover it. 00:31:06.765 [2024-04-24 05:26:43.808751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.808868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.808892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.765 qpair failed and we were unable to recover it. 00:31:06.765 [2024-04-24 05:26:43.809064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.809240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.809264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.765 qpair failed and we were unable to recover it. 00:31:06.765 [2024-04-24 05:26:43.809438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.809593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.765 [2024-04-24 05:26:43.809617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.765 qpair failed and we were unable to recover it. 
00:31:06.765 [2024-04-24 05:26:43.809754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.765 [2024-04-24 05:26:43.809873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.765 [2024-04-24 05:26:43.809897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.765 qpair failed and we were unable to recover it.
00:31:06.765 [2024-04-24 05:26:43.810053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.765 [2024-04-24 05:26:43.810177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.765 [2024-04-24 05:26:43.810201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.765 qpair failed and we were unable to recover it.
00:31:06.765 [2024-04-24 05:26:43.810378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.765 [2024-04-24 05:26:43.810530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.765 [2024-04-24 05:26:43.810555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.765 qpair failed and we were unable to recover it.
00:31:06.765 [2024-04-24 05:26:43.810678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.765 [2024-04-24 05:26:43.810828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.810853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.766 qpair failed and we were unable to recover it.
00:31:06.766 [2024-04-24 05:26:43.811012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.811158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.811182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.766 qpair failed and we were unable to recover it.
00:31:06.766 [2024-04-24 05:26:43.811328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.811502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.811526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.766 qpair failed and we were unable to recover it.
00:31:06.766 [2024-04-24 05:26:43.811709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.811853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.811877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.766 qpair failed and we were unable to recover it.
00:31:06.766 [2024-04-24 05:26:43.812062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.812183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.812207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.766 qpair failed and we were unable to recover it.
00:31:06.766 [2024-04-24 05:26:43.812336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.812456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.812482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.766 qpair failed and we were unable to recover it.
00:31:06.766 [2024-04-24 05:26:43.812638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.812803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.812828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.766 qpair failed and we were unable to recover it.
00:31:06.766 [2024-04-24 05:26:43.812976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.813095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.813119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.766 qpair failed and we were unable to recover it.
00:31:06.766 [2024-04-24 05:26:43.813268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.813440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.813464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.766 qpair failed and we were unable to recover it.
00:31:06.766 [2024-04-24 05:26:43.813617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.813781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.813811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.766 qpair failed and we were unable to recover it.
00:31:06.766 [2024-04-24 05:26:43.813996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.814142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.814166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.766 qpair failed and we were unable to recover it.
00:31:06.766 [2024-04-24 05:26:43.814283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.814430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.814456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.766 qpair failed and we were unable to recover it.
00:31:06.766 [2024-04-24 05:26:43.814605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.814790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.814816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.766 qpair failed and we were unable to recover it.
00:31:06.766 [2024-04-24 05:26:43.814936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.815085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.815109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.766 qpair failed and we were unable to recover it.
00:31:06.766 [2024-04-24 05:26:43.815263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.815385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.815409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.766 qpair failed and we were unable to recover it.
00:31:06.766 [2024-04-24 05:26:43.815524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.815677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.815704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.766 qpair failed and we were unable to recover it.
00:31:06.766 [2024-04-24 05:26:43.815856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.815976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.816001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.766 qpair failed and we were unable to recover it.
00:31:06.766 [2024-04-24 05:26:43.816152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.816274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.816300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.766 qpair failed and we were unable to recover it.
00:31:06.766 [2024-04-24 05:26:43.816476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.816596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.816622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.766 qpair failed and we were unable to recover it.
00:31:06.766 [2024-04-24 05:26:43.816791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.816936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.816960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.766 qpair failed and we were unable to recover it.
00:31:06.766 [2024-04-24 05:26:43.817085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.817235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.817259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.766 qpair failed and we were unable to recover it.
00:31:06.766 [2024-04-24 05:26:43.817378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.817558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.817582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.766 qpair failed and we were unable to recover it.
00:31:06.766 [2024-04-24 05:26:43.817714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.817868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.817893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.766 qpair failed and we were unable to recover it.
00:31:06.766 [2024-04-24 05:26:43.818065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.818241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.818266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.766 qpair failed and we were unable to recover it.
00:31:06.766 [2024-04-24 05:26:43.818467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.818611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.818663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.766 qpair failed and we were unable to recover it.
00:31:06.766 [2024-04-24 05:26:43.818818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.819015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.819042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.766 qpair failed and we were unable to recover it.
00:31:06.766 [2024-04-24 05:26:43.819186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.819330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.819370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.766 qpair failed and we were unable to recover it.
00:31:06.766 [2024-04-24 05:26:43.819538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.819729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.819759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.766 qpair failed and we were unable to recover it.
00:31:06.766 [2024-04-24 05:26:43.819939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.820104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.820132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.766 qpair failed and we were unable to recover it.
00:31:06.766 [2024-04-24 05:26:43.820296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.766 [2024-04-24 05:26:43.820422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.820462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.767 qpair failed and we were unable to recover it.
00:31:06.767 [2024-04-24 05:26:43.820639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.820773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.820800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.767 qpair failed and we were unable to recover it.
00:31:06.767 [2024-04-24 05:26:43.821014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.821161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.821186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.767 qpair failed and we were unable to recover it.
00:31:06.767 [2024-04-24 05:26:43.821360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.821487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.821513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.767 qpair failed and we were unable to recover it.
00:31:06.767 [2024-04-24 05:26:43.821641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.821806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.821830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.767 qpair failed and we were unable to recover it.
00:31:06.767 [2024-04-24 05:26:43.821951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.822128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.822155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.767 qpair failed and we were unable to recover it.
00:31:06.767 [2024-04-24 05:26:43.822324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.822442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.822467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.767 qpair failed and we were unable to recover it.
00:31:06.767 [2024-04-24 05:26:43.822670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.822831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.822858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.767 qpair failed and we were unable to recover it.
00:31:06.767 [2024-04-24 05:26:43.823035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.823183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.823208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.767 qpair failed and we were unable to recover it.
00:31:06.767 [2024-04-24 05:26:43.823330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.823516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.823541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.767 qpair failed and we were unable to recover it.
00:31:06.767 [2024-04-24 05:26:43.823731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.823920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.823948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.767 qpair failed and we were unable to recover it.
00:31:06.767 [2024-04-24 05:26:43.824113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.824272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.824299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.767 qpair failed and we were unable to recover it.
00:31:06.767 [2024-04-24 05:26:43.824466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.824580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.824604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.767 qpair failed and we were unable to recover it.
00:31:06.767 [2024-04-24 05:26:43.824743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.824889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.824913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.767 qpair failed and we were unable to recover it.
00:31:06.767 [2024-04-24 05:26:43.825125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.825285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.825313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.767 qpair failed and we were unable to recover it.
00:31:06.767 [2024-04-24 05:26:43.825489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.825639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.825665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.767 qpair failed and we were unable to recover it.
00:31:06.767 [2024-04-24 05:26:43.825814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.826004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.826031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.767 qpair failed and we were unable to recover it.
00:31:06.767 [2024-04-24 05:26:43.826194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.826387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.826415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.767 qpair failed and we were unable to recover it.
00:31:06.767 [2024-04-24 05:26:43.826603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.826788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.826814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.767 qpair failed and we were unable to recover it.
00:31:06.767 [2024-04-24 05:26:43.826953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.827120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.827148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.767 qpair failed and we were unable to recover it.
00:31:06.767 [2024-04-24 05:26:43.827309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.827471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.827498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.767 qpair failed and we were unable to recover it.
00:31:06.767 [2024-04-24 05:26:43.827653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.827852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.827877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.767 qpair failed and we were unable to recover it.
00:31:06.767 [2024-04-24 05:26:43.828052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.828237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.828264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.767 qpair failed and we were unable to recover it.
00:31:06.767 [2024-04-24 05:26:43.828434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.828592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.828619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.767 qpair failed and we were unable to recover it.
00:31:06.767 [2024-04-24 05:26:43.828787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.828900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.828924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.767 qpair failed and we were unable to recover it.
00:31:06.767 [2024-04-24 05:26:43.829099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.829286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.829314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.767 qpair failed and we were unable to recover it.
00:31:06.767 [2024-04-24 05:26:43.829475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.829639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.829667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.767 qpair failed and we were unable to recover it.
00:31:06.767 [2024-04-24 05:26:43.829845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.829962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.829986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.767 qpair failed and we were unable to recover it.
00:31:06.767 [2024-04-24 05:26:43.830194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.830391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.830416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.767 qpair failed and we were unable to recover it.
00:31:06.767 [2024-04-24 05:26:43.830610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.767 [2024-04-24 05:26:43.830778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.830807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.768 qpair failed and we were unable to recover it.
00:31:06.768 [2024-04-24 05:26:43.831006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.831206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.831233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.768 qpair failed and we were unable to recover it.
00:31:06.768 [2024-04-24 05:26:43.831393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.831532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.831564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.768 qpair failed and we were unable to recover it.
00:31:06.768 [2024-04-24 05:26:43.831752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.831936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.831961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.768 qpair failed and we were unable to recover it.
00:31:06.768 [2024-04-24 05:26:43.832089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.832238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.832277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.768 qpair failed and we were unable to recover it.
00:31:06.768 [2024-04-24 05:26:43.832437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.832576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.832603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.768 qpair failed and we were unable to recover it.
00:31:06.768 [2024-04-24 05:26:43.832810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.832929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.832953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.768 qpair failed and we were unable to recover it.
00:31:06.768 [2024-04-24 05:26:43.833132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.833249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.833273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.768 qpair failed and we were unable to recover it.
00:31:06.768 [2024-04-24 05:26:43.833424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.833614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.833646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.768 qpair failed and we were unable to recover it.
00:31:06.768 [2024-04-24 05:26:43.833851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.833996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.834019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.768 qpair failed and we were unable to recover it.
00:31:06.768 [2024-04-24 05:26:43.834162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.834307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.834330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.768 qpair failed and we were unable to recover it.
00:31:06.768 [2024-04-24 05:26:43.834486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.834622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.834674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.768 qpair failed and we were unable to recover it.
00:31:06.768 [2024-04-24 05:26:43.834867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.835005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.835033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.768 qpair failed and we were unable to recover it.
00:31:06.768 [2024-04-24 05:26:43.835256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.835406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.835432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.768 qpair failed and we were unable to recover it.
00:31:06.768 [2024-04-24 05:26:43.835584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.835734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.835759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.768 qpair failed and we were unable to recover it.
00:31:06.768 [2024-04-24 05:26:43.835909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.836084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.836111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.768 qpair failed and we were unable to recover it.
00:31:06.768 [2024-04-24 05:26:43.836279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.836417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.836445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.768 qpair failed and we were unable to recover it.
00:31:06.768 [2024-04-24 05:26:43.836618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.836756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.836781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.768 qpair failed and we were unable to recover it.
00:31:06.768 [2024-04-24 05:26:43.836966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.837089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.837113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.768 qpair failed and we were unable to recover it.
00:31:06.768 [2024-04-24 05:26:43.837243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.837363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.837387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.768 qpair failed and we were unable to recover it.
00:31:06.768 [2024-04-24 05:26:43.837562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.837731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.837756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.768 qpair failed and we were unable to recover it.
00:31:06.768 [2024-04-24 05:26:43.837923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.838063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.838091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.768 qpair failed and we were unable to recover it.
00:31:06.768 [2024-04-24 05:26:43.838283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.838475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.838499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.768 qpair failed and we were unable to recover it.
00:31:06.768 [2024-04-24 05:26:43.838625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.838762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.838786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.768 qpair failed and we were unable to recover it.
00:31:06.768 [2024-04-24 05:26:43.838903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.839044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.839071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.768 qpair failed and we were unable to recover it.
00:31:06.768 [2024-04-24 05:26:43.839241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.768 [2024-04-24 05:26:43.839379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.769 [2024-04-24 05:26:43.839408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.769 qpair failed and we were unable to recover it.
00:31:06.769 [2024-04-24 05:26:43.839577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.769 [2024-04-24 05:26:43.839696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.769 [2024-04-24 05:26:43.839722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.769 qpair failed and we were unable to recover it.
00:31:06.769 [2024-04-24 05:26:43.839839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.840029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.840055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.769 qpair failed and we were unable to recover it. 00:31:06.769 [2024-04-24 05:26:43.840206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.840398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.840422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.769 qpair failed and we were unable to recover it. 00:31:06.769 [2024-04-24 05:26:43.840544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.840660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.840685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.769 qpair failed and we were unable to recover it. 00:31:06.769 [2024-04-24 05:26:43.840837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.840958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.840983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.769 qpair failed and we were unable to recover it. 
00:31:06.769 [2024-04-24 05:26:43.841108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.841310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.841353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.769 qpair failed and we were unable to recover it. 00:31:06.769 [2024-04-24 05:26:43.841499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.841621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.841653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.769 qpair failed and we were unable to recover it. 00:31:06.769 [2024-04-24 05:26:43.841844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.841987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.842014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.769 qpair failed and we were unable to recover it. 00:31:06.769 [2024-04-24 05:26:43.842155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.842287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.842314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.769 qpair failed and we were unable to recover it. 
00:31:06.769 [2024-04-24 05:26:43.842483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.842634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.842660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.769 qpair failed and we were unable to recover it. 00:31:06.769 [2024-04-24 05:26:43.842815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.842962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.842986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.769 qpair failed and we were unable to recover it. 00:31:06.769 [2024-04-24 05:26:43.843148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.843302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.843326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.769 qpair failed and we were unable to recover it. 00:31:06.769 [2024-04-24 05:26:43.843478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.843676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.843719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.769 qpair failed and we were unable to recover it. 
00:31:06.769 [2024-04-24 05:26:43.843848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.844054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.844081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.769 qpair failed and we were unable to recover it. 00:31:06.769 [2024-04-24 05:26:43.844272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.844435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.844462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.769 qpair failed and we were unable to recover it. 00:31:06.769 [2024-04-24 05:26:43.844601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.844773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.844797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.769 qpair failed and we were unable to recover it. 00:31:06.769 [2024-04-24 05:26:43.844946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.845132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.845160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.769 qpair failed and we were unable to recover it. 
00:31:06.769 [2024-04-24 05:26:43.845331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.845468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.845493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.769 qpair failed and we were unable to recover it. 00:31:06.769 [2024-04-24 05:26:43.845615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.845779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.845804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.769 qpair failed and we were unable to recover it. 00:31:06.769 [2024-04-24 05:26:43.845957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.846076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.846101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.769 qpair failed and we were unable to recover it. 00:31:06.769 [2024-04-24 05:26:43.846222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.846376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.846400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.769 qpair failed and we were unable to recover it. 
00:31:06.769 [2024-04-24 05:26:43.846547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.846675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.846700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.769 qpair failed and we were unable to recover it. 00:31:06.769 [2024-04-24 05:26:43.846849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.846999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.847024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.769 qpair failed and we were unable to recover it. 00:31:06.769 [2024-04-24 05:26:43.847153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.847320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.847345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.769 qpair failed and we were unable to recover it. 00:31:06.769 [2024-04-24 05:26:43.847487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.847639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.769 [2024-04-24 05:26:43.847664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.769 qpair failed and we were unable to recover it. 
00:31:06.770 [2024-04-24 05:26:43.847825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.770 [2024-04-24 05:26:43.848008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.770 [2024-04-24 05:26:43.848035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.770 qpair failed and we were unable to recover it. 00:31:06.770 [2024-04-24 05:26:43.848171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.770 [2024-04-24 05:26:43.848319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.770 [2024-04-24 05:26:43.848344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.770 qpair failed and we were unable to recover it. 00:31:06.770 [2024-04-24 05:26:43.848490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.770 [2024-04-24 05:26:43.848649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.770 [2024-04-24 05:26:43.848681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.770 qpair failed and we were unable to recover it. 00:31:06.770 [2024-04-24 05:26:43.848801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.770 [2024-04-24 05:26:43.848951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.770 [2024-04-24 05:26:43.848976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.770 qpair failed and we were unable to recover it. 
00:31:06.770 [2024-04-24 05:26:43.849151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.770 [2024-04-24 05:26:43.849292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.770 [2024-04-24 05:26:43.849317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.770 qpair failed and we were unable to recover it. 00:31:06.770 [2024-04-24 05:26:43.849471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.770 [2024-04-24 05:26:43.849637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.770 [2024-04-24 05:26:43.849680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.770 qpair failed and we were unable to recover it. 00:31:06.770 [2024-04-24 05:26:43.849821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.770 [2024-04-24 05:26:43.849970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.770 [2024-04-24 05:26:43.849994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.770 qpair failed and we were unable to recover it. 00:31:06.770 [2024-04-24 05:26:43.850120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.770 [2024-04-24 05:26:43.850271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.770 [2024-04-24 05:26:43.850295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.770 qpair failed and we were unable to recover it. 
00:31:06.770 [2024-04-24 05:26:43.850422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.770 [2024-04-24 05:26:43.850571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.770 [2024-04-24 05:26:43.850596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.770 qpair failed and we were unable to recover it. 00:31:06.770 [2024-04-24 05:26:43.850753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.770 [2024-04-24 05:26:43.850900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.770 [2024-04-24 05:26:43.850924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.770 qpair failed and we were unable to recover it. 00:31:06.770 [2024-04-24 05:26:43.851046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.770 [2024-04-24 05:26:43.851196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.770 [2024-04-24 05:26:43.851223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.770 qpair failed and we were unable to recover it. 00:31:06.770 [2024-04-24 05:26:43.851346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.770 [2024-04-24 05:26:43.851495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.770 [2024-04-24 05:26:43.851521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.770 qpair failed and we were unable to recover it. 
00:31:06.770 [2024-04-24 05:26:43.851704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.770 [2024-04-24 05:26:43.851859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.770 [2024-04-24 05:26:43.851887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.770 qpair failed and we were unable to recover it. 00:31:06.770 [2024-04-24 05:26:43.852011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.770 [2024-04-24 05:26:43.852139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.770 [2024-04-24 05:26:43.852164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.770 qpair failed and we were unable to recover it. 00:31:06.770 [2024-04-24 05:26:43.852312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.770 [2024-04-24 05:26:43.852464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.770 [2024-04-24 05:26:43.852489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.770 qpair failed and we were unable to recover it. 00:31:06.770 [2024-04-24 05:26:43.852607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.770 [2024-04-24 05:26:43.852735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.770 [2024-04-24 05:26:43.852760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.770 qpair failed and we were unable to recover it. 
00:31:06.770 [2024-04-24 05:26:43.852932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.770 [2024-04-24 05:26:43.853083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.770 [2024-04-24 05:26:43.853109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.770 qpair failed and we were unable to recover it. 00:31:06.770 [2024-04-24 05:26:43.853262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.770 [2024-04-24 05:26:43.853388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.770 [2024-04-24 05:26:43.853412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.770 qpair failed and we were unable to recover it. 00:31:06.770 [2024-04-24 05:26:43.853585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.770 [2024-04-24 05:26:43.853744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.770 [2024-04-24 05:26:43.853770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.770 qpair failed and we were unable to recover it. 00:31:06.770 [2024-04-24 05:26:43.853888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.770 [2024-04-24 05:26:43.854016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.770 [2024-04-24 05:26:43.854040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.770 qpair failed and we were unable to recover it. 
00:31:06.770 [2024-04-24 05:26:43.854192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.770 [2024-04-24 05:26:43.854394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.770 [2024-04-24 05:26:43.854437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.770 qpair failed and we were unable to recover it. 00:31:06.770 [2024-04-24 05:26:43.854559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.770 [2024-04-24 05:26:43.854680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.771 [2024-04-24 05:26:43.854705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.771 qpair failed and we were unable to recover it. 00:31:06.771 [2024-04-24 05:26:43.854852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.771 [2024-04-24 05:26:43.854968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.771 [2024-04-24 05:26:43.854992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.771 qpair failed and we were unable to recover it. 00:31:06.771 [2024-04-24 05:26:43.855144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.771 [2024-04-24 05:26:43.855275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.771 [2024-04-24 05:26:43.855300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.771 qpair failed and we were unable to recover it. 
00:31:06.771 [2024-04-24 05:26:43.855453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.771 [2024-04-24 05:26:43.855609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.771 [2024-04-24 05:26:43.855642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.771 qpair failed and we were unable to recover it. 00:31:06.771 [2024-04-24 05:26:43.855793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.771 [2024-04-24 05:26:43.855965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.771 [2024-04-24 05:26:43.855990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.771 qpair failed and we were unable to recover it. 00:31:06.771 [2024-04-24 05:26:43.856158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.771 [2024-04-24 05:26:43.856272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.771 [2024-04-24 05:26:43.856296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.771 qpair failed and we were unable to recover it. 00:31:06.771 [2024-04-24 05:26:43.856444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.771 [2024-04-24 05:26:43.856596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.771 [2024-04-24 05:26:43.856620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.771 qpair failed and we were unable to recover it. 
00:31:06.771 [2024-04-24 05:26:43.856755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.771 [2024-04-24 05:26:43.856880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.771 [2024-04-24 05:26:43.856904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.771 qpair failed and we were unable to recover it. 00:31:06.771 [2024-04-24 05:26:43.857049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.771 [2024-04-24 05:26:43.857166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.771 [2024-04-24 05:26:43.857191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.771 qpair failed and we were unable to recover it. 00:31:06.771 [2024-04-24 05:26:43.857337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.771 [2024-04-24 05:26:43.857485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.771 [2024-04-24 05:26:43.857509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.771 qpair failed and we were unable to recover it. 00:31:06.771 [2024-04-24 05:26:43.857684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.771 [2024-04-24 05:26:43.857806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.771 [2024-04-24 05:26:43.857830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.771 qpair failed and we were unable to recover it. 
00:31:06.771 [2024-04-24 05:26:43.857957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.771 [2024-04-24 05:26:43.858106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.771 [2024-04-24 05:26:43.858131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.771 qpair failed and we were unable to recover it. 00:31:06.771 [2024-04-24 05:26:43.858310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.771 [2024-04-24 05:26:43.858460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.771 [2024-04-24 05:26:43.858484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.771 qpair failed and we were unable to recover it. 00:31:06.771 [2024-04-24 05:26:43.858639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.771 [2024-04-24 05:26:43.858782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.771 [2024-04-24 05:26:43.858807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.771 qpair failed and we were unable to recover it. 00:31:06.771 [2024-04-24 05:26:43.858937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.771 [2024-04-24 05:26:43.859082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.771 [2024-04-24 05:26:43.859107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.771 qpair failed and we were unable to recover it. 
00:31:06.771 [2024-04-24 05:26:43.859226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.771 [2024-04-24 05:26:43.859376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.771 [2024-04-24 05:26:43.859400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.771 qpair failed and we were unable to recover it. 00:31:06.771 [2024-04-24 05:26:43.859551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.771 [2024-04-24 05:26:43.859691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.771 [2024-04-24 05:26:43.859717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.771 qpair failed and we were unable to recover it. 00:31:06.771 [2024-04-24 05:26:43.859872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.771 [2024-04-24 05:26:43.860020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.771 [2024-04-24 05:26:43.860046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.771 qpair failed and we were unable to recover it. 00:31:06.771 [2024-04-24 05:26:43.860196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.771 [2024-04-24 05:26:43.860350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.771 [2024-04-24 05:26:43.860375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.771 qpair failed and we were unable to recover it. 
00:31:06.771 [2024-04-24 05:26:43.860495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.771 [2024-04-24 05:26:43.860637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.771 [2024-04-24 05:26:43.860662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.771 qpair failed and we were unable to recover it.
00:31:06.771 [2024-04-24 05:26:43.860813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.771 [2024-04-24 05:26:43.860962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.771 [2024-04-24 05:26:43.860987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.771 qpair failed and we were unable to recover it.
00:31:06.771 [2024-04-24 05:26:43.861136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.772 [2024-04-24 05:26:43.861263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.772 [2024-04-24 05:26:43.861287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.772 qpair failed and we were unable to recover it.
00:31:06.772 [2024-04-24 05:26:43.861458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.772 [2024-04-24 05:26:43.861608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.772 [2024-04-24 05:26:43.861639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.772 qpair failed and we were unable to recover it.
00:31:06.772 [2024-04-24 05:26:43.861823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.772 [2024-04-24 05:26:43.861970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.772 [2024-04-24 05:26:43.861995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.772 qpair failed and we were unable to recover it.
00:31:06.772 [2024-04-24 05:26:43.862172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.772 [2024-04-24 05:26:43.862323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.772 [2024-04-24 05:26:43.862347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.772 qpair failed and we were unable to recover it.
00:31:06.772 [2024-04-24 05:26:43.862471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.772 [2024-04-24 05:26:43.862616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.772 [2024-04-24 05:26:43.862669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.772 qpair failed and we were unable to recover it.
00:31:06.772 [2024-04-24 05:26:43.862799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.772 [2024-04-24 05:26:43.862922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.772 [2024-04-24 05:26:43.862947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.772 qpair failed and we were unable to recover it.
00:31:06.772 [2024-04-24 05:26:43.863099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.772 [2024-04-24 05:26:43.863221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.772 [2024-04-24 05:26:43.863245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.772 qpair failed and we were unable to recover it.
00:31:06.772 [2024-04-24 05:26:43.863426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.772 [2024-04-24 05:26:43.863549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.772 [2024-04-24 05:26:43.863574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.772 qpair failed and we were unable to recover it.
00:31:06.772 [2024-04-24 05:26:43.863704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.772 [2024-04-24 05:26:43.863849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.772 [2024-04-24 05:26:43.863874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.772 qpair failed and we were unable to recover it.
00:31:06.772 [2024-04-24 05:26:43.864047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.772 [2024-04-24 05:26:43.864174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.772 [2024-04-24 05:26:43.864199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.772 qpair failed and we were unable to recover it.
00:31:06.772 [2024-04-24 05:26:43.864328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.772 [2024-04-24 05:26:43.864470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.772 [2024-04-24 05:26:43.864499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.772 qpair failed and we were unable to recover it.
00:31:06.772 [2024-04-24 05:26:43.864639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.772 [2024-04-24 05:26:43.864798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.772 [2024-04-24 05:26:43.864823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.772 qpair failed and we were unable to recover it.
00:31:06.772 [2024-04-24 05:26:43.864974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.772 [2024-04-24 05:26:43.865107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.772 [2024-04-24 05:26:43.865133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.772 qpair failed and we were unable to recover it.
00:31:06.772 [2024-04-24 05:26:43.865257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.772 [2024-04-24 05:26:43.865407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.772 [2024-04-24 05:26:43.865432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.772 qpair failed and we were unable to recover it.
00:31:06.772 [2024-04-24 05:26:43.865553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.772 [2024-04-24 05:26:43.865705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.772 [2024-04-24 05:26:43.865730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.772 qpair failed and we were unable to recover it.
00:31:06.772 [2024-04-24 05:26:43.865880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.772 [2024-04-24 05:26:43.866003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.772 [2024-04-24 05:26:43.866027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.772 qpair failed and we were unable to recover it.
00:31:06.772 [2024-04-24 05:26:43.866182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.772 [2024-04-24 05:26:43.866326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.772 [2024-04-24 05:26:43.866350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.772 qpair failed and we were unable to recover it.
00:31:06.772 [2024-04-24 05:26:43.866495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.772 [2024-04-24 05:26:43.866623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.772 [2024-04-24 05:26:43.866659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.772 qpair failed and we were unable to recover it.
00:31:06.772 [2024-04-24 05:26:43.866806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.772 [2024-04-24 05:26:43.866931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.772 [2024-04-24 05:26:43.866956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.772 qpair failed and we were unable to recover it.
00:31:06.772 [2024-04-24 05:26:43.867130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.772 [2024-04-24 05:26:43.867252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.772 [2024-04-24 05:26:43.867276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.772 qpair failed and we were unable to recover it.
00:31:06.772 [2024-04-24 05:26:43.867459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.772 [2024-04-24 05:26:43.867604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.772 [2024-04-24 05:26:43.867637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.772 qpair failed and we were unable to recover it.
00:31:06.772 [2024-04-24 05:26:43.867788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.772 [2024-04-24 05:26:43.867936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.772 [2024-04-24 05:26:43.867964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.772 qpair failed and we were unable to recover it.
00:31:06.772 [2024-04-24 05:26:43.868117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.772 [2024-04-24 05:26:43.868236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.773 [2024-04-24 05:26:43.868261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.773 qpair failed and we were unable to recover it.
00:31:06.773 [2024-04-24 05:26:43.868415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.773 [2024-04-24 05:26:43.868587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.773 [2024-04-24 05:26:43.868616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.773 qpair failed and we were unable to recover it.
00:31:06.773 [2024-04-24 05:26:43.868795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.773 [2024-04-24 05:26:43.868940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.773 [2024-04-24 05:26:43.868964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.773 qpair failed and we were unable to recover it.
00:31:06.773 [2024-04-24 05:26:43.869113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.773 [2024-04-24 05:26:43.869264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.773 [2024-04-24 05:26:43.869289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.773 qpair failed and we were unable to recover it.
00:31:06.773 [2024-04-24 05:26:43.869472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.773 [2024-04-24 05:26:43.869648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.773 [2024-04-24 05:26:43.869690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.773 qpair failed and we were unable to recover it.
00:31:06.773 [2024-04-24 05:26:43.869808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.773 [2024-04-24 05:26:43.869935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.773 [2024-04-24 05:26:43.869959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.773 qpair failed and we were unable to recover it.
00:31:06.773 [2024-04-24 05:26:43.870124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.773 [2024-04-24 05:26:43.870272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.773 [2024-04-24 05:26:43.870298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.773 qpair failed and we were unable to recover it.
00:31:06.773 [2024-04-24 05:26:43.870427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.773 [2024-04-24 05:26:43.870573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.773 [2024-04-24 05:26:43.870597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.773 qpair failed and we were unable to recover it.
00:31:06.773 [2024-04-24 05:26:43.870749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.773 [2024-04-24 05:26:43.870875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.773 [2024-04-24 05:26:43.870902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.773 qpair failed and we were unable to recover it.
00:31:06.773 [2024-04-24 05:26:43.871055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.773 [2024-04-24 05:26:43.871173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.773 [2024-04-24 05:26:43.871198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.773 qpair failed and we were unable to recover it.
00:31:06.773 [2024-04-24 05:26:43.871352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.773 [2024-04-24 05:26:43.871479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.773 [2024-04-24 05:26:43.871505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.773 qpair failed and we were unable to recover it.
00:31:06.773 [2024-04-24 05:26:43.871683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.773 [2024-04-24 05:26:43.871830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.773 [2024-04-24 05:26:43.871855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.773 qpair failed and we were unable to recover it.
00:31:06.773 [2024-04-24 05:26:43.871982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.773 [2024-04-24 05:26:43.872127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.773 [2024-04-24 05:26:43.872152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.773 qpair failed and we were unable to recover it.
00:31:06.773 [2024-04-24 05:26:43.872275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.773 [2024-04-24 05:26:43.872419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.773 [2024-04-24 05:26:43.872443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.773 qpair failed and we were unable to recover it.
00:31:06.773 [2024-04-24 05:26:43.872596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.773 [2024-04-24 05:26:43.872752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.773 [2024-04-24 05:26:43.872777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.773 qpair failed and we were unable to recover it.
00:31:06.773 [2024-04-24 05:26:43.872923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.773 [2024-04-24 05:26:43.873099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.773 [2024-04-24 05:26:43.873123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.773 qpair failed and we were unable to recover it.
00:31:06.773 [2024-04-24 05:26:43.873304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.773 [2024-04-24 05:26:43.873430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.773 [2024-04-24 05:26:43.873472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.773 qpair failed and we were unable to recover it.
00:31:06.773 [2024-04-24 05:26:43.873639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.773 [2024-04-24 05:26:43.873812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.773 [2024-04-24 05:26:43.873837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.773 qpair failed and we were unable to recover it.
00:31:06.773 [2024-04-24 05:26:43.873987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.773 [2024-04-24 05:26:43.874135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.773 [2024-04-24 05:26:43.874160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.773 qpair failed and we were unable to recover it.
00:31:06.773 [2024-04-24 05:26:43.874288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.773 [2024-04-24 05:26:43.874445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.773 [2024-04-24 05:26:43.874470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.773 qpair failed and we were unable to recover it.
00:31:06.773 [2024-04-24 05:26:43.874616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.773 [2024-04-24 05:26:43.874750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.773 [2024-04-24 05:26:43.874775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.773 qpair failed and we were unable to recover it.
00:31:06.774 [2024-04-24 05:26:43.874929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.774 [2024-04-24 05:26:43.875070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.774 [2024-04-24 05:26:43.875095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.774 qpair failed and we were unable to recover it.
00:31:06.774 [2024-04-24 05:26:43.875249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.774 [2024-04-24 05:26:43.875376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.774 [2024-04-24 05:26:43.875400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.774 qpair failed and we were unable to recover it.
00:31:06.774 [2024-04-24 05:26:43.875548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.774 [2024-04-24 05:26:43.875672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.774 [2024-04-24 05:26:43.875697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.774 qpair failed and we were unable to recover it.
00:31:06.774 [2024-04-24 05:26:43.875851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.774 [2024-04-24 05:26:43.875972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.774 [2024-04-24 05:26:43.875996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.774 qpair failed and we were unable to recover it.
00:31:06.774 [2024-04-24 05:26:43.876140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.774 [2024-04-24 05:26:43.876285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.774 [2024-04-24 05:26:43.876311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.774 qpair failed and we were unable to recover it.
00:31:06.774 [2024-04-24 05:26:43.876442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.774 [2024-04-24 05:26:43.876620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.774 [2024-04-24 05:26:43.876651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.774 qpair failed and we were unable to recover it.
00:31:06.774 [2024-04-24 05:26:43.876772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.774 [2024-04-24 05:26:43.876922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.774 [2024-04-24 05:26:43.876946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.774 qpair failed and we were unable to recover it.
00:31:06.774 [2024-04-24 05:26:43.877096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.774 [2024-04-24 05:26:43.877237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.774 [2024-04-24 05:26:43.877261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.774 qpair failed and we were unable to recover it.
00:31:06.774 [2024-04-24 05:26:43.877410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.774 [2024-04-24 05:26:43.877536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.774 [2024-04-24 05:26:43.877562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.774 qpair failed and we were unable to recover it.
00:31:06.774 [2024-04-24 05:26:43.877747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.774 [2024-04-24 05:26:43.877921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.774 [2024-04-24 05:26:43.877949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.774 qpair failed and we were unable to recover it.
00:31:06.774 [2024-04-24 05:26:43.878135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.774 [2024-04-24 05:26:43.878288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.774 [2024-04-24 05:26:43.878313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.774 qpair failed and we were unable to recover it.
00:31:06.774 [2024-04-24 05:26:43.878441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.774 [2024-04-24 05:26:43.878583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.774 [2024-04-24 05:26:43.878610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.774 qpair failed and we were unable to recover it.
00:31:06.774 [2024-04-24 05:26:43.878795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.774 [2024-04-24 05:26:43.878945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.774 [2024-04-24 05:26:43.878971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.774 qpair failed and we were unable to recover it.
00:31:06.774 [2024-04-24 05:26:43.879102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.774 [2024-04-24 05:26:43.879258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.774 [2024-04-24 05:26:43.879283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.774 qpair failed and we were unable to recover it.
00:31:06.774 [2024-04-24 05:26:43.879403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.774 [2024-04-24 05:26:43.879550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.774 [2024-04-24 05:26:43.879575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:06.774 qpair failed and we were unable to recover it.
00:31:06.774 [2024-04-24 05:26:43.879707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.774 [2024-04-24 05:26:43.879882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.774 [2024-04-24 05:26:43.879906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.774 qpair failed and we were unable to recover it.
00:31:06.774 [2024-04-24 05:26:43.880064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.774 [2024-04-24 05:26:43.880231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.774 [2024-04-24 05:26:43.880258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.774 qpair failed and we were unable to recover it.
00:31:06.774 [2024-04-24 05:26:43.880447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.774 [2024-04-24 05:26:43.880623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.774 [2024-04-24 05:26:43.880673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.774 qpair failed and we were unable to recover it.
00:31:06.774 [2024-04-24 05:26:43.880821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.774 [2024-04-24 05:26:43.880953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.774 [2024-04-24 05:26:43.880977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.774 qpair failed and we were unable to recover it.
00:31:06.774 [2024-04-24 05:26:43.881103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.774 [2024-04-24 05:26:43.881253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.774 [2024-04-24 05:26:43.881279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.774 qpair failed and we were unable to recover it.
00:31:06.774 [2024-04-24 05:26:43.881454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.774 [2024-04-24 05:26:43.881577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.774 [2024-04-24 05:26:43.881602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.774 qpair failed and we were unable to recover it.
00:31:06.774 [2024-04-24 05:26:43.881727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.775 [2024-04-24 05:26:43.881856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.775 [2024-04-24 05:26:43.881881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.775 qpair failed and we were unable to recover it.
00:31:06.775 [2024-04-24 05:26:43.882001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.775 [2024-04-24 05:26:43.882155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.775 [2024-04-24 05:26:43.882179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.775 qpair failed and we were unable to recover it.
00:31:06.775 [2024-04-24 05:26:43.882357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.775 [2024-04-24 05:26:43.882525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.775 [2024-04-24 05:26:43.882549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.775 qpair failed and we were unable to recover it.
00:31:06.775 [2024-04-24 05:26:43.882672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.775 [2024-04-24 05:26:43.882822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.775 [2024-04-24 05:26:43.882847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.775 qpair failed and we were unable to recover it.
00:31:06.775 [2024-04-24 05:26:43.883001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.775 [2024-04-24 05:26:43.883149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.775 [2024-04-24 05:26:43.883173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.775 qpair failed and we were unable to recover it.
00:31:06.775 [2024-04-24 05:26:43.883298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.775 [2024-04-24 05:26:43.883425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.775 [2024-04-24 05:26:43.883449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.775 qpair failed and we were unable to recover it.
00:31:06.775 [2024-04-24 05:26:43.883594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.775 [2024-04-24 05:26:43.883753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.775 [2024-04-24 05:26:43.883778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.775 qpair failed and we were unable to recover it. 00:31:06.775 [2024-04-24 05:26:43.883909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.775 [2024-04-24 05:26:43.884030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.775 [2024-04-24 05:26:43.884054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.775 qpair failed and we were unable to recover it. 00:31:06.775 [2024-04-24 05:26:43.884204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.775 [2024-04-24 05:26:43.884350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.775 [2024-04-24 05:26:43.884374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.775 qpair failed and we were unable to recover it. 00:31:06.775 [2024-04-24 05:26:43.884514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.775 [2024-04-24 05:26:43.884695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.775 [2024-04-24 05:26:43.884720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.775 qpair failed and we were unable to recover it. 
00:31:06.775 [2024-04-24 05:26:43.884874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.775 [2024-04-24 05:26:43.885023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.775 [2024-04-24 05:26:43.885047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.775 qpair failed and we were unable to recover it. 00:31:06.775 [2024-04-24 05:26:43.885169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.775 [2024-04-24 05:26:43.885291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.775 [2024-04-24 05:26:43.885315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.775 qpair failed and we were unable to recover it. 00:31:06.775 [2024-04-24 05:26:43.885434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.775 [2024-04-24 05:26:43.885617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.775 [2024-04-24 05:26:43.885647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.775 qpair failed and we were unable to recover it. 00:31:06.775 [2024-04-24 05:26:43.885778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.775 [2024-04-24 05:26:43.885895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.775 [2024-04-24 05:26:43.885919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.775 qpair failed and we were unable to recover it. 
00:31:06.775 [2024-04-24 05:26:43.886054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.775 [2024-04-24 05:26:43.886201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.775 [2024-04-24 05:26:43.886225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.775 qpair failed and we were unable to recover it. 00:31:06.775 [2024-04-24 05:26:43.886353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.775 [2024-04-24 05:26:43.886470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.775 [2024-04-24 05:26:43.886495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.775 qpair failed and we were unable to recover it. 00:31:06.775 [2024-04-24 05:26:43.886644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.775 [2024-04-24 05:26:43.886790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.775 [2024-04-24 05:26:43.886815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.775 qpair failed and we were unable to recover it. 00:31:06.775 [2024-04-24 05:26:43.886939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.775 [2024-04-24 05:26:43.887090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.775 [2024-04-24 05:26:43.887115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.775 qpair failed and we were unable to recover it. 
00:31:06.775 [2024-04-24 05:26:43.887265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.775 [2024-04-24 05:26:43.887381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.775 [2024-04-24 05:26:43.887409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.775 qpair failed and we were unable to recover it. 00:31:06.775 [2024-04-24 05:26:43.887537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.775 [2024-04-24 05:26:43.887659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.775 [2024-04-24 05:26:43.887685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.775 qpair failed and we were unable to recover it. 00:31:06.775 [2024-04-24 05:26:43.887820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.775 [2024-04-24 05:26:43.887941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.887966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.776 qpair failed and we were unable to recover it. 00:31:06.776 [2024-04-24 05:26:43.888091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.888215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.888239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.776 qpair failed and we were unable to recover it. 
00:31:06.776 [2024-04-24 05:26:43.888403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.888549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.888573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.776 qpair failed and we were unable to recover it. 00:31:06.776 [2024-04-24 05:26:43.888736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.888880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.888905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.776 qpair failed and we were unable to recover it. 00:31:06.776 [2024-04-24 05:26:43.889021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.889197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.889222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.776 qpair failed and we were unable to recover it. 00:31:06.776 [2024-04-24 05:26:43.889341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.889462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.889488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.776 qpair failed and we were unable to recover it. 
00:31:06.776 [2024-04-24 05:26:43.889658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.889806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.889831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.776 qpair failed and we were unable to recover it. 00:31:06.776 [2024-04-24 05:26:43.889967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.890119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.890144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.776 qpair failed and we were unable to recover it. 00:31:06.776 [2024-04-24 05:26:43.890294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.890418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.890443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.776 qpair failed and we were unable to recover it. 00:31:06.776 [2024-04-24 05:26:43.890596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.890754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.890779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.776 qpair failed and we were unable to recover it. 
00:31:06.776 [2024-04-24 05:26:43.890896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.891047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.891071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.776 qpair failed and we were unable to recover it. 00:31:06.776 [2024-04-24 05:26:43.891219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.891339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.891363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.776 qpair failed and we were unable to recover it. 00:31:06.776 [2024-04-24 05:26:43.891513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.891640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.891665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.776 qpair failed and we were unable to recover it. 00:31:06.776 [2024-04-24 05:26:43.891854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.892024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.892049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.776 qpair failed and we were unable to recover it. 
00:31:06.776 [2024-04-24 05:26:43.892194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.892340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.892364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.776 qpair failed and we were unable to recover it. 00:31:06.776 [2024-04-24 05:26:43.892480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.892637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.892662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.776 qpair failed and we were unable to recover it. 00:31:06.776 [2024-04-24 05:26:43.892785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.892912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.892937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.776 qpair failed and we were unable to recover it. 00:31:06.776 [2024-04-24 05:26:43.893087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.893238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.893263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.776 qpair failed and we were unable to recover it. 
00:31:06.776 [2024-04-24 05:26:43.893434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.893557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.893582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.776 qpair failed and we were unable to recover it. 00:31:06.776 [2024-04-24 05:26:43.893710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.893870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.893895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.776 qpair failed and we were unable to recover it. 00:31:06.776 [2024-04-24 05:26:43.894006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.894159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.894183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.776 qpair failed and we were unable to recover it. 00:31:06.776 [2024-04-24 05:26:43.894299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.894422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.894446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.776 qpair failed and we were unable to recover it. 
00:31:06.776 [2024-04-24 05:26:43.894594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.894735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.894762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.776 qpair failed and we were unable to recover it. 00:31:06.776 [2024-04-24 05:26:43.894890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.895016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.895041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.776 qpair failed and we were unable to recover it. 00:31:06.776 [2024-04-24 05:26:43.895186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.895306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.895331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.776 qpair failed and we were unable to recover it. 00:31:06.776 [2024-04-24 05:26:43.895527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.895676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.776 [2024-04-24 05:26:43.895701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.776 qpair failed and we were unable to recover it. 
00:31:06.776 [2024-04-24 05:26:43.895834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.776 [2024-04-24 05:26:43.895952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.776 [2024-04-24 05:26:43.895977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.776 qpair failed and we were unable to recover it.
00:31:06.776 [2024-04-24 05:26:43.896123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.776 [2024-04-24 05:26:43.896274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.776 [2024-04-24 05:26:43.896298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.776 qpair failed and we were unable to recover it.
00:31:06.776 [2024-04-24 05:26:43.896432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.776 [2024-04-24 05:26:43.896585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.776 [2024-04-24 05:26:43.896609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.776 qpair failed and we were unable to recover it.
00:31:06.776 [2024-04-24 05:26:43.896781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.777 [2024-04-24 05:26:43.896958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.777 [2024-04-24 05:26:43.896991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.777 qpair failed and we were unable to recover it.
00:31:06.777 [2024-04-24 05:26:43.897195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.777 [2024-04-24 05:26:43.897378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.777 [2024-04-24 05:26:43.897428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.777 qpair failed and we were unable to recover it.
00:31:06.777 [2024-04-24 05:26:43.897643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.777 [2024-04-24 05:26:43.897808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.777 [2024-04-24 05:26:43.897839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.777 qpair failed and we were unable to recover it.
00:31:06.777 [2024-04-24 05:26:43.898051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.777 [2024-04-24 05:26:43.898283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.777 [2024-04-24 05:26:43.898330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.777 qpair failed and we were unable to recover it.
00:31:06.777 [2024-04-24 05:26:43.898535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.777 [2024-04-24 05:26:43.898703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.777 [2024-04-24 05:26:43.898734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.777 qpair failed and we were unable to recover it.
00:31:06.777 [2024-04-24 05:26:43.898928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.777 [2024-04-24 05:26:43.899127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.777 [2024-04-24 05:26:43.899175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d5c000b90 with addr=10.0.0.2, port=4420
00:31:06.777 qpair failed and we were unable to recover it.
00:31:06.777 [2024-04-24 05:26:43.899311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.777 [2024-04-24 05:26:43.899436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.777 [2024-04-24 05:26:43.899461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.777 qpair failed and we were unable to recover it.
00:31:06.777 [2024-04-24 05:26:43.899581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.777 [2024-04-24 05:26:43.899731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.777 [2024-04-24 05:26:43.899774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.777 qpair failed and we were unable to recover it.
00:31:06.777 [2024-04-24 05:26:43.899961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.777 [2024-04-24 05:26:43.900153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.777 [2024-04-24 05:26:43.900181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.777 qpair failed and we were unable to recover it.
00:31:06.778 [2024-04-24 05:26:43.909780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.778 [2024-04-24 05:26:43.909899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.778 [2024-04-24 05:26:43.909923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.778 qpair failed and we were unable to recover it. 00:31:06.778 [2024-04-24 05:26:43.910081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.778 [2024-04-24 05:26:43.910211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.778 [2024-04-24 05:26:43.910240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.778 qpair failed and we were unable to recover it. 00:31:06.778 [2024-04-24 05:26:43.910410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.778 [2024-04-24 05:26:43.910577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.778 [2024-04-24 05:26:43.910604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.778 qpair failed and we were unable to recover it. 00:31:06.778 [2024-04-24 05:26:43.910755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.778 [2024-04-24 05:26:43.910929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.778 [2024-04-24 05:26:43.910953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.778 qpair failed and we were unable to recover it. 
00:31:06.778 [2024-04-24 05:26:43.911073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.778 [2024-04-24 05:26:43.911198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.778 [2024-04-24 05:26:43.911222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.778 qpair failed and we were unable to recover it. 00:31:06.778 [2024-04-24 05:26:43.911398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.778 [2024-04-24 05:26:43.911551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.778 [2024-04-24 05:26:43.911575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.778 qpair failed and we were unable to recover it. 00:31:06.778 [2024-04-24 05:26:43.911723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.778 [2024-04-24 05:26:43.911851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.778 [2024-04-24 05:26:43.911894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.778 qpair failed and we were unable to recover it. 00:31:06.778 [2024-04-24 05:26:43.912085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.778 [2024-04-24 05:26:43.912219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.778 [2024-04-24 05:26:43.912246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.778 qpair failed and we were unable to recover it. 
00:31:06.778 [2024-04-24 05:26:43.912440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.778 [2024-04-24 05:26:43.912604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.778 [2024-04-24 05:26:43.912640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.778 qpair failed and we were unable to recover it. 00:31:06.778 [2024-04-24 05:26:43.912802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.778 [2024-04-24 05:26:43.912920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.778 [2024-04-24 05:26:43.912944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.778 qpair failed and we were unable to recover it. 00:31:06.778 [2024-04-24 05:26:43.913068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.778 [2024-04-24 05:26:43.913195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.778 [2024-04-24 05:26:43.913235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.778 qpair failed and we were unable to recover it. 00:31:06.778 [2024-04-24 05:26:43.913464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.778 [2024-04-24 05:26:43.913622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.778 [2024-04-24 05:26:43.913653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.778 qpair failed and we were unable to recover it. 
00:31:06.778 [2024-04-24 05:26:43.913783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.778 [2024-04-24 05:26:43.913900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.778 [2024-04-24 05:26:43.913925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.778 qpair failed and we were unable to recover it. 00:31:06.778 [2024-04-24 05:26:43.914081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.778 [2024-04-24 05:26:43.914200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.778 [2024-04-24 05:26:43.914224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.778 qpair failed and we were unable to recover it. 00:31:06.778 [2024-04-24 05:26:43.914391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.778 [2024-04-24 05:26:43.914551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.778 [2024-04-24 05:26:43.914575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.778 qpair failed and we were unable to recover it. 00:31:06.778 [2024-04-24 05:26:43.914724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.778 [2024-04-24 05:26:43.914840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.778 [2024-04-24 05:26:43.914865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.778 qpair failed and we were unable to recover it. 
00:31:06.778 [2024-04-24 05:26:43.915006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.778 [2024-04-24 05:26:43.915144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.778 [2024-04-24 05:26:43.915186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.778 qpair failed and we were unable to recover it. 00:31:06.778 [2024-04-24 05:26:43.915336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.778 [2024-04-24 05:26:43.915470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.778 [2024-04-24 05:26:43.915495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.778 qpair failed and we were unable to recover it. 00:31:06.778 [2024-04-24 05:26:43.915635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.778 [2024-04-24 05:26:43.915776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.778 [2024-04-24 05:26:43.915800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.778 qpair failed and we were unable to recover it. 00:31:06.778 [2024-04-24 05:26:43.915948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.778 [2024-04-24 05:26:43.916097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.778 [2024-04-24 05:26:43.916121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.778 qpair failed and we were unable to recover it. 
00:31:06.778 [2024-04-24 05:26:43.916296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.778 [2024-04-24 05:26:43.916435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.778 [2024-04-24 05:26:43.916460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.778 qpair failed and we were unable to recover it. 00:31:06.778 [2024-04-24 05:26:43.916585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.778 [2024-04-24 05:26:43.916732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.779 [2024-04-24 05:26:43.916757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.779 qpair failed and we were unable to recover it. 00:31:06.779 [2024-04-24 05:26:43.916908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.779 [2024-04-24 05:26:43.917062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.779 [2024-04-24 05:26:43.917086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.779 qpair failed and we were unable to recover it. 00:31:06.779 [2024-04-24 05:26:43.917256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.779 [2024-04-24 05:26:43.917384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.779 [2024-04-24 05:26:43.917411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.779 qpair failed and we were unable to recover it. 
00:31:06.779 [2024-04-24 05:26:43.917568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.779 [2024-04-24 05:26:43.917745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.779 [2024-04-24 05:26:43.917771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.779 qpair failed and we were unable to recover it. 00:31:06.779 [2024-04-24 05:26:43.917921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.779 [2024-04-24 05:26:43.918048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.779 [2024-04-24 05:26:43.918073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.779 qpair failed and we were unable to recover it. 00:31:06.779 [2024-04-24 05:26:43.918273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.779 [2024-04-24 05:26:43.918433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.779 [2024-04-24 05:26:43.918460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.779 qpair failed and we were unable to recover it. 00:31:06.779 [2024-04-24 05:26:43.918635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.779 [2024-04-24 05:26:43.918787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.779 [2024-04-24 05:26:43.918812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.779 qpair failed and we were unable to recover it. 
00:31:06.779 [2024-04-24 05:26:43.918961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.779 [2024-04-24 05:26:43.919082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.779 [2024-04-24 05:26:43.919106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.779 qpair failed and we were unable to recover it. 00:31:06.779 [2024-04-24 05:26:43.919250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.779 [2024-04-24 05:26:43.919418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.779 [2024-04-24 05:26:43.919445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.779 qpair failed and we were unable to recover it. 00:31:06.779 [2024-04-24 05:26:43.919640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.779 [2024-04-24 05:26:43.919816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.779 [2024-04-24 05:26:43.919840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.779 qpair failed and we were unable to recover it. 00:31:06.779 [2024-04-24 05:26:43.920002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.779 [2024-04-24 05:26:43.920127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.779 [2024-04-24 05:26:43.920168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.779 qpair failed and we were unable to recover it. 
00:31:06.779 [2024-04-24 05:26:43.920370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.779 [2024-04-24 05:26:43.920493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.779 [2024-04-24 05:26:43.920520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.779 qpair failed and we were unable to recover it. 00:31:06.779 [2024-04-24 05:26:43.920678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.779 [2024-04-24 05:26:43.920855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.779 [2024-04-24 05:26:43.920880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.779 qpair failed and we were unable to recover it. 00:31:06.779 [2024-04-24 05:26:43.921039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.779 [2024-04-24 05:26:43.921161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.779 [2024-04-24 05:26:43.921187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.779 qpair failed and we were unable to recover it. 00:31:06.779 [2024-04-24 05:26:43.921362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.779 [2024-04-24 05:26:43.921519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.779 [2024-04-24 05:26:43.921546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.779 qpair failed and we were unable to recover it. 
00:31:06.779 [2024-04-24 05:26:43.921712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.779 [2024-04-24 05:26:43.921887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.779 [2024-04-24 05:26:43.921912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.779 qpair failed and we were unable to recover it. 00:31:06.779 [2024-04-24 05:26:43.922084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.779 [2024-04-24 05:26:43.922243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.779 [2024-04-24 05:26:43.922270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.779 qpair failed and we were unable to recover it. 00:31:06.779 [2024-04-24 05:26:43.922514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.779 [2024-04-24 05:26:43.922637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.779 [2024-04-24 05:26:43.922662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.779 qpair failed and we were unable to recover it. 00:31:06.779 [2024-04-24 05:26:43.922855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.779 [2024-04-24 05:26:43.923000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.779 [2024-04-24 05:26:43.923027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.779 qpair failed and we were unable to recover it. 
00:31:06.779 [2024-04-24 05:26:43.923177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.779 [2024-04-24 05:26:43.923363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.779 [2024-04-24 05:26:43.923392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.779 qpair failed and we were unable to recover it. 00:31:06.779 [2024-04-24 05:26:43.923538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.779 [2024-04-24 05:26:43.923718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.779 [2024-04-24 05:26:43.923743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.779 qpair failed and we were unable to recover it. 00:31:06.779 [2024-04-24 05:26:43.923865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.779 [2024-04-24 05:26:43.924029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.779 [2024-04-24 05:26:43.924054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.779 qpair failed and we were unable to recover it. 00:31:06.779 [2024-04-24 05:26:43.924229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.779 [2024-04-24 05:26:43.924452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.779 [2024-04-24 05:26:43.924493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.779 qpair failed and we were unable to recover it. 
00:31:06.779 [2024-04-24 05:26:43.924664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.780 [2024-04-24 05:26:43.924815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.780 [2024-04-24 05:26:43.924843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.780 qpair failed and we were unable to recover it. 00:31:06.780 [2024-04-24 05:26:43.924966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.780 [2024-04-24 05:26:43.925092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.780 [2024-04-24 05:26:43.925116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.780 qpair failed and we were unable to recover it. 00:31:06.780 [2024-04-24 05:26:43.925279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.780 [2024-04-24 05:26:43.925439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.780 [2024-04-24 05:26:43.925463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.780 qpair failed and we were unable to recover it. 00:31:06.780 [2024-04-24 05:26:43.925611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.780 [2024-04-24 05:26:43.925781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.780 [2024-04-24 05:26:43.925806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.780 qpair failed and we were unable to recover it. 
00:31:06.780 [2024-04-24 05:26:43.925929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.780 [2024-04-24 05:26:43.926076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.780 [2024-04-24 05:26:43.926101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.780 qpair failed and we were unable to recover it. 00:31:06.780 [2024-04-24 05:26:43.926252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.780 [2024-04-24 05:26:43.926400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.780 [2024-04-24 05:26:43.926425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.780 qpair failed and we were unable to recover it. 00:31:06.780 [2024-04-24 05:26:43.926580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.780 [2024-04-24 05:26:43.926726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.780 [2024-04-24 05:26:43.926752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.780 qpair failed and we were unable to recover it. 00:31:06.780 [2024-04-24 05:26:43.926927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.780 [2024-04-24 05:26:43.927070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.780 [2024-04-24 05:26:43.927094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.780 qpair failed and we were unable to recover it. 
00:31:06.780 [2024-04-24 05:26:43.927234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.780 [2024-04-24 05:26:43.927356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.780 [2024-04-24 05:26:43.927380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.780 qpair failed and we were unable to recover it. 00:31:06.780 [2024-04-24 05:26:43.927557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.780 [2024-04-24 05:26:43.927711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.780 [2024-04-24 05:26:43.927736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.780 qpair failed and we were unable to recover it. 00:31:06.780 [2024-04-24 05:26:43.927864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.780 [2024-04-24 05:26:43.927988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.780 [2024-04-24 05:26:43.928012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.780 qpair failed and we were unable to recover it. 00:31:06.780 [2024-04-24 05:26:43.928161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.780 [2024-04-24 05:26:43.928288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.780 [2024-04-24 05:26:43.928312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.780 qpair failed and we were unable to recover it. 
00:31:06.780 [2024-04-24 05:26:43.928452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.780 [2024-04-24 05:26:43.928608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.780 [2024-04-24 05:26:43.928643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.780 qpair failed and we were unable to recover it.
00:31:06.780 [2024-04-24 05:26:43.928812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.780 [2024-04-24 05:26:43.928989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.780 [2024-04-24 05:26:43.929013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.780 qpair failed and we were unable to recover it.
00:31:06.780 [2024-04-24 05:26:43.929141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.780 [2024-04-24 05:26:43.929317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.780 [2024-04-24 05:26:43.929342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.780 qpair failed and we were unable to recover it.
00:31:06.780 [2024-04-24 05:26:43.929513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.780 [2024-04-24 05:26:43.929626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.780 [2024-04-24 05:26:43.929656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.780 qpair failed and we were unable to recover it.
00:31:06.780 [2024-04-24 05:26:43.929800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.780 [2024-04-24 05:26:43.929952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.780 [2024-04-24 05:26:43.929977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.780 qpair failed and we were unable to recover it.
00:31:06.780 [2024-04-24 05:26:43.930127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.780 [2024-04-24 05:26:43.930275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.780 [2024-04-24 05:26:43.930299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.780 qpair failed and we were unable to recover it.
00:31:06.780 [2024-04-24 05:26:43.930414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.780 [2024-04-24 05:26:43.930565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.780 [2024-04-24 05:26:43.930589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.780 qpair failed and we were unable to recover it.
00:31:06.780 [2024-04-24 05:26:43.930758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.780 [2024-04-24 05:26:43.930901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.780 [2024-04-24 05:26:43.930926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.780 qpair failed and we were unable to recover it.
00:31:06.780 [2024-04-24 05:26:43.931077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.780 [2024-04-24 05:26:43.931198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.780 [2024-04-24 05:26:43.931222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.780 qpair failed and we were unable to recover it.
00:31:06.780 [2024-04-24 05:26:43.931379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.780 [2024-04-24 05:26:43.931528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.780 [2024-04-24 05:26:43.931552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.780 qpair failed and we were unable to recover it.
00:31:06.780 [2024-04-24 05:26:43.931704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.780 [2024-04-24 05:26:43.931856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.780 [2024-04-24 05:26:43.931881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.780 qpair failed and we were unable to recover it.
00:31:06.780 [2024-04-24 05:26:43.932032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.780 [2024-04-24 05:26:43.932217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.780 [2024-04-24 05:26:43.932241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.780 qpair failed and we were unable to recover it.
00:31:06.780 [2024-04-24 05:26:43.932393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.780 [2024-04-24 05:26:43.932530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.780 [2024-04-24 05:26:43.932554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.780 qpair failed and we were unable to recover it.
00:31:06.780 [2024-04-24 05:26:43.932681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.780 [2024-04-24 05:26:43.932857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.780 [2024-04-24 05:26:43.932882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.780 qpair failed and we were unable to recover it.
00:31:06.780 [2024-04-24 05:26:43.933033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.780 [2024-04-24 05:26:43.933158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.780 [2024-04-24 05:26:43.933182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.780 qpair failed and we were unable to recover it.
00:31:06.780 [2024-04-24 05:26:43.933303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.780 [2024-04-24 05:26:43.933473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.780 [2024-04-24 05:26:43.933498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.780 qpair failed and we were unable to recover it.
00:31:06.780 [2024-04-24 05:26:43.933621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.780 [2024-04-24 05:26:43.933820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.780 [2024-04-24 05:26:43.933845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.780 qpair failed and we were unable to recover it.
00:31:06.780 [2024-04-24 05:26:43.933973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.780 [2024-04-24 05:26:43.934094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.934120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.781 qpair failed and we were unable to recover it.
00:31:06.781 [2024-04-24 05:26:43.934244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.934371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.934395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.781 qpair failed and we were unable to recover it.
00:31:06.781 [2024-04-24 05:26:43.934549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.934695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.934721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.781 qpair failed and we were unable to recover it.
00:31:06.781 [2024-04-24 05:26:43.934844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.935018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.935042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.781 qpair failed and we were unable to recover it.
00:31:06.781 [2024-04-24 05:26:43.935217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.935353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.935377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.781 qpair failed and we were unable to recover it.
00:31:06.781 [2024-04-24 05:26:43.935531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.935658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.935683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.781 qpair failed and we were unable to recover it.
00:31:06.781 [2024-04-24 05:26:43.935831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.935981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.936006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.781 qpair failed and we were unable to recover it.
00:31:06.781 [2024-04-24 05:26:43.936154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.936304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.936328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.781 qpair failed and we were unable to recover it.
00:31:06.781 [2024-04-24 05:26:43.936474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.936639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.936664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.781 qpair failed and we were unable to recover it.
00:31:06.781 [2024-04-24 05:26:43.936781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.936925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.936950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.781 qpair failed and we were unable to recover it.
00:31:06.781 [2024-04-24 05:26:43.937069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.937245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.937269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.781 qpair failed and we were unable to recover it.
00:31:06.781 [2024-04-24 05:26:43.937445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.937565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.937589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.781 qpair failed and we were unable to recover it.
00:31:06.781 [2024-04-24 05:26:43.937779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.937958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.937983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.781 qpair failed and we were unable to recover it.
00:31:06.781 [2024-04-24 05:26:43.938142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.938304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.938328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.781 qpair failed and we were unable to recover it.
00:31:06.781 [2024-04-24 05:26:43.938509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.938689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.938714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.781 qpair failed and we were unable to recover it.
00:31:06.781 [2024-04-24 05:26:43.938869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.939021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.939049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.781 qpair failed and we were unable to recover it.
00:31:06.781 [2024-04-24 05:26:43.939185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.939334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.939359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.781 qpair failed and we were unable to recover it.
00:31:06.781 [2024-04-24 05:26:43.939509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.939653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.939693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.781 qpair failed and we were unable to recover it.
00:31:06.781 [2024-04-24 05:26:43.939842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.939995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.940020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.781 qpair failed and we were unable to recover it.
00:31:06.781 [2024-04-24 05:26:43.940172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.940352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.940377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.781 qpair failed and we were unable to recover it.
00:31:06.781 [2024-04-24 05:26:43.940522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.940671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.940696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.781 qpair failed and we were unable to recover it.
00:31:06.781 [2024-04-24 05:26:43.940850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.940980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.941005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.781 qpair failed and we were unable to recover it.
00:31:06.781 [2024-04-24 05:26:43.941127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.941248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.941278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.781 qpair failed and we were unable to recover it.
00:31:06.781 [2024-04-24 05:26:43.941429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.941580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.941605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.781 qpair failed and we were unable to recover it.
00:31:06.781 [2024-04-24 05:26:43.941746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.941868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.941892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.781 qpair failed and we were unable to recover it.
00:31:06.781 [2024-04-24 05:26:43.942037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.942196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.942220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.781 qpair failed and we were unable to recover it.
00:31:06.781 [2024-04-24 05:26:43.942369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.942518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.942559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.781 qpair failed and we were unable to recover it.
00:31:06.781 [2024-04-24 05:26:43.942734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.942891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.942915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.781 qpair failed and we were unable to recover it.
00:31:06.781 [2024-04-24 05:26:43.943095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.943255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.943279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.781 qpair failed and we were unable to recover it.
00:31:06.781 [2024-04-24 05:26:43.943415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.781 [2024-04-24 05:26:43.943538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.943563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.782 qpair failed and we were unable to recover it.
00:31:06.782 [2024-04-24 05:26:43.943694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.943848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.943872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.782 qpair failed and we were unable to recover it.
00:31:06.782 [2024-04-24 05:26:43.944026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.944178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.944202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.782 qpair failed and we were unable to recover it.
00:31:06.782 [2024-04-24 05:26:43.944334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.944482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.944507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.782 qpair failed and we were unable to recover it.
00:31:06.782 [2024-04-24 05:26:43.944668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.944820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.944844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.782 qpair failed and we were unable to recover it.
00:31:06.782 [2024-04-24 05:26:43.944998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.945124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.945150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.782 qpair failed and we were unable to recover it.
00:31:06.782 [2024-04-24 05:26:43.945268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.945439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.945464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.782 qpair failed and we were unable to recover it.
00:31:06.782 [2024-04-24 05:26:43.945585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.945712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.945739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.782 qpair failed and we were unable to recover it.
00:31:06.782 [2024-04-24 05:26:43.945862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.946042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.946066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.782 qpair failed and we were unable to recover it.
00:31:06.782 [2024-04-24 05:26:43.946218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.946362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.946386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.782 qpair failed and we were unable to recover it.
00:31:06.782 [2024-04-24 05:26:43.946533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.946687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.946712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.782 qpair failed and we were unable to recover it.
00:31:06.782 [2024-04-24 05:26:43.946863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.947011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.947035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.782 qpair failed and we were unable to recover it.
00:31:06.782 [2024-04-24 05:26:43.947187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.947361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.947385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.782 qpair failed and we were unable to recover it.
00:31:06.782 [2024-04-24 05:26:43.947535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.947685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.947710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.782 qpair failed and we were unable to recover it.
00:31:06.782 [2024-04-24 05:26:43.947841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.947978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.948003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.782 qpair failed and we were unable to recover it.
00:31:06.782 [2024-04-24 05:26:43.948151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.948266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.948291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.782 qpair failed and we were unable to recover it.
00:31:06.782 [2024-04-24 05:26:43.948411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.948523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.948548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.782 qpair failed and we were unable to recover it.
00:31:06.782 [2024-04-24 05:26:43.948696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.948850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.948875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.782 qpair failed and we were unable to recover it.
00:31:06.782 [2024-04-24 05:26:43.948996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.949122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.949146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.782 qpair failed and we were unable to recover it.
00:31:06.782 [2024-04-24 05:26:43.949293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.949434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.949460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.782 qpair failed and we were unable to recover it.
00:31:06.782 [2024-04-24 05:26:43.949614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.949788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.949814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.782 qpair failed and we were unable to recover it.
00:31:06.782 [2024-04-24 05:26:43.949956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.950135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.950159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.782 qpair failed and we were unable to recover it.
00:31:06.782 [2024-04-24 05:26:43.950282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.950441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.950465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.782 qpair failed and we were unable to recover it.
00:31:06.782 [2024-04-24 05:26:43.950589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.950746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.950772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.782 qpair failed and we were unable to recover it.
00:31:06.782 [2024-04-24 05:26:43.950898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.951021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.951047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.782 qpair failed and we were unable to recover it.
00:31:06.782 [2024-04-24 05:26:43.951176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.951325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.951349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.782 qpair failed and we were unable to recover it.
00:31:06.782 [2024-04-24 05:26:43.951518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.951715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.951741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.782 qpair failed and we were unable to recover it.
00:31:06.782 [2024-04-24 05:26:43.951893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.952041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.952065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.782 qpair failed and we were unable to recover it.
00:31:06.782 [2024-04-24 05:26:43.952241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.952413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.952438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.782 qpair failed and we were unable to recover it.
00:31:06.782 [2024-04-24 05:26:43.952613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.952768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.782 [2024-04-24 05:26:43.952792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.782 qpair failed and we were unable to recover it.
00:31:06.783 [2024-04-24 05:26:43.952910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.783 [2024-04-24 05:26:43.953091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.783 [2024-04-24 05:26:43.953115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.783 qpair failed and we were unable to recover it.
00:31:06.783 [2024-04-24 05:26:43.953232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.783 [2024-04-24 05:26:43.953379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.783 [2024-04-24 05:26:43.953403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.783 qpair failed and we were unable to recover it.
00:31:06.783 [2024-04-24 05:26:43.953552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.783 [2024-04-24 05:26:43.953677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.783 [2024-04-24 05:26:43.953703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.783 qpair failed and we were unable to recover it.
00:31:06.783 [2024-04-24 05:26:43.953854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.783 [2024-04-24 05:26:43.953990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.783 [2024-04-24 05:26:43.954017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.783 qpair failed and we were unable to recover it.
00:31:06.783 [2024-04-24 05:26:43.954173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.783 [2024-04-24 05:26:43.954320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.783 [2024-04-24 05:26:43.954345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.783 qpair failed and we were unable to recover it.
00:31:06.783 [2024-04-24 05:26:43.954494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.783 [2024-04-24 05:26:43.954647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.783 [2024-04-24 05:26:43.954673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.783 qpair failed and we were unable to recover it.
00:31:06.783 [2024-04-24 05:26:43.954823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.783 [2024-04-24 05:26:43.955004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.783 [2024-04-24 05:26:43.955029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.783 qpair failed and we were unable to recover it.
00:31:06.783 [2024-04-24 05:26:43.955173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.783 [2024-04-24 05:26:43.955348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.783 [2024-04-24 05:26:43.955372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.783 qpair failed and we were unable to recover it.
00:31:06.783 [2024-04-24 05:26:43.955519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.783 [2024-04-24 05:26:43.955687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.783 [2024-04-24 05:26:43.955713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.783 qpair failed and we were unable to recover it.
00:31:06.783 [2024-04-24 05:26:43.955837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.783 [2024-04-24 05:26:43.955953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.783 [2024-04-24 05:26:43.955978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.783 qpair failed and we were unable to recover it.
00:31:06.783 [2024-04-24 05:26:43.956100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.783 [2024-04-24 05:26:43.956217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.783 [2024-04-24 05:26:43.956242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.783 qpair failed and we were unable to recover it.
00:31:06.783 [2024-04-24 05:26:43.956418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.783 [2024-04-24 05:26:43.956569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.783 [2024-04-24 05:26:43.956594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.783 qpair failed and we were unable to recover it.
00:31:06.783 [2024-04-24 05:26:43.956736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.783 [2024-04-24 05:26:43.956886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.783 [2024-04-24 05:26:43.956910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.783 qpair failed and we were unable to recover it. 00:31:06.783 [2024-04-24 05:26:43.957062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.783 [2024-04-24 05:26:43.957187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.783 [2024-04-24 05:26:43.957212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.783 qpair failed and we were unable to recover it. 00:31:06.783 [2024-04-24 05:26:43.957363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.783 [2024-04-24 05:26:43.957486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.783 [2024-04-24 05:26:43.957516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.783 qpair failed and we were unable to recover it. 00:31:06.783 [2024-04-24 05:26:43.957692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.783 [2024-04-24 05:26:43.957848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.783 [2024-04-24 05:26:43.957874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.783 qpair failed and we were unable to recover it. 
00:31:06.783 [2024-04-24 05:26:43.958030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.783 [2024-04-24 05:26:43.958149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.783 [2024-04-24 05:26:43.958175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.783 qpair failed and we were unable to recover it. 00:31:06.783 [2024-04-24 05:26:43.958298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.783 [2024-04-24 05:26:43.958448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.783 [2024-04-24 05:26:43.958472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.783 qpair failed and we were unable to recover it. 00:31:06.783 [2024-04-24 05:26:43.958621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.783 [2024-04-24 05:26:43.958761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.783 [2024-04-24 05:26:43.958786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.783 qpair failed and we were unable to recover it. 00:31:06.783 [2024-04-24 05:26:43.958914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.783 [2024-04-24 05:26:43.959038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.783 [2024-04-24 05:26:43.959062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.783 qpair failed and we were unable to recover it. 
00:31:06.783 [2024-04-24 05:26:43.959213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.783 [2024-04-24 05:26:43.959388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.783 [2024-04-24 05:26:43.959412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.783 qpair failed and we were unable to recover it. 00:31:06.783 [2024-04-24 05:26:43.959537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.783 [2024-04-24 05:26:43.959661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.783 [2024-04-24 05:26:43.959687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.783 qpair failed and we were unable to recover it. 00:31:06.783 [2024-04-24 05:26:43.959862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.783 [2024-04-24 05:26:43.959996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.783 [2024-04-24 05:26:43.960020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.783 qpair failed and we were unable to recover it. 00:31:06.783 [2024-04-24 05:26:43.960168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.783 [2024-04-24 05:26:43.960316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.783 [2024-04-24 05:26:43.960342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.783 qpair failed and we were unable to recover it. 
00:31:06.783 [2024-04-24 05:26:43.960462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.783 [2024-04-24 05:26:43.960582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.783 [2024-04-24 05:26:43.960606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.783 qpair failed and we were unable to recover it. 00:31:06.783 [2024-04-24 05:26:43.960767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.783 [2024-04-24 05:26:43.960914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.783 [2024-04-24 05:26:43.960939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.783 qpair failed and we were unable to recover it. 00:31:06.783 [2024-04-24 05:26:43.961116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.783 [2024-04-24 05:26:43.961291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.783 [2024-04-24 05:26:43.961315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.783 qpair failed and we were unable to recover it. 00:31:06.783 [2024-04-24 05:26:43.961459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.783 [2024-04-24 05:26:43.961604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.783 [2024-04-24 05:26:43.961636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.783 qpair failed and we were unable to recover it. 
00:31:06.783 [2024-04-24 05:26:43.961825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.783 [2024-04-24 05:26:43.961977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.783 [2024-04-24 05:26:43.962001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.783 qpair failed and we were unable to recover it. 00:31:06.784 [2024-04-24 05:26:43.962148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.784 [2024-04-24 05:26:43.962297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.784 [2024-04-24 05:26:43.962321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.784 qpair failed and we were unable to recover it. 00:31:06.784 [2024-04-24 05:26:43.962472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.784 [2024-04-24 05:26:43.962692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.784 [2024-04-24 05:26:43.962718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.784 qpair failed and we were unable to recover it. 00:31:06.784 [2024-04-24 05:26:43.962868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.784 [2024-04-24 05:26:43.962986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.784 [2024-04-24 05:26:43.963010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.784 qpair failed and we were unable to recover it. 
00:31:06.784 [2024-04-24 05:26:43.963163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.784 [2024-04-24 05:26:43.963339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.784 [2024-04-24 05:26:43.963364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.784 qpair failed and we were unable to recover it. 00:31:06.784 [2024-04-24 05:26:43.963550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.784 [2024-04-24 05:26:43.963706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.784 [2024-04-24 05:26:43.963731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.784 qpair failed and we were unable to recover it. 00:31:06.784 [2024-04-24 05:26:43.963886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.784 [2024-04-24 05:26:43.964007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.784 [2024-04-24 05:26:43.964032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.784 qpair failed and we were unable to recover it. 00:31:06.784 [2024-04-24 05:26:43.964218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.784 [2024-04-24 05:26:43.964342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.784 [2024-04-24 05:26:43.964366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.784 qpair failed and we were unable to recover it. 
00:31:06.784 [2024-04-24 05:26:43.964488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.784 [2024-04-24 05:26:43.964609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.784 [2024-04-24 05:26:43.964639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.784 qpair failed and we were unable to recover it. 00:31:06.784 [2024-04-24 05:26:43.964803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.784 [2024-04-24 05:26:43.964978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.784 [2024-04-24 05:26:43.965003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.784 qpair failed and we were unable to recover it. 00:31:06.784 [2024-04-24 05:26:43.965155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.784 [2024-04-24 05:26:43.965306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.784 [2024-04-24 05:26:43.965330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.784 qpair failed and we were unable to recover it. 00:31:06.784 [2024-04-24 05:26:43.965446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.784 [2024-04-24 05:26:43.965566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.784 [2024-04-24 05:26:43.965590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.784 qpair failed and we were unable to recover it. 
00:31:06.784 [2024-04-24 05:26:43.965741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.784 [2024-04-24 05:26:43.965868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.784 [2024-04-24 05:26:43.965894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.784 qpair failed and we were unable to recover it. 00:31:06.784 [2024-04-24 05:26:43.966080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.784 [2024-04-24 05:26:43.966211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.784 [2024-04-24 05:26:43.966236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.784 qpair failed and we were unable to recover it. 00:31:06.784 [2024-04-24 05:26:43.966387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.784 [2024-04-24 05:26:43.966579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.784 [2024-04-24 05:26:43.966606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.784 qpair failed and we were unable to recover it. 00:31:06.784 [2024-04-24 05:26:43.966830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.784 [2024-04-24 05:26:43.966983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.784 [2024-04-24 05:26:43.967008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.784 qpair failed and we were unable to recover it. 
00:31:06.784 [2024-04-24 05:26:43.967147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.784 [2024-04-24 05:26:43.967277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.784 [2024-04-24 05:26:43.967302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.784 qpair failed and we were unable to recover it. 00:31:06.784 [2024-04-24 05:26:43.967466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.784 [2024-04-24 05:26:43.967616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.784 [2024-04-24 05:26:43.967649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.784 qpair failed and we were unable to recover it. 00:31:06.784 [2024-04-24 05:26:43.967797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.784 [2024-04-24 05:26:43.967946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.784 [2024-04-24 05:26:43.967971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.784 qpair failed and we were unable to recover it. 00:31:06.784 [2024-04-24 05:26:43.968119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.784 [2024-04-24 05:26:43.968244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.784 [2024-04-24 05:26:43.968268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.785 qpair failed and we were unable to recover it. 
00:31:06.785 [2024-04-24 05:26:43.968395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.785 [2024-04-24 05:26:43.968532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.785 [2024-04-24 05:26:43.968555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.785 qpair failed and we were unable to recover it. 00:31:06.785 [2024-04-24 05:26:43.968742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.785 [2024-04-24 05:26:43.968889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.785 [2024-04-24 05:26:43.968931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.785 qpair failed and we were unable to recover it. 00:31:06.785 [2024-04-24 05:26:43.969107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.785 [2024-04-24 05:26:43.969311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.785 [2024-04-24 05:26:43.969339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.785 qpair failed and we were unable to recover it. 00:31:06.785 [2024-04-24 05:26:43.969470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.785 [2024-04-24 05:26:43.969646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.785 [2024-04-24 05:26:43.969690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.785 qpair failed and we were unable to recover it. 
00:31:06.785 [2024-04-24 05:26:43.969850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.785 [2024-04-24 05:26:43.970054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.785 [2024-04-24 05:26:43.970081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.785 qpair failed and we were unable to recover it. 00:31:06.785 [2024-04-24 05:26:43.970273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.785 [2024-04-24 05:26:43.970421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.785 [2024-04-24 05:26:43.970446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.785 qpair failed and we were unable to recover it. 00:31:06.785 [2024-04-24 05:26:43.970566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.785 [2024-04-24 05:26:43.970689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.785 [2024-04-24 05:26:43.970714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.785 qpair failed and we were unable to recover it. 00:31:06.785 [2024-04-24 05:26:43.970866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.785 [2024-04-24 05:26:43.971018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.785 [2024-04-24 05:26:43.971042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.785 qpair failed and we were unable to recover it. 
00:31:06.785 [2024-04-24 05:26:43.971193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.785 [2024-04-24 05:26:43.971340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.785 [2024-04-24 05:26:43.971365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.785 qpair failed and we were unable to recover it. 00:31:06.785 [2024-04-24 05:26:43.971543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.785 [2024-04-24 05:26:43.971698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.785 [2024-04-24 05:26:43.971723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.785 qpair failed and we were unable to recover it. 00:31:06.785 [2024-04-24 05:26:43.971852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.785 [2024-04-24 05:26:43.972031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.785 [2024-04-24 05:26:43.972055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.785 qpair failed and we were unable to recover it. 00:31:06.785 [2024-04-24 05:26:43.972208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.785 [2024-04-24 05:26:43.972327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.785 [2024-04-24 05:26:43.972351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.785 qpair failed and we were unable to recover it. 
00:31:06.785 [2024-04-24 05:26:43.972569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.785 [2024-04-24 05:26:43.972716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.785 [2024-04-24 05:26:43.972741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.785 qpair failed and we were unable to recover it. 00:31:06.785 [2024-04-24 05:26:43.972868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.785 [2024-04-24 05:26:43.973036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.785 [2024-04-24 05:26:43.973061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.785 qpair failed and we were unable to recover it. 00:31:06.785 [2024-04-24 05:26:43.973208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.785 [2024-04-24 05:26:43.973359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.785 [2024-04-24 05:26:43.973384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.785 qpair failed and we were unable to recover it. 00:31:06.785 [2024-04-24 05:26:43.973527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.785 [2024-04-24 05:26:43.973696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.785 [2024-04-24 05:26:43.973722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.785 qpair failed and we were unable to recover it. 
00:31:06.785 [2024-04-24 05:26:43.973874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.785 [2024-04-24 05:26:43.974055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.785 [2024-04-24 05:26:43.974081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.785 qpair failed and we were unable to recover it. 00:31:06.785 [2024-04-24 05:26:43.974238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.785 [2024-04-24 05:26:43.974389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.785 [2024-04-24 05:26:43.974417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.785 qpair failed and we were unable to recover it. 00:31:06.785 [2024-04-24 05:26:43.974570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.785 [2024-04-24 05:26:43.974743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.785 [2024-04-24 05:26:43.974768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.785 qpair failed and we were unable to recover it. 00:31:06.785 [2024-04-24 05:26:43.974923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.785 [2024-04-24 05:26:43.975072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.785 [2024-04-24 05:26:43.975097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.785 qpair failed and we were unable to recover it. 
00:31:06.785 [2024-04-24 05:26:43.975240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.785 [2024-04-24 05:26:43.975390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.785 [2024-04-24 05:26:43.975414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.785 qpair failed and we were unable to recover it. 00:31:06.785 [2024-04-24 05:26:43.975556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.785 [2024-04-24 05:26:43.975709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.785 [2024-04-24 05:26:43.975734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.785 qpair failed and we were unable to recover it. 00:31:06.785 [2024-04-24 05:26:43.975887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.785 [2024-04-24 05:26:43.976032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.785 [2024-04-24 05:26:43.976057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.785 qpair failed and we were unable to recover it. 00:31:06.785 [2024-04-24 05:26:43.976202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.785 [2024-04-24 05:26:43.976327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.786 [2024-04-24 05:26:43.976351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.786 qpair failed and we were unable to recover it. 
00:31:06.786 [2024-04-24 05:26:43.976477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.786 [2024-04-24 05:26:43.976601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.786 [2024-04-24 05:26:43.976625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.786 qpair failed and we were unable to recover it. 00:31:06.786 [2024-04-24 05:26:43.976778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.786 [2024-04-24 05:26:43.976902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.786 [2024-04-24 05:26:43.976928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.786 qpair failed and we were unable to recover it. 00:31:06.786 [2024-04-24 05:26:43.977097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.786 [2024-04-24 05:26:43.977217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.786 [2024-04-24 05:26:43.977241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.786 qpair failed and we were unable to recover it. 00:31:06.786 [2024-04-24 05:26:43.977417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.786 [2024-04-24 05:26:43.977538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.786 [2024-04-24 05:26:43.977568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.786 qpair failed and we were unable to recover it. 
00:31:06.786 [2024-04-24 05:26:43.977724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.786 [2024-04-24 05:26:43.977885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.786 [2024-04-24 05:26:43.977909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.786 qpair failed and we were unable to recover it.
00:31:06.786 [2024-04-24 05:26:43.978028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.786 [2024-04-24 05:26:43.978178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.786 [2024-04-24 05:26:43.978203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.786 qpair failed and we were unable to recover it.
00:31:06.786 [2024-04-24 05:26:43.978328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.786 [2024-04-24 05:26:43.978485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.786 [2024-04-24 05:26:43.978510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.786 qpair failed and we were unable to recover it.
00:31:06.786 [2024-04-24 05:26:43.978691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.786 [2024-04-24 05:26:43.978817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.786 [2024-04-24 05:26:43.978841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.786 qpair failed and we were unable to recover it.
00:31:06.786 [2024-04-24 05:26:43.978987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.786 [2024-04-24 05:26:43.979137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.786 [2024-04-24 05:26:43.979161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.786 qpair failed and we were unable to recover it.
00:31:06.786 [2024-04-24 05:26:43.979305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.786 [2024-04-24 05:26:43.979432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.786 [2024-04-24 05:26:43.979456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.786 qpair failed and we were unable to recover it.
00:31:06.786 [2024-04-24 05:26:43.979598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.786 [2024-04-24 05:26:43.979727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.786 [2024-04-24 05:26:43.979751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.786 qpair failed and we were unable to recover it.
00:31:06.786 [2024-04-24 05:26:43.979911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.786 [2024-04-24 05:26:43.980060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.786 [2024-04-24 05:26:43.980084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.786 qpair failed and we were unable to recover it.
00:31:06.786 [2024-04-24 05:26:43.980236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.786 [2024-04-24 05:26:43.980390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.786 [2024-04-24 05:26:43.980414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.786 qpair failed and we were unable to recover it.
00:31:06.786 [2024-04-24 05:26:43.980561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.786 [2024-04-24 05:26:43.980682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.786 [2024-04-24 05:26:43.980707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.786 qpair failed and we were unable to recover it.
00:31:06.786 [2024-04-24 05:26:43.980855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.786 [2024-04-24 05:26:43.981009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.786 [2024-04-24 05:26:43.981034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.786 qpair failed and we were unable to recover it.
00:31:06.786 [2024-04-24 05:26:43.981193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.786 [2024-04-24 05:26:43.981327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.786 [2024-04-24 05:26:43.981352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.786 qpair failed and we were unable to recover it.
00:31:06.786 [2024-04-24 05:26:43.981499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.786 [2024-04-24 05:26:43.981618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.786 [2024-04-24 05:26:43.981657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.786 qpair failed and we were unable to recover it.
00:31:06.786 [2024-04-24 05:26:43.981787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.786 [2024-04-24 05:26:43.981936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.786 [2024-04-24 05:26:43.981960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.786 qpair failed and we were unable to recover it.
00:31:06.786 [2024-04-24 05:26:43.982111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.786 [2024-04-24 05:26:43.982247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.786 [2024-04-24 05:26:43.982271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.786 qpair failed and we were unable to recover it.
00:31:06.786 [2024-04-24 05:26:43.982388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.786 [2024-04-24 05:26:43.982503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.786 [2024-04-24 05:26:43.982527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.786 qpair failed and we were unable to recover it.
00:31:06.787 [2024-04-24 05:26:43.982707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.982820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.982845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.787 qpair failed and we were unable to recover it.
00:31:06.787 [2024-04-24 05:26:43.982996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.983141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.983166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.787 qpair failed and we were unable to recover it.
00:31:06.787 [2024-04-24 05:26:43.983316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.983466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.983490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.787 qpair failed and we were unable to recover it.
00:31:06.787 [2024-04-24 05:26:43.983644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.983768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.983792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.787 qpair failed and we were unable to recover it.
00:31:06.787 [2024-04-24 05:26:43.983949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.984073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.984097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.787 qpair failed and we were unable to recover it.
00:31:06.787 [2024-04-24 05:26:43.984252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.984403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.984427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.787 qpair failed and we were unable to recover it.
00:31:06.787 [2024-04-24 05:26:43.984607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.984735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.984760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.787 qpair failed and we were unable to recover it.
00:31:06.787 [2024-04-24 05:26:43.984905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.985057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.985081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.787 qpair failed and we were unable to recover it.
00:31:06.787 [2024-04-24 05:26:43.985202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.985321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.985345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.787 qpair failed and we were unable to recover it.
00:31:06.787 [2024-04-24 05:26:43.985474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.985620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.985655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.787 qpair failed and we were unable to recover it.
00:31:06.787 [2024-04-24 05:26:43.985810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.985936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.985962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.787 qpair failed and we were unable to recover it.
00:31:06.787 [2024-04-24 05:26:43.986110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.986236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.986260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.787 qpair failed and we were unable to recover it.
00:31:06.787 [2024-04-24 05:26:43.986417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.986566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.986591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.787 qpair failed and we were unable to recover it.
00:31:06.787 [2024-04-24 05:26:43.986761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.986918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.986942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.787 qpair failed and we were unable to recover it.
00:31:06.787 [2024-04-24 05:26:43.987093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.987218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.987242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.787 qpair failed and we were unable to recover it.
00:31:06.787 [2024-04-24 05:26:43.987386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.987511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.987536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.787 qpair failed and we were unable to recover it.
00:31:06.787 [2024-04-24 05:26:43.987667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.987817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.987842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.787 qpair failed and we were unable to recover it.
00:31:06.787 [2024-04-24 05:26:43.987971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.988151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.988176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.787 qpair failed and we were unable to recover it.
00:31:06.787 [2024-04-24 05:26:43.988323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.988445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.988469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.787 qpair failed and we were unable to recover it.
00:31:06.787 [2024-04-24 05:26:43.988586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.988742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.988767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.787 qpair failed and we were unable to recover it.
00:31:06.787 [2024-04-24 05:26:43.988916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.989067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.989091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.787 qpair failed and we were unable to recover it.
00:31:06.787 [2024-04-24 05:26:43.989239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.989386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.989426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.787 qpair failed and we were unable to recover it.
00:31:06.787 [2024-04-24 05:26:43.989587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.989776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.989802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.787 qpair failed and we were unable to recover it.
00:31:06.787 [2024-04-24 05:26:43.989933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.990061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.990085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.787 qpair failed and we were unable to recover it.
00:31:06.787 [2024-04-24 05:26:43.990237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.990420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.990445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.787 qpair failed and we were unable to recover it.
00:31:06.787 [2024-04-24 05:26:43.990599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.990739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.990764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.787 qpair failed and we were unable to recover it.
00:31:06.787 [2024-04-24 05:26:43.990947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.991124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.991148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.787 qpair failed and we were unable to recover it.
00:31:06.787 [2024-04-24 05:26:43.991304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.991480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.991505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.787 qpair failed and we were unable to recover it.
00:31:06.787 [2024-04-24 05:26:43.991637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.787 [2024-04-24 05:26:43.991757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.991782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.788 qpair failed and we were unable to recover it.
00:31:06.788 [2024-04-24 05:26:43.991911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.992037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.992061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.788 qpair failed and we were unable to recover it.
00:31:06.788 [2024-04-24 05:26:43.992217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.992343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.992367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.788 qpair failed and we were unable to recover it.
00:31:06.788 [2024-04-24 05:26:43.992484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.992661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.992687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.788 qpair failed and we were unable to recover it.
00:31:06.788 [2024-04-24 05:26:43.992810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.992925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.992951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.788 qpair failed and we were unable to recover it.
00:31:06.788 [2024-04-24 05:26:43.993080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.993201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.993225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.788 qpair failed and we were unable to recover it.
00:31:06.788 [2024-04-24 05:26:43.993341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.993463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.993491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.788 qpair failed and we were unable to recover it.
00:31:06.788 [2024-04-24 05:26:43.993641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.993793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.993818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.788 qpair failed and we were unable to recover it.
00:31:06.788 [2024-04-24 05:26:43.993967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.994123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.994147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.788 qpair failed and we were unable to recover it.
00:31:06.788 [2024-04-24 05:26:43.994293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.994414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.994439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.788 qpair failed and we were unable to recover it.
00:31:06.788 [2024-04-24 05:26:43.994613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.994823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.994849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.788 qpair failed and we were unable to recover it.
00:31:06.788 [2024-04-24 05:26:43.994967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.995132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.995157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.788 qpair failed and we were unable to recover it.
00:31:06.788 [2024-04-24 05:26:43.995311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.995460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.995485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.788 qpair failed and we were unable to recover it.
00:31:06.788 [2024-04-24 05:26:43.995611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.995737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.995762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.788 qpair failed and we were unable to recover it.
00:31:06.788 [2024-04-24 05:26:43.995891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.996016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.996041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.788 qpair failed and we were unable to recover it.
00:31:06.788 [2024-04-24 05:26:43.996221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.996375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.996400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.788 qpair failed and we were unable to recover it.
00:31:06.788 [2024-04-24 05:26:43.996575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.996735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.996760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.788 qpair failed and we were unable to recover it.
00:31:06.788 [2024-04-24 05:26:43.996889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.997013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.997037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.788 qpair failed and we were unable to recover it.
00:31:06.788 [2024-04-24 05:26:43.997157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.997315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.997339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.788 qpair failed and we were unable to recover it.
00:31:06.788 [2024-04-24 05:26:43.997496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.997619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.997656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.788 qpair failed and we were unable to recover it.
00:31:06.788 [2024-04-24 05:26:43.997809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.997947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.997972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.788 qpair failed and we were unable to recover it.
00:31:06.788 [2024-04-24 05:26:43.998089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.998235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.998260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.788 qpair failed and we were unable to recover it.
00:31:06.788 [2024-04-24 05:26:43.998388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.998498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.998522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.788 qpair failed and we were unable to recover it.
00:31:06.788 [2024-04-24 05:26:43.998696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.998841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.998865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.788 qpair failed and we were unable to recover it.
00:31:06.788 [2024-04-24 05:26:43.999013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.999166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.999191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.788 qpair failed and we were unable to recover it.
00:31:06.788 [2024-04-24 05:26:43.999370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.999485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.999510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.788 qpair failed and we were unable to recover it.
00:31:06.788 [2024-04-24 05:26:43.999660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.999790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:43.999815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.788 qpair failed and we were unable to recover it.
00:31:06.788 [2024-04-24 05:26:43.999969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:44.000095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.788 [2024-04-24 05:26:44.000120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:06.788 qpair failed and we were unable to recover it.
00:31:06.788 [2024-04-24 05:26:44.000273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.788 [2024-04-24 05:26:44.000424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.788 [2024-04-24 05:26:44.000448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.788 qpair failed and we were unable to recover it. 00:31:06.788 [2024-04-24 05:26:44.000569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.788 [2024-04-24 05:26:44.000744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.789 [2024-04-24 05:26:44.000770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.789 qpair failed and we were unable to recover it. 00:31:06.789 [2024-04-24 05:26:44.000901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.789 [2024-04-24 05:26:44.001059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.789 [2024-04-24 05:26:44.001095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.789 qpair failed and we were unable to recover it. 00:31:06.789 [2024-04-24 05:26:44.001245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.789 [2024-04-24 05:26:44.001371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.789 [2024-04-24 05:26:44.001396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.789 qpair failed and we were unable to recover it. 
00:31:06.789 [2024-04-24 05:26:44.001550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.789 [2024-04-24 05:26:44.001701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.789 [2024-04-24 05:26:44.001728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.789 qpair failed and we were unable to recover it. 00:31:06.789 [2024-04-24 05:26:44.001874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.789 [2024-04-24 05:26:44.001990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.789 [2024-04-24 05:26:44.002015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.789 qpair failed and we were unable to recover it. 00:31:06.789 [2024-04-24 05:26:44.002191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.789 [2024-04-24 05:26:44.002337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.789 [2024-04-24 05:26:44.002365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.789 qpair failed and we were unable to recover it. 00:31:06.789 [2024-04-24 05:26:44.002521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.789 [2024-04-24 05:26:44.002702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.789 [2024-04-24 05:26:44.002737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.789 qpair failed and we were unable to recover it. 
00:31:06.789 [2024-04-24 05:26:44.002860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.789 [2024-04-24 05:26:44.003035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.789 [2024-04-24 05:26:44.003060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.789 qpair failed and we were unable to recover it. 00:31:06.789 [2024-04-24 05:26:44.003220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.789 [2024-04-24 05:26:44.003367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.789 [2024-04-24 05:26:44.003392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.789 qpair failed and we were unable to recover it. 00:31:06.789 [2024-04-24 05:26:44.003596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.789 [2024-04-24 05:26:44.003773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.789 [2024-04-24 05:26:44.003805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.789 qpair failed and we were unable to recover it. 00:31:06.789 [2024-04-24 05:26:44.003936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.789 [2024-04-24 05:26:44.004078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.789 [2024-04-24 05:26:44.004114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:06.789 qpair failed and we were unable to recover it. 
00:31:07.070 [2024-04-24 05:26:44.004259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.070 [2024-04-24 05:26:44.004412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.070 [2024-04-24 05:26:44.004436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.070 qpair failed and we were unable to recover it. 00:31:07.070 [2024-04-24 05:26:44.004650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.070 [2024-04-24 05:26:44.004816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.070 [2024-04-24 05:26:44.004841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.070 qpair failed and we were unable to recover it. 00:31:07.070 [2024-04-24 05:26:44.004963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.070 [2024-04-24 05:26:44.005112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.070 [2024-04-24 05:26:44.005137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.070 qpair failed and we were unable to recover it. 00:31:07.070 [2024-04-24 05:26:44.005268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.070 [2024-04-24 05:26:44.005421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.070 [2024-04-24 05:26:44.005447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.070 qpair failed and we were unable to recover it. 
00:31:07.070 [2024-04-24 05:26:44.005595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.070 [2024-04-24 05:26:44.005731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.070 [2024-04-24 05:26:44.005758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.070 qpair failed and we were unable to recover it. 00:31:07.070 [2024-04-24 05:26:44.005932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.070 [2024-04-24 05:26:44.006049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.070 [2024-04-24 05:26:44.006073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.070 qpair failed and we were unable to recover it. 00:31:07.070 [2024-04-24 05:26:44.006196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.070 [2024-04-24 05:26:44.006315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.070 [2024-04-24 05:26:44.006339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.070 qpair failed and we were unable to recover it. 00:31:07.070 [2024-04-24 05:26:44.006487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.070 [2024-04-24 05:26:44.006611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.070 [2024-04-24 05:26:44.006643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.070 qpair failed and we were unable to recover it. 
00:31:07.070 [2024-04-24 05:26:44.006803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.070 [2024-04-24 05:26:44.006957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.006982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.071 qpair failed and we were unable to recover it. 00:31:07.071 [2024-04-24 05:26:44.007162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.007279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.007303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.071 qpair failed and we were unable to recover it. 00:31:07.071 [2024-04-24 05:26:44.007455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.007652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.007678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.071 qpair failed and we were unable to recover it. 00:31:07.071 [2024-04-24 05:26:44.007801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.007948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.007972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.071 qpair failed and we were unable to recover it. 
00:31:07.071 [2024-04-24 05:26:44.008116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.008290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.008315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.071 qpair failed and we were unable to recover it. 00:31:07.071 [2024-04-24 05:26:44.008438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.008592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.008617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.071 qpair failed and we were unable to recover it. 00:31:07.071 [2024-04-24 05:26:44.008789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.008939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.008966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.071 qpair failed and we were unable to recover it. 00:31:07.071 [2024-04-24 05:26:44.009116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.009244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.009269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.071 qpair failed and we were unable to recover it. 
00:31:07.071 [2024-04-24 05:26:44.009445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.009642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.009684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.071 qpair failed and we were unable to recover it. 00:31:07.071 [2024-04-24 05:26:44.009812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.009962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.009992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.071 qpair failed and we were unable to recover it. 00:31:07.071 [2024-04-24 05:26:44.010111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.010260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.010284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.071 qpair failed and we were unable to recover it. 00:31:07.071 [2024-04-24 05:26:44.010411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.010535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.010560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.071 qpair failed and we were unable to recover it. 
00:31:07.071 [2024-04-24 05:26:44.010683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.010810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.010834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.071 qpair failed and we were unable to recover it. 00:31:07.071 [2024-04-24 05:26:44.010982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.011157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.011181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.071 qpair failed and we were unable to recover it. 00:31:07.071 [2024-04-24 05:26:44.011331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.011483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.011508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.071 qpair failed and we were unable to recover it. 00:31:07.071 [2024-04-24 05:26:44.011656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.011809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.011833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.071 qpair failed and we were unable to recover it. 
00:31:07.071 [2024-04-24 05:26:44.011955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.012075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.012100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.071 qpair failed and we were unable to recover it. 00:31:07.071 [2024-04-24 05:26:44.012228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.012403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.012428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.071 qpair failed and we were unable to recover it. 00:31:07.071 [2024-04-24 05:26:44.012576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.012699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.012726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.071 qpair failed and we were unable to recover it. 00:31:07.071 [2024-04-24 05:26:44.012869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.013022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.013046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.071 qpair failed and we were unable to recover it. 
00:31:07.071 [2024-04-24 05:26:44.013176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.013326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.013350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.071 qpair failed and we were unable to recover it. 00:31:07.071 [2024-04-24 05:26:44.013502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.013649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.013675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.071 qpair failed and we were unable to recover it. 00:31:07.071 [2024-04-24 05:26:44.013819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.013942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.013966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.071 qpair failed and we were unable to recover it. 00:31:07.071 [2024-04-24 05:26:44.014110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.014265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.014290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.071 qpair failed and we were unable to recover it. 
00:31:07.071 [2024-04-24 05:26:44.014445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.014593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.014618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.071 qpair failed and we were unable to recover it. 00:31:07.071 [2024-04-24 05:26:44.014815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.014944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.014968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.071 qpair failed and we were unable to recover it. 00:31:07.071 [2024-04-24 05:26:44.015118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.015262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.015286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.071 qpair failed and we were unable to recover it. 00:31:07.071 [2024-04-24 05:26:44.015452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.015576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.015600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.071 qpair failed and we were unable to recover it. 
00:31:07.071 [2024-04-24 05:26:44.015778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.015928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.015953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.071 qpair failed and we were unable to recover it. 00:31:07.071 [2024-04-24 05:26:44.016131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.071 [2024-04-24 05:26:44.016280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.072 [2024-04-24 05:26:44.016305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.072 qpair failed and we were unable to recover it. 00:31:07.072 [2024-04-24 05:26:44.016473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.072 [2024-04-24 05:26:44.016601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.072 [2024-04-24 05:26:44.016625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.072 qpair failed and we were unable to recover it. 00:31:07.072 [2024-04-24 05:26:44.016789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.072 [2024-04-24 05:26:44.016946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.072 [2024-04-24 05:26:44.016971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.072 qpair failed and we were unable to recover it. 
00:31:07.072 [2024-04-24 05:26:44.017097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.072 [2024-04-24 05:26:44.017212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.072 [2024-04-24 05:26:44.017236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.072 qpair failed and we were unable to recover it. 00:31:07.072 [2024-04-24 05:26:44.017365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.072 [2024-04-24 05:26:44.017479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.072 [2024-04-24 05:26:44.017503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.072 qpair failed and we were unable to recover it. 00:31:07.072 [2024-04-24 05:26:44.017653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.072 [2024-04-24 05:26:44.017800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.072 [2024-04-24 05:26:44.017825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.072 qpair failed and we were unable to recover it. 00:31:07.072 [2024-04-24 05:26:44.017976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.072 [2024-04-24 05:26:44.018098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.072 [2024-04-24 05:26:44.018122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.072 qpair failed and we were unable to recover it. 
00:31:07.072 [2024-04-24 05:26:44.018249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.072 [2024-04-24 05:26:44.018401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.072 [2024-04-24 05:26:44.018426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.072 qpair failed and we were unable to recover it. 00:31:07.072 [2024-04-24 05:26:44.018575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.072 [2024-04-24 05:26:44.018724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.072 [2024-04-24 05:26:44.018750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.072 qpair failed and we were unable to recover it. 00:31:07.072 [2024-04-24 05:26:44.018874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.072 [2024-04-24 05:26:44.019003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.072 [2024-04-24 05:26:44.019027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.072 qpair failed and we were unable to recover it. 00:31:07.072 [2024-04-24 05:26:44.019155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.072 [2024-04-24 05:26:44.019295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.072 [2024-04-24 05:26:44.019319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.072 qpair failed and we were unable to recover it. 
00:31:07.072 [2024-04-24 05:26:44.019473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.072 [2024-04-24 05:26:44.019596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.072 [2024-04-24 05:26:44.019621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.072 qpair failed and we were unable to recover it. 00:31:07.072 [2024-04-24 05:26:44.019764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.072 [2024-04-24 05:26:44.019893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.072 [2024-04-24 05:26:44.019918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.072 qpair failed and we were unable to recover it. 00:31:07.072 [2024-04-24 05:26:44.020070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.072 [2024-04-24 05:26:44.020219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.072 [2024-04-24 05:26:44.020244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.072 qpair failed and we were unable to recover it. 00:31:07.072 [2024-04-24 05:26:44.020377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.072 [2024-04-24 05:26:44.020492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.072 [2024-04-24 05:26:44.020517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.072 qpair failed and we were unable to recover it. 
00:31:07.072 [2024-04-24 05:26:44.020670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.072 [2024-04-24 05:26:44.020801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.072 [2024-04-24 05:26:44.020826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.072 qpair failed and we were unable to recover it.
00:31:07.072 [2024-04-24 05:26:44.020956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.072 [2024-04-24 05:26:44.021079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.072 [2024-04-24 05:26:44.021104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.072 qpair failed and we were unable to recover it.
00:31:07.072 [2024-04-24 05:26:44.021253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.072 [2024-04-24 05:26:44.021407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.072 [2024-04-24 05:26:44.021431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.072 qpair failed and we were unable to recover it.
00:31:07.072 [2024-04-24 05:26:44.021580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.072 [2024-04-24 05:26:44.021736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.072 [2024-04-24 05:26:44.021761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.072 qpair failed and we were unable to recover it.
00:31:07.072 [2024-04-24 05:26:44.021908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.072 [2024-04-24 05:26:44.022088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.072 [2024-04-24 05:26:44.022112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.072 qpair failed and we were unable to recover it.
00:31:07.072 [2024-04-24 05:26:44.022243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.072 [2024-04-24 05:26:44.022431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.072 [2024-04-24 05:26:44.022455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.072 qpair failed and we were unable to recover it.
00:31:07.072 [2024-04-24 05:26:44.022636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.072 [2024-04-24 05:26:44.022772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.072 [2024-04-24 05:26:44.022798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.072 qpair failed and we were unable to recover it.
00:31:07.072 [2024-04-24 05:26:44.022952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.072 [2024-04-24 05:26:44.023096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.072 [2024-04-24 05:26:44.023120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.072 qpair failed and we were unable to recover it.
00:31:07.072 [2024-04-24 05:26:44.023274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.072 [2024-04-24 05:26:44.023399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.072 [2024-04-24 05:26:44.023425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.072 qpair failed and we were unable to recover it.
00:31:07.072 [2024-04-24 05:26:44.023573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.072 [2024-04-24 05:26:44.023708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.072 [2024-04-24 05:26:44.023734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.072 qpair failed and we were unable to recover it.
00:31:07.072 [2024-04-24 05:26:44.023857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.072 [2024-04-24 05:26:44.023973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.072 [2024-04-24 05:26:44.023998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.072 qpair failed and we were unable to recover it.
00:31:07.072 [2024-04-24 05:26:44.024152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.072 [2024-04-24 05:26:44.024303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.072 [2024-04-24 05:26:44.024327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.072 qpair failed and we were unable to recover it.
00:31:07.072 [2024-04-24 05:26:44.024464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.072 [2024-04-24 05:26:44.024584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.072 [2024-04-24 05:26:44.024609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.072 qpair failed and we were unable to recover it.
00:31:07.072 [2024-04-24 05:26:44.024769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.072 [2024-04-24 05:26:44.024893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.072 [2024-04-24 05:26:44.024918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.072 qpair failed and we were unable to recover it.
00:31:07.072 [2024-04-24 05:26:44.025057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.072 [2024-04-24 05:26:44.025206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.025231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.073 qpair failed and we were unable to recover it.
00:31:07.073 [2024-04-24 05:26:44.025378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.025535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.025559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.073 qpair failed and we were unable to recover it.
00:31:07.073 [2024-04-24 05:26:44.025712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.025845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.025876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.073 qpair failed and we were unable to recover it.
00:31:07.073 [2024-04-24 05:26:44.026037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.026192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.026216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.073 qpair failed and we were unable to recover it.
00:31:07.073 [2024-04-24 05:26:44.026340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.026489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.026513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.073 qpair failed and we were unable to recover it.
00:31:07.073 [2024-04-24 05:26:44.026697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.026861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.026885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.073 qpair failed and we were unable to recover it.
00:31:07.073 [2024-04-24 05:26:44.027039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.027186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.027211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.073 qpair failed and we were unable to recover it.
00:31:07.073 [2024-04-24 05:26:44.027386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.027533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.027558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.073 qpair failed and we were unable to recover it.
00:31:07.073 [2024-04-24 05:26:44.027742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.027896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.027921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.073 qpair failed and we were unable to recover it.
00:31:07.073 [2024-04-24 05:26:44.028073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.028194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.028218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.073 qpair failed and we were unable to recover it.
00:31:07.073 [2024-04-24 05:26:44.028369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.028517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.028541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.073 qpair failed and we were unable to recover it.
00:31:07.073 [2024-04-24 05:26:44.028659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.028790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.028816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.073 qpair failed and we were unable to recover it.
00:31:07.073 [2024-04-24 05:26:44.028962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.029104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.029132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.073 qpair failed and we were unable to recover it.
00:31:07.073 [2024-04-24 05:26:44.029312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.029428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.029470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.073 qpair failed and we were unable to recover it.
00:31:07.073 [2024-04-24 05:26:44.029616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.029749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.029774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.073 qpair failed and we were unable to recover it.
00:31:07.073 [2024-04-24 05:26:44.029927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.030076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.030100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.073 qpair failed and we were unable to recover it.
00:31:07.073 [2024-04-24 05:26:44.030248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.030428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.030452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.073 qpair failed and we were unable to recover it.
00:31:07.073 [2024-04-24 05:26:44.030604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.030777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.030803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.073 qpair failed and we were unable to recover it.
00:31:07.073 [2024-04-24 05:26:44.030953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.031105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.031130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.073 qpair failed and we were unable to recover it.
00:31:07.073 [2024-04-24 05:26:44.031271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.031395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.031419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.073 qpair failed and we were unable to recover it.
00:31:07.073 [2024-04-24 05:26:44.031574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.031756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.031781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.073 qpair failed and we were unable to recover it.
00:31:07.073 [2024-04-24 05:26:44.031939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.032080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.032104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.073 qpair failed and we were unable to recover it.
00:31:07.073 [2024-04-24 05:26:44.032278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.032453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.032478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.073 qpair failed and we were unable to recover it.
00:31:07.073 [2024-04-24 05:26:44.032653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.032778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.032802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.073 qpair failed and we were unable to recover it.
00:31:07.073 [2024-04-24 05:26:44.032932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.033107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.033131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.073 qpair failed and we were unable to recover it.
00:31:07.073 [2024-04-24 05:26:44.033308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.033424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.033449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.073 qpair failed and we were unable to recover it.
00:31:07.073 [2024-04-24 05:26:44.033573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.033747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.033772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.073 qpair failed and we were unable to recover it.
00:31:07.073 [2024-04-24 05:26:44.033928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.034084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.034109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.073 qpair failed and we were unable to recover it.
00:31:07.073 [2024-04-24 05:26:44.034233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.034387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.073 [2024-04-24 05:26:44.034412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.073 qpair failed and we were unable to recover it.
00:31:07.074 [2024-04-24 05:26:44.034561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.034675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.034701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.074 qpair failed and we were unable to recover it.
00:31:07.074 [2024-04-24 05:26:44.034830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.034985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.035010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.074 qpair failed and we were unable to recover it.
00:31:07.074 [2024-04-24 05:26:44.035138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.035262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.035286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.074 qpair failed and we were unable to recover it.
00:31:07.074 [2024-04-24 05:26:44.035446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.035601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.035625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.074 qpair failed and we were unable to recover it.
00:31:07.074 [2024-04-24 05:26:44.035794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.035921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.035945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.074 qpair failed and we were unable to recover it.
00:31:07.074 [2024-04-24 05:26:44.036069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.036219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.036243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.074 qpair failed and we were unable to recover it.
00:31:07.074 [2024-04-24 05:26:44.036394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.036570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.036595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.074 qpair failed and we were unable to recover it.
00:31:07.074 [2024-04-24 05:26:44.036750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.036880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.036905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.074 qpair failed and we were unable to recover it.
00:31:07.074 [2024-04-24 05:26:44.037026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.037173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.037197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.074 qpair failed and we were unable to recover it.
00:31:07.074 [2024-04-24 05:26:44.037348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.037469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.037493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.074 qpair failed and we were unable to recover it.
00:31:07.074 [2024-04-24 05:26:44.037617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.037813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.037838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.074 qpair failed and we were unable to recover it.
00:31:07.074 [2024-04-24 05:26:44.037966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.038117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.038141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.074 qpair failed and we were unable to recover it.
00:31:07.074 [2024-04-24 05:26:44.038279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.038428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.038452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.074 qpair failed and we were unable to recover it.
00:31:07.074 [2024-04-24 05:26:44.038601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.038769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.038795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.074 qpair failed and we were unable to recover it.
00:31:07.074 [2024-04-24 05:26:44.038972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.039124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.039149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.074 qpair failed and we were unable to recover it.
00:31:07.074 [2024-04-24 05:26:44.039295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.039459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.039486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.074 qpair failed and we were unable to recover it.
00:31:07.074 [2024-04-24 05:26:44.039658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.039808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.039833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.074 qpair failed and we were unable to recover it.
00:31:07.074 [2024-04-24 05:26:44.039986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.040109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.040134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.074 qpair failed and we were unable to recover it.
00:31:07.074 [2024-04-24 05:26:44.040284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.040435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.040460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.074 qpair failed and we were unable to recover it.
00:31:07.074 [2024-04-24 05:26:44.040612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.040760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.040786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.074 qpair failed and we were unable to recover it.
00:31:07.074 [2024-04-24 05:26:44.040925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.041072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.041097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.074 qpair failed and we were unable to recover it.
00:31:07.074 [2024-04-24 05:26:44.041248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.041372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.041396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.074 qpair failed and we were unable to recover it.
00:31:07.074 [2024-04-24 05:26:44.041526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.041678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.041704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.074 qpair failed and we were unable to recover it.
00:31:07.074 [2024-04-24 05:26:44.041858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.042035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.042060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.074 qpair failed and we were unable to recover it.
00:31:07.074 [2024-04-24 05:26:44.042205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.042327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.042352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.074 qpair failed and we were unable to recover it.
00:31:07.074 [2024-04-24 05:26:44.042499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.042660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.042686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.074 qpair failed and we were unable to recover it.
00:31:07.074 [2024-04-24 05:26:44.042812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.042942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.042967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.074 qpair failed and we were unable to recover it.
00:31:07.074 [2024-04-24 05:26:44.043122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.043300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.043325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.074 qpair failed and we were unable to recover it.
00:31:07.074 [2024-04-24 05:26:44.043498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.043647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.074 [2024-04-24 05:26:44.043682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.074 qpair failed and we were unable to recover it.
00:31:07.075 [2024-04-24 05:26:44.043831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.075 [2024-04-24 05:26:44.043952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.075 [2024-04-24 05:26:44.043977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.075 qpair failed and we were unable to recover it. 00:31:07.075 [2024-04-24 05:26:44.044104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.075 [2024-04-24 05:26:44.044282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.075 [2024-04-24 05:26:44.044310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.075 qpair failed and we were unable to recover it. 00:31:07.075 [2024-04-24 05:26:44.044432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.075 [2024-04-24 05:26:44.044607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.075 [2024-04-24 05:26:44.044638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.075 qpair failed and we were unable to recover it. 00:31:07.075 [2024-04-24 05:26:44.044770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.075 [2024-04-24 05:26:44.044930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.075 [2024-04-24 05:26:44.044955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.075 qpair failed and we were unable to recover it. 
00:31:07.075 [2024-04-24 05:26:44.045140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.075 [2024-04-24 05:26:44.045266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.075 [2024-04-24 05:26:44.045290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.075 qpair failed and we were unable to recover it. 00:31:07.075 [2024-04-24 05:26:44.045467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.075 [2024-04-24 05:26:44.045613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.075 [2024-04-24 05:26:44.045648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.075 qpair failed and we were unable to recover it. 00:31:07.075 [2024-04-24 05:26:44.045783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.075 [2024-04-24 05:26:44.045915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.075 [2024-04-24 05:26:44.045940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.075 qpair failed and we were unable to recover it. 00:31:07.075 [2024-04-24 05:26:44.046092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.075 [2024-04-24 05:26:44.046253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.075 [2024-04-24 05:26:44.046277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.075 qpair failed and we were unable to recover it. 
00:31:07.075 [2024-04-24 05:26:44.046421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.075 [2024-04-24 05:26:44.046610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.075 [2024-04-24 05:26:44.046647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.075 qpair failed and we were unable to recover it.
00:31:07.075 [2024-04-24 05:26:44.046819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.075 [2024-04-24 05:26:44.046968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.075 [2024-04-24 05:26:44.046993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.075 qpair failed and we were unable to recover it.
00:31:07.075 [2024-04-24 05:26:44.047142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.075 [2024-04-24 05:26:44.047289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.075 [2024-04-24 05:26:44.047313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.075 qpair failed and we were unable to recover it.
00:31:07.075 [2024-04-24 05:26:44.047467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.075 [2024-04-24 05:26:44.047619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.075 [2024-04-24 05:26:44.047664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.075 qpair failed and we were unable to recover it.
00:31:07.075 [2024-04-24 05:26:44.047815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.075 [2024-04-24 05:26:44.047970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.075 [2024-04-24 05:26:44.047994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.075 qpair failed and we were unable to recover it.
00:31:07.075 [2024-04-24 05:26:44.048141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.075 [2024-04-24 05:26:44.048282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.075 [2024-04-24 05:26:44.048306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.075 qpair failed and we were unable to recover it.
00:31:07.075 [2024-04-24 05:26:44.048452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.075 [2024-04-24 05:26:44.048571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.075 [2024-04-24 05:26:44.048596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.075 qpair failed and we were unable to recover it.
00:31:07.075 [2024-04-24 05:26:44.048754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.075 [2024-04-24 05:26:44.048903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.075 [2024-04-24 05:26:44.048928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.075 qpair failed and we were unable to recover it.
00:31:07.075 [2024-04-24 05:26:44.049082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.075 [2024-04-24 05:26:44.049230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.075 [2024-04-24 05:26:44.049254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.075 qpair failed and we were unable to recover it.
00:31:07.075 [2024-04-24 05:26:44.049400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.075 [2024-04-24 05:26:44.049518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.075 [2024-04-24 05:26:44.049542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.075 qpair failed and we were unable to recover it.
00:31:07.075 [2024-04-24 05:26:44.049702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.075 [2024-04-24 05:26:44.049851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.075 [2024-04-24 05:26:44.049875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.075 qpair failed and we were unable to recover it.
00:31:07.075 [2024-04-24 05:26:44.049999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.075 [2024-04-24 05:26:44.050178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.075 [2024-04-24 05:26:44.050202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.075 qpair failed and we were unable to recover it.
00:31:07.075 [2024-04-24 05:26:44.050356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.075 [2024-04-24 05:26:44.050474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.075 [2024-04-24 05:26:44.050498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.075 qpair failed and we were unable to recover it.
00:31:07.075 [2024-04-24 05:26:44.050652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.075 [2024-04-24 05:26:44.050800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.075 [2024-04-24 05:26:44.050825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.075 qpair failed and we were unable to recover it.
00:31:07.075 [2024-04-24 05:26:44.051002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.075 [2024-04-24 05:26:44.051171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.075 [2024-04-24 05:26:44.051195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.075 qpair failed and we were unable to recover it.
00:31:07.075 [2024-04-24 05:26:44.051348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.075 [2024-04-24 05:26:44.051524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.075 [2024-04-24 05:26:44.051549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.075 qpair failed and we were unable to recover it.
00:31:07.075 [2024-04-24 05:26:44.051695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.051821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.051845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.076 qpair failed and we were unable to recover it.
00:31:07.076 [2024-04-24 05:26:44.052019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.052145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.052169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.076 qpair failed and we were unable to recover it.
00:31:07.076 [2024-04-24 05:26:44.052319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.052477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.052502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.076 qpair failed and we were unable to recover it.
00:31:07.076 [2024-04-24 05:26:44.052666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.052794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.052819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.076 qpair failed and we were unable to recover it.
00:31:07.076 [2024-04-24 05:26:44.052972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.053152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.053176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.076 qpair failed and we were unable to recover it.
00:31:07.076 [2024-04-24 05:26:44.053303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.053412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.053436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.076 qpair failed and we were unable to recover it.
00:31:07.076 [2024-04-24 05:26:44.053612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.053766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.053791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.076 qpair failed and we were unable to recover it.
00:31:07.076 [2024-04-24 05:26:44.053967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.054115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.054140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.076 qpair failed and we were unable to recover it.
00:31:07.076 [2024-04-24 05:26:44.054265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.054413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.054437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.076 qpair failed and we were unable to recover it.
00:31:07.076 [2024-04-24 05:26:44.054613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.054748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.054773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.076 qpair failed and we were unable to recover it.
00:31:07.076 [2024-04-24 05:26:44.054932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.055046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.055070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.076 qpair failed and we were unable to recover it.
00:31:07.076 [2024-04-24 05:26:44.055232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.055353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.055377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.076 qpair failed and we were unable to recover it.
00:31:07.076 [2024-04-24 05:26:44.055531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.055655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.055681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.076 qpair failed and we were unable to recover it.
00:31:07.076 [2024-04-24 05:26:44.055828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.055953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.055977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.076 qpair failed and we were unable to recover it.
00:31:07.076 [2024-04-24 05:26:44.056100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.056219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.056243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.076 qpair failed and we were unable to recover it.
00:31:07.076 [2024-04-24 05:26:44.056392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.056537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.056561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.076 qpair failed and we were unable to recover it.
00:31:07.076 [2024-04-24 05:26:44.056714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.056860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.056885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.076 qpair failed and we were unable to recover it.
00:31:07.076 [2024-04-24 05:26:44.057039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.057174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.057200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.076 qpair failed and we were unable to recover it.
00:31:07.076 [2024-04-24 05:26:44.057345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.057523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.057547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.076 qpair failed and we were unable to recover it.
00:31:07.076 [2024-04-24 05:26:44.057692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.057857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.057881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.076 qpair failed and we were unable to recover it.
00:31:07.076 [2024-04-24 05:26:44.058037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.058187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.058212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.076 qpair failed and we were unable to recover it.
00:31:07.076 [2024-04-24 05:26:44.058366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.058488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.058512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.076 qpair failed and we were unable to recover it.
00:31:07.076 [2024-04-24 05:26:44.058666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.058819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.058844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.076 qpair failed and we were unable to recover it.
00:31:07.076 [2024-04-24 05:26:44.058960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.059075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.059100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.076 qpair failed and we were unable to recover it.
00:31:07.076 [2024-04-24 05:26:44.059255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.059428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.059452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.076 qpair failed and we were unable to recover it.
00:31:07.076 [2024-04-24 05:26:44.059606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.059734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.059759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.076 qpair failed and we were unable to recover it.
00:31:07.076 [2024-04-24 05:26:44.059878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.060006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.060030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.076 qpair failed and we were unable to recover it.
00:31:07.076 [2024-04-24 05:26:44.060151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.060273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.060297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.076 qpair failed and we were unable to recover it.
00:31:07.076 [2024-04-24 05:26:44.060416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.060605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.060636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.076 qpair failed and we were unable to recover it.
00:31:07.076 [2024-04-24 05:26:44.060791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.076 [2024-04-24 05:26:44.060914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.060938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.077 qpair failed and we were unable to recover it.
00:31:07.077 [2024-04-24 05:26:44.061057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.061205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.061233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.077 qpair failed and we were unable to recover it.
00:31:07.077 [2024-04-24 05:26:44.061360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.061536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.061561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.077 qpair failed and we were unable to recover it.
00:31:07.077 [2024-04-24 05:26:44.061692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.061816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.061850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.077 qpair failed and we were unable to recover it.
00:31:07.077 [2024-04-24 05:26:44.061980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.062111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.062147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.077 qpair failed and we were unable to recover it.
00:31:07.077 [2024-04-24 05:26:44.062339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.062469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.062494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.077 qpair failed and we were unable to recover it.
00:31:07.077 [2024-04-24 05:26:44.062617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.062751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.062777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.077 qpair failed and we were unable to recover it.
00:31:07.077 [2024-04-24 05:26:44.062923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.063038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.063062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.077 qpair failed and we were unable to recover it.
00:31:07.077 [2024-04-24 05:26:44.063215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.063400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.063425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.077 qpair failed and we were unable to recover it.
00:31:07.077 [2024-04-24 05:26:44.063571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.063703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.063730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.077 qpair failed and we were unable to recover it.
00:31:07.077 [2024-04-24 05:26:44.063853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.064003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.064028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.077 qpair failed and we were unable to recover it.
00:31:07.077 [2024-04-24 05:26:44.064189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.064361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.064386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.077 qpair failed and we were unable to recover it.
00:31:07.077 [2024-04-24 05:26:44.064538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.064688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.064714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.077 qpair failed and we were unable to recover it.
00:31:07.077 [2024-04-24 05:26:44.064866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.064990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.065016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.077 qpair failed and we were unable to recover it.
00:31:07.077 [2024-04-24 05:26:44.065147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.065299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.065324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.077 qpair failed and we were unable to recover it.
00:31:07.077 [2024-04-24 05:26:44.065479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.065650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.065676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.077 qpair failed and we were unable to recover it.
00:31:07.077 [2024-04-24 05:26:44.065828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.065974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.065998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.077 qpair failed and we were unable to recover it.
00:31:07.077 [2024-04-24 05:26:44.066113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.066229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.066255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.077 qpair failed and we were unable to recover it.
00:31:07.077 [2024-04-24 05:26:44.066432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.066555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.066579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.077 qpair failed and we were unable to recover it.
00:31:07.077 [2024-04-24 05:26:44.066705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.066859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.066885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.077 qpair failed and we were unable to recover it.
00:31:07.077 [2024-04-24 05:26:44.067084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.067238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.067263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.077 qpair failed and we were unable to recover it.
00:31:07.077 [2024-04-24 05:26:44.067387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.067536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.067560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.077 qpair failed and we were unable to recover it.
00:31:07.077 [2024-04-24 05:26:44.067686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.067815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.067840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.077 qpair failed and we were unable to recover it.
00:31:07.077 [2024-04-24 05:26:44.067992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.068139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.068163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.077 qpair failed and we were unable to recover it.
00:31:07.077 [2024-04-24 05:26:44.068340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.068505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.068529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.077 qpair failed and we were unable to recover it.
00:31:07.077 [2024-04-24 05:26:44.068687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.068865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.068889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.077 qpair failed and we were unable to recover it.
00:31:07.077 [2024-04-24 05:26:44.069012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.069189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.077 [2024-04-24 05:26:44.069213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.077 qpair failed and we were unable to recover it.
00:31:07.077 [2024-04-24 05:26:44.069361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.077 [2024-04-24 05:26:44.069481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.077 [2024-04-24 05:26:44.069506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.077 qpair failed and we were unable to recover it. 00:31:07.077 [2024-04-24 05:26:44.069653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.077 [2024-04-24 05:26:44.069832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.077 [2024-04-24 05:26:44.069857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.077 qpair failed and we were unable to recover it. 00:31:07.077 [2024-04-24 05:26:44.069982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.077 [2024-04-24 05:26:44.070133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.070159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.078 qpair failed and we were unable to recover it. 00:31:07.078 [2024-04-24 05:26:44.070313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.070491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.070515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.078 qpair failed and we were unable to recover it. 
00:31:07.078 [2024-04-24 05:26:44.070668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.070794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.070819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.078 qpair failed and we were unable to recover it. 00:31:07.078 [2024-04-24 05:26:44.070941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.071092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.071116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.078 qpair failed and we were unable to recover it. 00:31:07.078 [2024-04-24 05:26:44.071246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.071392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.071417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.078 qpair failed and we were unable to recover it. 00:31:07.078 [2024-04-24 05:26:44.071546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.071694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.071719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.078 qpair failed and we were unable to recover it. 
00:31:07.078 [2024-04-24 05:26:44.071868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.072017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.072042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.078 qpair failed and we were unable to recover it. 00:31:07.078 [2024-04-24 05:26:44.072203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.072352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.072377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.078 qpair failed and we were unable to recover it. 00:31:07.078 [2024-04-24 05:26:44.072501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.072639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.072664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.078 qpair failed and we were unable to recover it. 00:31:07.078 [2024-04-24 05:26:44.072813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.072945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.072969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.078 qpair failed and we were unable to recover it. 
00:31:07.078 [2024-04-24 05:26:44.073119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.073247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.073273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.078 qpair failed and we were unable to recover it. 00:31:07.078 [2024-04-24 05:26:44.073446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.073570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.073595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.078 qpair failed and we were unable to recover it. 00:31:07.078 [2024-04-24 05:26:44.073724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.073847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.073872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.078 qpair failed and we were unable to recover it. 00:31:07.078 [2024-04-24 05:26:44.074020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.074151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.074175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.078 qpair failed and we were unable to recover it. 
00:31:07.078 [2024-04-24 05:26:44.074295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.074445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.074469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.078 qpair failed and we were unable to recover it. 00:31:07.078 [2024-04-24 05:26:44.074622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.074761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.074786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.078 qpair failed and we were unable to recover it. 00:31:07.078 [2024-04-24 05:26:44.074930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.075077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.075101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.078 qpair failed and we were unable to recover it. 00:31:07.078 [2024-04-24 05:26:44.075254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.075414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.075455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.078 qpair failed and we were unable to recover it. 
00:31:07.078 [2024-04-24 05:26:44.075613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.075759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.075785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.078 qpair failed and we were unable to recover it. 00:31:07.078 [2024-04-24 05:26:44.075959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.076109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.076133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.078 qpair failed and we were unable to recover it. 00:31:07.078 [2024-04-24 05:26:44.076299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.076417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.076442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.078 qpair failed and we were unable to recover it. 00:31:07.078 [2024-04-24 05:26:44.076566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.076738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.076763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.078 qpair failed and we were unable to recover it. 
00:31:07.078 [2024-04-24 05:26:44.076885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.077039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.077063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.078 qpair failed and we were unable to recover it. 00:31:07.078 [2024-04-24 05:26:44.077212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.077356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.077381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.078 qpair failed and we were unable to recover it. 00:31:07.078 [2024-04-24 05:26:44.077528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.077661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.077687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.078 qpair failed and we were unable to recover it. 00:31:07.078 [2024-04-24 05:26:44.077841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.077965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.077993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.078 qpair failed and we were unable to recover it. 
00:31:07.078 [2024-04-24 05:26:44.078125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.078244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.078269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.078 qpair failed and we were unable to recover it. 00:31:07.078 [2024-04-24 05:26:44.078395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.078543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.078568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.078 qpair failed and we were unable to recover it. 00:31:07.078 [2024-04-24 05:26:44.078710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.078865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.078889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.078 qpair failed and we were unable to recover it. 00:31:07.078 [2024-04-24 05:26:44.079016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.078 [2024-04-24 05:26:44.079166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.079190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.079 qpair failed and we were unable to recover it. 
00:31:07.079 [2024-04-24 05:26:44.079366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.079538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.079562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.079 qpair failed and we were unable to recover it. 00:31:07.079 [2024-04-24 05:26:44.079698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.079874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.079899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.079 qpair failed and we were unable to recover it. 00:31:07.079 [2024-04-24 05:26:44.080055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.080210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.080234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.079 qpair failed and we were unable to recover it. 00:31:07.079 [2024-04-24 05:26:44.080388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.080531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.080555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.079 qpair failed and we were unable to recover it. 
00:31:07.079 [2024-04-24 05:26:44.080704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.080887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.080911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.079 qpair failed and we were unable to recover it. 00:31:07.079 [2024-04-24 05:26:44.081058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.081177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.081203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.079 qpair failed and we were unable to recover it. 00:31:07.079 [2024-04-24 05:26:44.081331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.081474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.081498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.079 qpair failed and we were unable to recover it. 00:31:07.079 [2024-04-24 05:26:44.081677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.081804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.081828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.079 qpair failed and we were unable to recover it. 
00:31:07.079 [2024-04-24 05:26:44.081980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.082125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.082150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.079 qpair failed and we were unable to recover it. 00:31:07.079 [2024-04-24 05:26:44.082274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.082426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.082451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.079 qpair failed and we were unable to recover it. 00:31:07.079 [2024-04-24 05:26:44.082599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.082788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.082814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.079 qpair failed and we were unable to recover it. 00:31:07.079 [2024-04-24 05:26:44.082957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.083077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.083102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.079 qpair failed and we were unable to recover it. 
00:31:07.079 [2024-04-24 05:26:44.083274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.083397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.083421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.079 qpair failed and we were unable to recover it. 00:31:07.079 [2024-04-24 05:26:44.083598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.083744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.083770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.079 qpair failed and we were unable to recover it. 00:31:07.079 [2024-04-24 05:26:44.083915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.084034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.084059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.079 qpair failed and we were unable to recover it. 00:31:07.079 [2024-04-24 05:26:44.084203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.084379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.084403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.079 qpair failed and we were unable to recover it. 
00:31:07.079 [2024-04-24 05:26:44.084555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.084700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.084725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.079 qpair failed and we were unable to recover it. 00:31:07.079 [2024-04-24 05:26:44.084878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.085023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.085048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.079 qpair failed and we were unable to recover it. 00:31:07.079 [2024-04-24 05:26:44.085177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.085318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.085343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.079 qpair failed and we were unable to recover it. 00:31:07.079 [2024-04-24 05:26:44.085495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.085644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.085669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.079 qpair failed and we were unable to recover it. 
00:31:07.079 [2024-04-24 05:26:44.085820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.085994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.086019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.079 qpair failed and we were unable to recover it. 00:31:07.079 [2024-04-24 05:26:44.086142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.086263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.086287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.079 qpair failed and we were unable to recover it. 00:31:07.079 [2024-04-24 05:26:44.086417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.086625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.086662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.079 qpair failed and we were unable to recover it. 00:31:07.079 [2024-04-24 05:26:44.086791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.086969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.086993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.079 qpair failed and we were unable to recover it. 
00:31:07.079 [2024-04-24 05:26:44.087142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.087264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.079 [2024-04-24 05:26:44.087288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.079 qpair failed and we were unable to recover it. 00:31:07.080 [2024-04-24 05:26:44.087463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.080 [2024-04-24 05:26:44.087616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.080 [2024-04-24 05:26:44.087653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.080 qpair failed and we were unable to recover it. 00:31:07.080 [2024-04-24 05:26:44.087787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.080 [2024-04-24 05:26:44.087961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.080 [2024-04-24 05:26:44.087985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.080 qpair failed and we were unable to recover it. 00:31:07.080 [2024-04-24 05:26:44.088136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.080 [2024-04-24 05:26:44.088287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.080 [2024-04-24 05:26:44.088311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.080 qpair failed and we were unable to recover it. 
00:31:07.080 [2024-04-24 05:26:44.088439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.080 [2024-04-24 05:26:44.088584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.080 [2024-04-24 05:26:44.088608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.080 qpair failed and we were unable to recover it. 00:31:07.080 [2024-04-24 05:26:44.088768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.080 [2024-04-24 05:26:44.088892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.080 [2024-04-24 05:26:44.088916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.080 qpair failed and we were unable to recover it. 00:31:07.080 [2024-04-24 05:26:44.089095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.080 [2024-04-24 05:26:44.089243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.080 [2024-04-24 05:26:44.089269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.080 qpair failed and we were unable to recover it. 00:31:07.080 [2024-04-24 05:26:44.089420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.080 [2024-04-24 05:26:44.089596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.080 [2024-04-24 05:26:44.089621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.080 qpair failed and we were unable to recover it. 
00:31:07.080 [2024-04-24 05:26:44.089754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.080 [2024-04-24 05:26:44.089876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.080 [2024-04-24 05:26:44.089900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.080 qpair failed and we were unable to recover it.
[... the identical connect()/qpair failure cycle (posix_sock_create errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x18ebe40 at 10.0.0.2:4420, "qpair failed and we were unable to recover it.") repeats continuously from 2024-04-24 05:26:44.089 through 05:26:44.117 ...]
00:31:07.083 [2024-04-24 05:26:44.117386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.083 [2024-04-24 05:26:44.117531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.083 [2024-04-24 05:26:44.117555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.083 qpair failed and we were unable to recover it. 00:31:07.083 [2024-04-24 05:26:44.117708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.083 [2024-04-24 05:26:44.117826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.083 [2024-04-24 05:26:44.117851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.083 qpair failed and we were unable to recover it. 00:31:07.083 [2024-04-24 05:26:44.117972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.083 [2024-04-24 05:26:44.118148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.083 [2024-04-24 05:26:44.118172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.083 qpair failed and we were unable to recover it. 00:31:07.083 [2024-04-24 05:26:44.118294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.083 [2024-04-24 05:26:44.118444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.083 [2024-04-24 05:26:44.118470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.083 qpair failed and we were unable to recover it. 
00:31:07.083 [2024-04-24 05:26:44.118650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.083 [2024-04-24 05:26:44.118805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.083 [2024-04-24 05:26:44.118830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.083 qpair failed and we were unable to recover it. 00:31:07.083 [2024-04-24 05:26:44.118980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.083 [2024-04-24 05:26:44.119094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.083 [2024-04-24 05:26:44.119118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.083 qpair failed and we were unable to recover it. 00:31:07.083 [2024-04-24 05:26:44.119287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.083 [2024-04-24 05:26:44.119472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.083 [2024-04-24 05:26:44.119497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.083 qpair failed and we were unable to recover it. 00:31:07.083 [2024-04-24 05:26:44.119658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.083 [2024-04-24 05:26:44.119778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.083 [2024-04-24 05:26:44.119803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.083 qpair failed and we were unable to recover it. 
00:31:07.083 [2024-04-24 05:26:44.119929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.083 [2024-04-24 05:26:44.120087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.083 [2024-04-24 05:26:44.120112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.083 qpair failed and we were unable to recover it. 00:31:07.083 [2024-04-24 05:26:44.120249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.083 [2024-04-24 05:26:44.120401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.083 [2024-04-24 05:26:44.120426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.083 qpair failed and we were unable to recover it. 00:31:07.083 [2024-04-24 05:26:44.120545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.083 [2024-04-24 05:26:44.120687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.083 [2024-04-24 05:26:44.120712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.083 qpair failed and we were unable to recover it. 00:31:07.083 [2024-04-24 05:26:44.120838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.083 [2024-04-24 05:26:44.120960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.083 [2024-04-24 05:26:44.120986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.083 qpair failed and we were unable to recover it. 
00:31:07.083 [2024-04-24 05:26:44.121166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.083 [2024-04-24 05:26:44.121287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.083 [2024-04-24 05:26:44.121311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.083 qpair failed and we were unable to recover it. 00:31:07.083 [2024-04-24 05:26:44.121433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.083 [2024-04-24 05:26:44.121557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.083 [2024-04-24 05:26:44.121582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.083 qpair failed and we were unable to recover it. 00:31:07.083 [2024-04-24 05:26:44.121775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.083 [2024-04-24 05:26:44.121896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.083 [2024-04-24 05:26:44.121920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.083 qpair failed and we were unable to recover it. 00:31:07.083 [2024-04-24 05:26:44.122068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.083 [2024-04-24 05:26:44.122198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.083 [2024-04-24 05:26:44.122222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.083 qpair failed and we were unable to recover it. 
00:31:07.083 [2024-04-24 05:26:44.122369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.083 [2024-04-24 05:26:44.122487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.083 [2024-04-24 05:26:44.122512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.083 qpair failed and we were unable to recover it. 00:31:07.083 [2024-04-24 05:26:44.122689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.083 [2024-04-24 05:26:44.122817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.083 [2024-04-24 05:26:44.122843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.083 qpair failed and we were unable to recover it. 00:31:07.083 [2024-04-24 05:26:44.122996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.083 [2024-04-24 05:26:44.123123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.083 [2024-04-24 05:26:44.123147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.083 qpair failed and we were unable to recover it. 00:31:07.083 [2024-04-24 05:26:44.123322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.083 [2024-04-24 05:26:44.123450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.083 [2024-04-24 05:26:44.123474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.083 qpair failed and we were unable to recover it. 
00:31:07.083 [2024-04-24 05:26:44.123597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.083 [2024-04-24 05:26:44.123717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.123742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.084 qpair failed and we were unable to recover it. 00:31:07.084 [2024-04-24 05:26:44.123929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.124051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.124077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.084 qpair failed and we were unable to recover it. 00:31:07.084 [2024-04-24 05:26:44.124251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.124373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.124397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.084 qpair failed and we were unable to recover it. 00:31:07.084 [2024-04-24 05:26:44.124552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.124699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.124725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.084 qpair failed and we were unable to recover it. 
00:31:07.084 [2024-04-24 05:26:44.124850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.124997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.125022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.084 qpair failed and we were unable to recover it. 00:31:07.084 [2024-04-24 05:26:44.125144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.125267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.125291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.084 qpair failed and we were unable to recover it. 00:31:07.084 [2024-04-24 05:26:44.125451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.125570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.125594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.084 qpair failed and we were unable to recover it. 00:31:07.084 [2024-04-24 05:26:44.125750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.125877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.125903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.084 qpair failed and we were unable to recover it. 
00:31:07.084 [2024-04-24 05:26:44.126059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.126182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.126206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.084 qpair failed and we were unable to recover it. 00:31:07.084 [2024-04-24 05:26:44.126352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.126499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.126525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.084 qpair failed and we were unable to recover it. 00:31:07.084 [2024-04-24 05:26:44.126697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.126850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.126875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.084 qpair failed and we were unable to recover it. 00:31:07.084 [2024-04-24 05:26:44.127044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.127192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.127216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.084 qpair failed and we were unable to recover it. 
00:31:07.084 [2024-04-24 05:26:44.127390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.127505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.127530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.084 qpair failed and we were unable to recover it. 00:31:07.084 [2024-04-24 05:26:44.127679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.127833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.127858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.084 qpair failed and we were unable to recover it. 00:31:07.084 [2024-04-24 05:26:44.127984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.128133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.128157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.084 qpair failed and we were unable to recover it. 00:31:07.084 [2024-04-24 05:26:44.128335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.128469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.128497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.084 qpair failed and we were unable to recover it. 
00:31:07.084 [2024-04-24 05:26:44.128651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.128794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.128819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.084 qpair failed and we were unable to recover it. 00:31:07.084 [2024-04-24 05:26:44.128971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.129091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.129115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.084 qpair failed and we were unable to recover it. 00:31:07.084 [2024-04-24 05:26:44.129238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.129387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.129416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.084 qpair failed and we were unable to recover it. 00:31:07.084 [2024-04-24 05:26:44.129548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.129704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.129729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.084 qpair failed and we were unable to recover it. 
00:31:07.084 [2024-04-24 05:26:44.129904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.130032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.130057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.084 qpair failed and we were unable to recover it. 00:31:07.084 [2024-04-24 05:26:44.130240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.130386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.130410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.084 qpair failed and we were unable to recover it. 00:31:07.084 [2024-04-24 05:26:44.130527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.130676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.130702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.084 qpair failed and we were unable to recover it. 00:31:07.084 [2024-04-24 05:26:44.130848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.131000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.131024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.084 qpair failed and we were unable to recover it. 
00:31:07.084 [2024-04-24 05:26:44.131154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.131296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.084 [2024-04-24 05:26:44.131320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.084 qpair failed and we were unable to recover it. 00:31:07.084 [2024-04-24 05:26:44.131451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.085 [2024-04-24 05:26:44.131598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.085 [2024-04-24 05:26:44.131622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.085 qpair failed and we were unable to recover it. 00:31:07.085 [2024-04-24 05:26:44.131761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.085 [2024-04-24 05:26:44.131891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.085 [2024-04-24 05:26:44.131915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.085 qpair failed and we were unable to recover it. 00:31:07.085 [2024-04-24 05:26:44.132041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.085 [2024-04-24 05:26:44.132162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.085 [2024-04-24 05:26:44.132188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.085 qpair failed and we were unable to recover it. 
00:31:07.085 [2024-04-24 05:26:44.132331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.085 [2024-04-24 05:26:44.132480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.085 [2024-04-24 05:26:44.132504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.085 qpair failed and we were unable to recover it. 00:31:07.085 [2024-04-24 05:26:44.132657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.085 [2024-04-24 05:26:44.132802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.085 [2024-04-24 05:26:44.132827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.085 qpair failed and we were unable to recover it. 00:31:07.085 [2024-04-24 05:26:44.132947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.085 [2024-04-24 05:26:44.133100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.085 [2024-04-24 05:26:44.133126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.085 qpair failed and we were unable to recover it. 00:31:07.085 [2024-04-24 05:26:44.133245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.085 [2024-04-24 05:26:44.133393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.085 [2024-04-24 05:26:44.133417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.085 qpair failed and we were unable to recover it. 
00:31:07.085 [2024-04-24 05:26:44.133563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.085 [2024-04-24 05:26:44.133716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.085 [2024-04-24 05:26:44.133740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.085 qpair failed and we were unable to recover it. 00:31:07.085 [2024-04-24 05:26:44.133892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.085 [2024-04-24 05:26:44.134039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.085 [2024-04-24 05:26:44.134064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.085 qpair failed and we were unable to recover it. 00:31:07.085 [2024-04-24 05:26:44.134234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.085 [2024-04-24 05:26:44.134383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.085 [2024-04-24 05:26:44.134408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.085 qpair failed and we were unable to recover it. 00:31:07.085 [2024-04-24 05:26:44.134558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.085 [2024-04-24 05:26:44.134686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.085 [2024-04-24 05:26:44.134712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.085 qpair failed and we were unable to recover it. 
00:31:07.085 [2024-04-24 05:26:44.134891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.085 [2024-04-24 05:26:44.135017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.085 [2024-04-24 05:26:44.135042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.085 qpair failed and we were unable to recover it.
00:31:07.085 [2024-04-24 05:26:44.135224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.085 [2024-04-24 05:26:44.135381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.085 [2024-04-24 05:26:44.135405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.085 qpair failed and we were unable to recover it.
00:31:07.085 [2024-04-24 05:26:44.135556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.085 [2024-04-24 05:26:44.135705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.085 [2024-04-24 05:26:44.135732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.085 qpair failed and we were unable to recover it.
00:31:07.085 [2024-04-24 05:26:44.135912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.085 [2024-04-24 05:26:44.136037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.085 [2024-04-24 05:26:44.136062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.085 qpair failed and we were unable to recover it.
00:31:07.085 [2024-04-24 05:26:44.136212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.085 [2024-04-24 05:26:44.136336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.085 [2024-04-24 05:26:44.136360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.085 qpair failed and we were unable to recover it.
00:31:07.085 [2024-04-24 05:26:44.136485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.085 [2024-04-24 05:26:44.136641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.085 [2024-04-24 05:26:44.136666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.085 qpair failed and we were unable to recover it.
00:31:07.085 [2024-04-24 05:26:44.136817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.085 [2024-04-24 05:26:44.136938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.085 [2024-04-24 05:26:44.136962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.085 qpair failed and we were unable to recover it.
00:31:07.085 [2024-04-24 05:26:44.137113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.085 [2024-04-24 05:26:44.137265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.085 [2024-04-24 05:26:44.137289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.085 qpair failed and we were unable to recover it.
00:31:07.085 [2024-04-24 05:26:44.137465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.085 [2024-04-24 05:26:44.137612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.085 [2024-04-24 05:26:44.137644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.085 qpair failed and we were unable to recover it.
00:31:07.085 [2024-04-24 05:26:44.137770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.085 [2024-04-24 05:26:44.137918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.085 [2024-04-24 05:26:44.137942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.085 qpair failed and we were unable to recover it.
00:31:07.085 [2024-04-24 05:26:44.138067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.085 [2024-04-24 05:26:44.138213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.085 [2024-04-24 05:26:44.138238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.085 qpair failed and we were unable to recover it.
00:31:07.085 [2024-04-24 05:26:44.138359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.085 [2024-04-24 05:26:44.138533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.085 [2024-04-24 05:26:44.138558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.085 qpair failed and we were unable to recover it.
00:31:07.085 [2024-04-24 05:26:44.138685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.085 [2024-04-24 05:26:44.138833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.085 [2024-04-24 05:26:44.138858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.085 qpair failed and we were unable to recover it.
00:31:07.085 [2024-04-24 05:26:44.138983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.085 [2024-04-24 05:26:44.139120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.085 [2024-04-24 05:26:44.139144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.085 qpair failed and we were unable to recover it.
00:31:07.085 [2024-04-24 05:26:44.139299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.085 [2024-04-24 05:26:44.139450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.085 [2024-04-24 05:26:44.139475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.085 qpair failed and we were unable to recover it.
00:31:07.085 [2024-04-24 05:26:44.139621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.085 [2024-04-24 05:26:44.139753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.085 [2024-04-24 05:26:44.139779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.085 qpair failed and we were unable to recover it.
00:31:07.085 [2024-04-24 05:26:44.139904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.085 [2024-04-24 05:26:44.140078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.085 [2024-04-24 05:26:44.140103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.085 qpair failed and we were unable to recover it.
00:31:07.085 [2024-04-24 05:26:44.140259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.085 [2024-04-24 05:26:44.140437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.085 [2024-04-24 05:26:44.140462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.085 qpair failed and we were unable to recover it.
00:31:07.085 [2024-04-24 05:26:44.140612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.085 [2024-04-24 05:26:44.140785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.140811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.086 qpair failed and we were unable to recover it.
00:31:07.086 [2024-04-24 05:26:44.140964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.141089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.141113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.086 qpair failed and we were unable to recover it.
00:31:07.086 [2024-04-24 05:26:44.141291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.141468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.141493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.086 qpair failed and we were unable to recover it.
00:31:07.086 [2024-04-24 05:26:44.141646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.141777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.141803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.086 qpair failed and we were unable to recover it.
00:31:07.086 [2024-04-24 05:26:44.141968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.142086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.142111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.086 qpair failed and we were unable to recover it.
00:31:07.086 [2024-04-24 05:26:44.142230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.142387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.142411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.086 qpair failed and we were unable to recover it.
00:31:07.086 [2024-04-24 05:26:44.142534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.142660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.142685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.086 qpair failed and we were unable to recover it.
00:31:07.086 [2024-04-24 05:26:44.142813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.142929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.142953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.086 qpair failed and we were unable to recover it.
00:31:07.086 [2024-04-24 05:26:44.143106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.143284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.143308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.086 qpair failed and we were unable to recover it.
00:31:07.086 [2024-04-24 05:26:44.143430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.143582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.143607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.086 qpair failed and we were unable to recover it.
00:31:07.086 [2024-04-24 05:26:44.143773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.143921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.143946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.086 qpair failed and we were unable to recover it.
00:31:07.086 [2024-04-24 05:26:44.144075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.144224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.144249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.086 qpair failed and we were unable to recover it.
00:31:07.086 [2024-04-24 05:26:44.144376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.144528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.144553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.086 qpair failed and we were unable to recover it.
00:31:07.086 [2024-04-24 05:26:44.144734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.144855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.144881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.086 qpair failed and we were unable to recover it.
00:31:07.086 [2024-04-24 05:26:44.145031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.145151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.145175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.086 qpair failed and we were unable to recover it.
00:31:07.086 [2024-04-24 05:26:44.145303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.145425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.145454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.086 qpair failed and we were unable to recover it.
00:31:07.086 [2024-04-24 05:26:44.145604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.145765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.145797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.086 qpair failed and we were unable to recover it.
00:31:07.086 [2024-04-24 05:26:44.145967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.146093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.146118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.086 qpair failed and we were unable to recover it.
00:31:07.086 [2024-04-24 05:26:44.146250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.146392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.146417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.086 qpair failed and we were unable to recover it.
00:31:07.086 [2024-04-24 05:26:44.146594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.146754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.146780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.086 qpair failed and we were unable to recover it.
00:31:07.086 [2024-04-24 05:26:44.146901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.147059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.147084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.086 qpair failed and we were unable to recover it.
00:31:07.086 [2024-04-24 05:26:44.147206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.147356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.147380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.086 qpair failed and we were unable to recover it.
00:31:07.086 [2024-04-24 05:26:44.147526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.147648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.147674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.086 qpair failed and we were unable to recover it.
00:31:07.086 [2024-04-24 05:26:44.147824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.147988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.148012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.086 qpair failed and we were unable to recover it.
00:31:07.086 [2024-04-24 05:26:44.148187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.148306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.148331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.086 qpair failed and we were unable to recover it.
00:31:07.086 [2024-04-24 05:26:44.148481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.148603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.148640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.086 qpair failed and we were unable to recover it.
00:31:07.086 [2024-04-24 05:26:44.148799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.148923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.148949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.086 qpair failed and we were unable to recover it.
00:31:07.086 [2024-04-24 05:26:44.149074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.149222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.149246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.086 qpair failed and we were unable to recover it.
00:31:07.086 [2024-04-24 05:26:44.149371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.149537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.149562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.086 qpair failed and we were unable to recover it.
00:31:07.086 [2024-04-24 05:26:44.149692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.149845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.086 [2024-04-24 05:26:44.149871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.087 qpair failed and we were unable to recover it.
00:31:07.087 [2024-04-24 05:26:44.149999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.150115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.150140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.087 qpair failed and we were unable to recover it.
00:31:07.087 [2024-04-24 05:26:44.150256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.150379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.150403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.087 qpair failed and we were unable to recover it.
00:31:07.087 [2024-04-24 05:26:44.150554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.150740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.150766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.087 qpair failed and we were unable to recover it.
00:31:07.087 [2024-04-24 05:26:44.150924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.151081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.151105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.087 qpair failed and we were unable to recover it.
00:31:07.087 [2024-04-24 05:26:44.151227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.151344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.151369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.087 qpair failed and we were unable to recover it.
00:31:07.087 [2024-04-24 05:26:44.151516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.151637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.151666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.087 qpair failed and we were unable to recover it.
00:31:07.087 [2024-04-24 05:26:44.151826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.151983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.152008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.087 qpair failed and we were unable to recover it.
00:31:07.087 [2024-04-24 05:26:44.152183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.152301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.152326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.087 qpair failed and we were unable to recover it.
00:31:07.087 [2024-04-24 05:26:44.152440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.152615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.152664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.087 qpair failed and we were unable to recover it.
00:31:07.087 [2024-04-24 05:26:44.152785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.152938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.152963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.087 qpair failed and we were unable to recover it.
00:31:07.087 [2024-04-24 05:26:44.153088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.153205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.153230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.087 qpair failed and we were unable to recover it.
00:31:07.087 [2024-04-24 05:26:44.153345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.153472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.153497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.087 qpair failed and we were unable to recover it.
00:31:07.087 [2024-04-24 05:26:44.153619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.153747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.153771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.087 qpair failed and we were unable to recover it.
00:31:07.087 [2024-04-24 05:26:44.153927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.154075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.154100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.087 qpair failed and we were unable to recover it.
00:31:07.087 [2024-04-24 05:26:44.154219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.154341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.154366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.087 qpair failed and we were unable to recover it.
00:31:07.087 [2024-04-24 05:26:44.154487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.154645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.154671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.087 qpair failed and we were unable to recover it.
00:31:07.087 [2024-04-24 05:26:44.154794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.154975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.155000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.087 qpair failed and we were unable to recover it.
00:31:07.087 [2024-04-24 05:26:44.155118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.155297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.155321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.087 qpair failed and we were unable to recover it.
00:31:07.087 [2024-04-24 05:26:44.155479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.155600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.155624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.087 qpair failed and we were unable to recover it.
00:31:07.087 [2024-04-24 05:26:44.155790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.155944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.155970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.087 qpair failed and we were unable to recover it.
00:31:07.087 [2024-04-24 05:26:44.156127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.156253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.156277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.087 qpair failed and we were unable to recover it.
00:31:07.087 [2024-04-24 05:26:44.156450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.156603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.156640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.087 qpair failed and we were unable to recover it.
00:31:07.087 [2024-04-24 05:26:44.156765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.156911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.156936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.087 qpair failed and we were unable to recover it.
00:31:07.087 [2024-04-24 05:26:44.157066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.157218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.157241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.087 qpair failed and we were unable to recover it.
00:31:07.087 [2024-04-24 05:26:44.157398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.157515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.157540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.087 qpair failed and we were unable to recover it.
00:31:07.087 [2024-04-24 05:26:44.157720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.157864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.157889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.087 qpair failed and we were unable to recover it.
00:31:07.087 [2024-04-24 05:26:44.158038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.158223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.158248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.087 qpair failed and we were unable to recover it.
00:31:07.087 [2024-04-24 05:26:44.158418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.158538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.158562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.087 qpair failed and we were unable to recover it.
00:31:07.087 [2024-04-24 05:26:44.158680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.158832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.087 [2024-04-24 05:26:44.158856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.087 qpair failed and we were unable to recover it.
00:31:07.088 [2024-04-24 05:26:44.159014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.088 [2024-04-24 05:26:44.159132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.088 [2024-04-24 05:26:44.159157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.088 qpair failed and we were unable to recover it.
00:31:07.088 [2024-04-24 05:26:44.159304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.088 [2024-04-24 05:26:44.159480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.088 [2024-04-24 05:26:44.159504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.088 qpair failed and we were unable to recover it.
00:31:07.088 [2024-04-24 05:26:44.159654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.088 [2024-04-24 05:26:44.159814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.088 [2024-04-24 05:26:44.159839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.088 qpair failed and we were unable to recover it.
00:31:07.088 [2024-04-24 05:26:44.159969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.088 [2024-04-24 05:26:44.160093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.088 [2024-04-24 05:26:44.160118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.088 qpair failed and we were unable to recover it.
00:31:07.088 [2024-04-24 05:26:44.160266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.088 [2024-04-24 05:26:44.160394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.088 [2024-04-24 05:26:44.160419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.088 qpair failed and we were unable to recover it.
00:31:07.088 [2024-04-24 05:26:44.160543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.088 [2024-04-24 05:26:44.160690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.088 [2024-04-24 05:26:44.160716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.088 qpair failed and we were unable to recover it.
00:31:07.088 [2024-04-24 05:26:44.160830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.088 [2024-04-24 05:26:44.160949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.088 [2024-04-24 05:26:44.160973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.088 qpair failed and we were unable to recover it.
00:31:07.088 [2024-04-24 05:26:44.161121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.088 [2024-04-24 05:26:44.161278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.088 [2024-04-24 05:26:44.161303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.088 qpair failed and we were unable to recover it.
00:31:07.088 [2024-04-24 05:26:44.161435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.088 [2024-04-24 05:26:44.161586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.088 [2024-04-24 05:26:44.161610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.088 qpair failed and we were unable to recover it.
00:31:07.088 [2024-04-24 05:26:44.161740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.088 [2024-04-24 05:26:44.161891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.088 [2024-04-24 05:26:44.161915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.088 qpair failed and we were unable to recover it.
00:31:07.088 [2024-04-24 05:26:44.162035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.088 [2024-04-24 05:26:44.162162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.088 [2024-04-24 05:26:44.162186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.088 qpair failed and we were unable to recover it.
00:31:07.088 [2024-04-24 05:26:44.162313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.088 [2024-04-24 05:26:44.162461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.088 [2024-04-24 05:26:44.162485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.088 qpair failed and we were unable to recover it. 00:31:07.088 [2024-04-24 05:26:44.162661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.088 [2024-04-24 05:26:44.162790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.088 [2024-04-24 05:26:44.162815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.088 qpair failed and we were unable to recover it. 00:31:07.088 [2024-04-24 05:26:44.162966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.088 [2024-04-24 05:26:44.163108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.088 [2024-04-24 05:26:44.163133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.088 qpair failed and we were unable to recover it. 00:31:07.088 [2024-04-24 05:26:44.163255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.088 [2024-04-24 05:26:44.163377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.088 [2024-04-24 05:26:44.163403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.088 qpair failed and we were unable to recover it. 
00:31:07.088 [2024-04-24 05:26:44.163552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.088 [2024-04-24 05:26:44.163704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.088 [2024-04-24 05:26:44.163730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.088 qpair failed and we were unable to recover it. 00:31:07.088 [2024-04-24 05:26:44.163912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.088 [2024-04-24 05:26:44.164034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.088 [2024-04-24 05:26:44.164058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.088 qpair failed and we were unable to recover it. 00:31:07.088 [2024-04-24 05:26:44.164192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.088 [2024-04-24 05:26:44.164307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.088 [2024-04-24 05:26:44.164336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.088 qpair failed and we were unable to recover it. 00:31:07.088 [2024-04-24 05:26:44.164520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.088 [2024-04-24 05:26:44.164676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.088 [2024-04-24 05:26:44.164702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.088 qpair failed and we were unable to recover it. 
00:31:07.088 [2024-04-24 05:26:44.164818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.088 [2024-04-24 05:26:44.164944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.088 [2024-04-24 05:26:44.164969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.088 qpair failed and we were unable to recover it. 00:31:07.088 [2024-04-24 05:26:44.165083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.088 [2024-04-24 05:26:44.165206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.088 [2024-04-24 05:26:44.165230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.088 qpair failed and we were unable to recover it. 00:31:07.088 [2024-04-24 05:26:44.165355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.088 [2024-04-24 05:26:44.165529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.088 [2024-04-24 05:26:44.165553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.088 qpair failed and we were unable to recover it. 00:31:07.088 [2024-04-24 05:26:44.165702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.088 [2024-04-24 05:26:44.165854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.088 [2024-04-24 05:26:44.165879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.088 qpair failed and we were unable to recover it. 
00:31:07.088 [2024-04-24 05:26:44.166035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.088 [2024-04-24 05:26:44.166155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.088 [2024-04-24 05:26:44.166180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.088 qpair failed and we were unable to recover it. 00:31:07.088 [2024-04-24 05:26:44.166299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.088 [2024-04-24 05:26:44.166451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.088 [2024-04-24 05:26:44.166475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.088 qpair failed and we were unable to recover it. 00:31:07.088 [2024-04-24 05:26:44.166625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.166788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.166813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.089 qpair failed and we were unable to recover it. 00:31:07.089 [2024-04-24 05:26:44.166944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.167097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.167121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.089 qpair failed and we were unable to recover it. 
00:31:07.089 [2024-04-24 05:26:44.167247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.167375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.167400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.089 qpair failed and we were unable to recover it. 00:31:07.089 [2024-04-24 05:26:44.167522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.167649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.167675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.089 qpair failed and we were unable to recover it. 00:31:07.089 [2024-04-24 05:26:44.167823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.167977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.168001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.089 qpair failed and we were unable to recover it. 00:31:07.089 [2024-04-24 05:26:44.168172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.168297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.168321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.089 qpair failed and we were unable to recover it. 
00:31:07.089 [2024-04-24 05:26:44.168444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.168599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.168624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.089 qpair failed and we were unable to recover it. 00:31:07.089 [2024-04-24 05:26:44.168788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.168938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.168962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.089 qpair failed and we were unable to recover it. 00:31:07.089 [2024-04-24 05:26:44.169120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.169273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.169297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.089 qpair failed and we were unable to recover it. 00:31:07.089 [2024-04-24 05:26:44.169421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.169571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.169595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.089 qpair failed and we were unable to recover it. 
00:31:07.089 [2024-04-24 05:26:44.169723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.169870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.169894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.089 qpair failed and we were unable to recover it. 00:31:07.089 [2024-04-24 05:26:44.170042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.170188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.170213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.089 qpair failed and we were unable to recover it. 00:31:07.089 [2024-04-24 05:26:44.170369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.170491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.170516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.089 qpair failed and we were unable to recover it. 00:31:07.089 [2024-04-24 05:26:44.170692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.170843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.170868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.089 qpair failed and we were unable to recover it. 
00:31:07.089 [2024-04-24 05:26:44.171019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.171182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.171206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.089 qpair failed and we were unable to recover it. 00:31:07.089 [2024-04-24 05:26:44.171334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.171488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.171512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.089 qpair failed and we were unable to recover it. 00:31:07.089 [2024-04-24 05:26:44.171694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.171809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.171834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.089 qpair failed and we were unable to recover it. 00:31:07.089 [2024-04-24 05:26:44.171952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.172087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.172112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.089 qpair failed and we were unable to recover it. 
00:31:07.089 [2024-04-24 05:26:44.172260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.172407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.172433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.089 qpair failed and we were unable to recover it. 00:31:07.089 [2024-04-24 05:26:44.172581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.172728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.172753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.089 qpair failed and we were unable to recover it. 00:31:07.089 [2024-04-24 05:26:44.172932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.173059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.173084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.089 qpair failed and we were unable to recover it. 00:31:07.089 [2024-04-24 05:26:44.173239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.173380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.173405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.089 qpair failed and we were unable to recover it. 
00:31:07.089 [2024-04-24 05:26:44.173531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.173697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.173722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.089 qpair failed and we were unable to recover it. 00:31:07.089 [2024-04-24 05:26:44.173846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.174014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.174038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.089 qpair failed and we were unable to recover it. 00:31:07.089 [2024-04-24 05:26:44.174167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.174316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.174340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.089 qpair failed and we were unable to recover it. 00:31:07.089 [2024-04-24 05:26:44.174497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.174620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.174655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.089 qpair failed and we were unable to recover it. 
00:31:07.089 [2024-04-24 05:26:44.174803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.174926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.174950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.089 qpair failed and we were unable to recover it. 00:31:07.089 [2024-04-24 05:26:44.175096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.175242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.175266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.089 qpair failed and we were unable to recover it. 00:31:07.089 [2024-04-24 05:26:44.175410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.175530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.089 [2024-04-24 05:26:44.175554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.089 qpair failed and we were unable to recover it. 00:31:07.089 [2024-04-24 05:26:44.175703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.090 [2024-04-24 05:26:44.175829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.090 [2024-04-24 05:26:44.175854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.090 qpair failed and we were unable to recover it. 
00:31:07.090 [2024-04-24 05:26:44.176006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.090 [2024-04-24 05:26:44.176156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.090 [2024-04-24 05:26:44.176181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.090 qpair failed and we were unable to recover it. 00:31:07.090 [2024-04-24 05:26:44.176299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.090 [2024-04-24 05:26:44.176449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.090 [2024-04-24 05:26:44.176474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.090 qpair failed and we were unable to recover it. 00:31:07.090 [2024-04-24 05:26:44.176616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.090 [2024-04-24 05:26:44.176771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.090 [2024-04-24 05:26:44.176796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.090 qpair failed and we were unable to recover it. 00:31:07.090 [2024-04-24 05:26:44.176986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.090 [2024-04-24 05:26:44.177151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.090 [2024-04-24 05:26:44.177176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.090 qpair failed and we were unable to recover it. 
00:31:07.090 [2024-04-24 05:26:44.177352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.090 [2024-04-24 05:26:44.177473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.090 [2024-04-24 05:26:44.177497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.090 qpair failed and we were unable to recover it. 00:31:07.090 [2024-04-24 05:26:44.177653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.090 [2024-04-24 05:26:44.177780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.090 [2024-04-24 05:26:44.177804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.090 qpair failed and we were unable to recover it. 00:31:07.090 [2024-04-24 05:26:44.177932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.090 [2024-04-24 05:26:44.178076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.090 [2024-04-24 05:26:44.178100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.090 qpair failed and we were unable to recover it. 00:31:07.090 [2024-04-24 05:26:44.178226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.090 [2024-04-24 05:26:44.178375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.090 [2024-04-24 05:26:44.178400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.090 qpair failed and we were unable to recover it. 
00:31:07.090 [2024-04-24 05:26:44.178553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.090 [2024-04-24 05:26:44.178678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.090 [2024-04-24 05:26:44.178703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.090 qpair failed and we were unable to recover it. 00:31:07.090 [2024-04-24 05:26:44.178829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.090 [2024-04-24 05:26:44.178982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.090 [2024-04-24 05:26:44.179006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.090 qpair failed and we were unable to recover it. 00:31:07.090 [2024-04-24 05:26:44.179164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.090 [2024-04-24 05:26:44.179290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.090 [2024-04-24 05:26:44.179315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.090 qpair failed and we were unable to recover it. 00:31:07.090 [2024-04-24 05:26:44.179474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.090 [2024-04-24 05:26:44.179599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.090 [2024-04-24 05:26:44.179623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.090 qpair failed and we were unable to recover it. 
00:31:07.090 [2024-04-24 05:26:44.179750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.090 [2024-04-24 05:26:44.179902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.090 [2024-04-24 05:26:44.179926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.090 qpair failed and we were unable to recover it.
[the same connect() errno = 111 / sock connection error / "qpair failed and we were unable to recover it." sequence repeats verbatim, timestamps 2024-04-24 05:26:44.180076 through 05:26:44.207477, target 10.0.0.2:4420, tqpair=0x18ebe40]
00:31:07.093 [2024-04-24 05:26:44.207651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.093 [2024-04-24 05:26:44.207821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.093 [2024-04-24 05:26:44.207846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.093 qpair failed and we were unable to recover it. 00:31:07.093 [2024-04-24 05:26:44.208000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.093 [2024-04-24 05:26:44.208149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.093 [2024-04-24 05:26:44.208173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.093 qpair failed and we were unable to recover it. 00:31:07.093 [2024-04-24 05:26:44.208292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.093 [2024-04-24 05:26:44.208439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.093 [2024-04-24 05:26:44.208464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.093 qpair failed and we were unable to recover it. 00:31:07.093 [2024-04-24 05:26:44.208641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.093 [2024-04-24 05:26:44.208793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.093 [2024-04-24 05:26:44.208818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.093 qpair failed and we were unable to recover it. 
00:31:07.093 [2024-04-24 05:26:44.208948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.093 [2024-04-24 05:26:44.209100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.093 [2024-04-24 05:26:44.209124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.093 qpair failed and we were unable to recover it. 00:31:07.093 [2024-04-24 05:26:44.209276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.093 [2024-04-24 05:26:44.209428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.093 [2024-04-24 05:26:44.209453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.093 qpair failed and we were unable to recover it. 00:31:07.093 [2024-04-24 05:26:44.209603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.093 [2024-04-24 05:26:44.209757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.093 [2024-04-24 05:26:44.209781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.093 qpair failed and we were unable to recover it. 00:31:07.093 [2024-04-24 05:26:44.209898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.093 [2024-04-24 05:26:44.210044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.093 [2024-04-24 05:26:44.210069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.093 qpair failed and we were unable to recover it. 
00:31:07.093 [2024-04-24 05:26:44.210190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.093 [2024-04-24 05:26:44.210355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.093 [2024-04-24 05:26:44.210379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.093 qpair failed and we were unable to recover it. 00:31:07.093 [2024-04-24 05:26:44.210531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.093 [2024-04-24 05:26:44.210666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.093 [2024-04-24 05:26:44.210692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.093 qpair failed and we were unable to recover it. 00:31:07.093 [2024-04-24 05:26:44.210818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.093 [2024-04-24 05:26:44.210954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.093 [2024-04-24 05:26:44.210978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.093 qpair failed and we were unable to recover it. 00:31:07.093 [2024-04-24 05:26:44.211134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.093 [2024-04-24 05:26:44.211258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.093 [2024-04-24 05:26:44.211283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.094 qpair failed and we were unable to recover it. 
00:31:07.094 [2024-04-24 05:26:44.211428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.211601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.211626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.094 qpair failed and we were unable to recover it. 00:31:07.094 [2024-04-24 05:26:44.211776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.211953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.211978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.094 qpair failed and we were unable to recover it. 00:31:07.094 [2024-04-24 05:26:44.212105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.212256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.212283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.094 qpair failed and we were unable to recover it. 00:31:07.094 [2024-04-24 05:26:44.212397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.212529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.212558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.094 qpair failed and we were unable to recover it. 
00:31:07.094 [2024-04-24 05:26:44.212689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.212812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.212836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.094 qpair failed and we were unable to recover it. 00:31:07.094 [2024-04-24 05:26:44.212982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.213103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.213128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.094 qpair failed and we were unable to recover it. 00:31:07.094 [2024-04-24 05:26:44.213255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.213407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.213431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.094 qpair failed and we were unable to recover it. 00:31:07.094 [2024-04-24 05:26:44.213579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.213737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.213762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.094 qpair failed and we were unable to recover it. 
00:31:07.094 [2024-04-24 05:26:44.213881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.214011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.214036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.094 qpair failed and we were unable to recover it. 00:31:07.094 [2024-04-24 05:26:44.214189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.214338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.214362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.094 qpair failed and we were unable to recover it. 00:31:07.094 [2024-04-24 05:26:44.214479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.214622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.214659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.094 qpair failed and we were unable to recover it. 00:31:07.094 [2024-04-24 05:26:44.214782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.214932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.214956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.094 qpair failed and we were unable to recover it. 
00:31:07.094 [2024-04-24 05:26:44.215076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.215219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.215243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.094 qpair failed and we were unable to recover it. 00:31:07.094 [2024-04-24 05:26:44.215394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.215528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.215557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.094 qpair failed and we were unable to recover it. 00:31:07.094 [2024-04-24 05:26:44.215712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.215867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.215891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.094 qpair failed and we were unable to recover it. 00:31:07.094 [2024-04-24 05:26:44.216020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.216144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.216169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.094 qpair failed and we were unable to recover it. 
00:31:07.094 [2024-04-24 05:26:44.216289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.216468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.216493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.094 qpair failed and we were unable to recover it. 00:31:07.094 [2024-04-24 05:26:44.216645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.216768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.216793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.094 qpair failed and we were unable to recover it. 00:31:07.094 [2024-04-24 05:26:44.216944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.217069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.217093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.094 qpair failed and we were unable to recover it. 00:31:07.094 [2024-04-24 05:26:44.217240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.217355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.217379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.094 qpair failed and we were unable to recover it. 
00:31:07.094 [2024-04-24 05:26:44.217493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.217649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.217674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.094 qpair failed and we were unable to recover it. 00:31:07.094 [2024-04-24 05:26:44.217791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.217978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.218003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.094 qpair failed and we were unable to recover it. 00:31:07.094 [2024-04-24 05:26:44.218154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.218306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.218330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.094 qpair failed and we were unable to recover it. 00:31:07.094 [2024-04-24 05:26:44.218484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.218636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.218666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.094 qpair failed and we were unable to recover it. 
00:31:07.094 [2024-04-24 05:26:44.218844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.218987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.219012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.094 qpair failed and we were unable to recover it. 00:31:07.094 [2024-04-24 05:26:44.219162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.219316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.219341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.094 qpair failed and we were unable to recover it. 00:31:07.094 [2024-04-24 05:26:44.219485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.219614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.219645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.094 qpair failed and we were unable to recover it. 00:31:07.094 [2024-04-24 05:26:44.219772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.219918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.219943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.094 qpair failed and we were unable to recover it. 
00:31:07.094 [2024-04-24 05:26:44.220095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.220241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.094 [2024-04-24 05:26:44.220266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.094 qpair failed and we were unable to recover it. 00:31:07.095 [2024-04-24 05:26:44.220397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.095 [2024-04-24 05:26:44.220548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.095 [2024-04-24 05:26:44.220572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.095 qpair failed and we were unable to recover it. 00:31:07.095 [2024-04-24 05:26:44.220723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.095 [2024-04-24 05:26:44.220877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.095 [2024-04-24 05:26:44.220901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.095 qpair failed and we were unable to recover it. 00:31:07.095 [2024-04-24 05:26:44.221052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.095 [2024-04-24 05:26:44.221180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.095 [2024-04-24 05:26:44.221205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.095 qpair failed and we were unable to recover it. 
00:31:07.095 [2024-04-24 05:26:44.221351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.095 [2024-04-24 05:26:44.221527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.095 [2024-04-24 05:26:44.221552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.095 qpair failed and we were unable to recover it. 00:31:07.095 [2024-04-24 05:26:44.221698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.095 [2024-04-24 05:26:44.221824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.095 [2024-04-24 05:26:44.221848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.095 qpair failed and we were unable to recover it. 00:31:07.095 [2024-04-24 05:26:44.221976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.095 [2024-04-24 05:26:44.222133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.095 [2024-04-24 05:26:44.222158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.095 qpair failed and we were unable to recover it. 00:31:07.095 [2024-04-24 05:26:44.222279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.095 [2024-04-24 05:26:44.222438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.095 [2024-04-24 05:26:44.222463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.095 qpair failed and we were unable to recover it. 
00:31:07.095 [2024-04-24 05:26:44.222585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.095 [2024-04-24 05:26:44.222715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.095 [2024-04-24 05:26:44.222744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.095 qpair failed and we were unable to recover it. 00:31:07.095 [2024-04-24 05:26:44.222897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.095 [2024-04-24 05:26:44.223048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.095 [2024-04-24 05:26:44.223073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.095 qpair failed and we were unable to recover it. 00:31:07.095 [2024-04-24 05:26:44.223184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.095 [2024-04-24 05:26:44.223361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.095 [2024-04-24 05:26:44.223385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.095 qpair failed and we were unable to recover it. 00:31:07.095 [2024-04-24 05:26:44.223510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.095 [2024-04-24 05:26:44.223653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.095 [2024-04-24 05:26:44.223679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.095 qpair failed and we were unable to recover it. 
00:31:07.095 [2024-04-24 05:26:44.223831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.095 [2024-04-24 05:26:44.223988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.095 [2024-04-24 05:26:44.224013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.095 qpair failed and we were unable to recover it. 00:31:07.095 [2024-04-24 05:26:44.224135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.095 [2024-04-24 05:26:44.224282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.095 [2024-04-24 05:26:44.224306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.095 qpair failed and we were unable to recover it. 00:31:07.095 [2024-04-24 05:26:44.224424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.095 [2024-04-24 05:26:44.224541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.095 [2024-04-24 05:26:44.224565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.095 qpair failed and we were unable to recover it. 00:31:07.095 [2024-04-24 05:26:44.224738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.095 [2024-04-24 05:26:44.224856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.095 [2024-04-24 05:26:44.224881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.095 qpair failed and we were unable to recover it. 
00:31:07.095 [2024-04-24 05:26:44.225007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.095 [2024-04-24 05:26:44.225157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.095 [2024-04-24 05:26:44.225181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.095 qpair failed and we were unable to recover it. 00:31:07.095 [2024-04-24 05:26:44.225331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.095 [2024-04-24 05:26:44.225478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.095 [2024-04-24 05:26:44.225502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.095 qpair failed and we were unable to recover it. 00:31:07.095 [2024-04-24 05:26:44.225686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.095 [2024-04-24 05:26:44.225839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.095 [2024-04-24 05:26:44.225864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.095 qpair failed and we were unable to recover it. 00:31:07.095 [2024-04-24 05:26:44.226037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.095 [2024-04-24 05:26:44.226189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.095 [2024-04-24 05:26:44.226213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.095 qpair failed and we were unable to recover it. 
00:31:07.098 [2024-04-24 05:26:44.252760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.098 [2024-04-24 05:26:44.252910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.098 [2024-04-24 05:26:44.252935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.098 qpair failed and we were unable to recover it. 00:31:07.098 [2024-04-24 05:26:44.253056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.098 [2024-04-24 05:26:44.253209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.098 [2024-04-24 05:26:44.253233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.098 qpair failed and we were unable to recover it. 00:31:07.098 [2024-04-24 05:26:44.253375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.098 [2024-04-24 05:26:44.253497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.098 [2024-04-24 05:26:44.253538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.098 qpair failed and we were unable to recover it. 00:31:07.098 [2024-04-24 05:26:44.253698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.098 [2024-04-24 05:26:44.253844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.098 [2024-04-24 05:26:44.253869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.098 qpair failed and we were unable to recover it. 
00:31:07.098 [2024-04-24 05:26:44.254028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.098 [2024-04-24 05:26:44.254176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.098 [2024-04-24 05:26:44.254201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.098 qpair failed and we were unable to recover it. 00:31:07.098 [2024-04-24 05:26:44.254351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.098 [2024-04-24 05:26:44.254522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.098 [2024-04-24 05:26:44.254547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.098 qpair failed and we were unable to recover it. 00:31:07.098 [2024-04-24 05:26:44.254697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.098 [2024-04-24 05:26:44.254826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.098 [2024-04-24 05:26:44.254851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.098 qpair failed and we were unable to recover it. 00:31:07.098 [2024-04-24 05:26:44.255004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.098 [2024-04-24 05:26:44.255168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.098 [2024-04-24 05:26:44.255192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.098 qpair failed and we were unable to recover it. 
00:31:07.098 [2024-04-24 05:26:44.255340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.098 [2024-04-24 05:26:44.255519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.098 [2024-04-24 05:26:44.255546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.098 qpair failed and we were unable to recover it. 00:31:07.098 [2024-04-24 05:26:44.255728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.098 [2024-04-24 05:26:44.255885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.255911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.099 qpair failed and we were unable to recover it. 00:31:07.099 [2024-04-24 05:26:44.256067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.256219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.256243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.099 qpair failed and we were unable to recover it. 00:31:07.099 [2024-04-24 05:26:44.256362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.256513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.256538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.099 qpair failed and we were unable to recover it. 
00:31:07.099 [2024-04-24 05:26:44.256701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.256852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.256877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.099 qpair failed and we were unable to recover it. 00:31:07.099 [2024-04-24 05:26:44.257032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.257153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.257178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.099 qpair failed and we were unable to recover it. 00:31:07.099 [2024-04-24 05:26:44.257323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.257477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.257501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.099 qpair failed and we were unable to recover it. 00:31:07.099 [2024-04-24 05:26:44.257623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.257767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.257791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.099 qpair failed and we were unable to recover it. 
00:31:07.099 [2024-04-24 05:26:44.257943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.258091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.258115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.099 qpair failed and we were unable to recover it. 00:31:07.099 [2024-04-24 05:26:44.258290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.258404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.258428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.099 qpair failed and we were unable to recover it. 00:31:07.099 [2024-04-24 05:26:44.258602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.258734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.258760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.099 qpair failed and we were unable to recover it. 00:31:07.099 [2024-04-24 05:26:44.258910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.259057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.259081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.099 qpair failed and we were unable to recover it. 
00:31:07.099 [2024-04-24 05:26:44.259197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.259355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.259379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.099 qpair failed and we were unable to recover it. 00:31:07.099 [2024-04-24 05:26:44.259528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.259677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.259703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.099 qpair failed and we were unable to recover it. 00:31:07.099 [2024-04-24 05:26:44.259828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.259946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.259971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.099 qpair failed and we were unable to recover it. 00:31:07.099 [2024-04-24 05:26:44.260100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.260254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.260278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.099 qpair failed and we were unable to recover it. 
00:31:07.099 [2024-04-24 05:26:44.260431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.260584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.260609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.099 qpair failed and we were unable to recover it. 00:31:07.099 [2024-04-24 05:26:44.260767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.260894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.260920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.099 qpair failed and we were unable to recover it. 00:31:07.099 [2024-04-24 05:26:44.261096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.261245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.261269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.099 qpair failed and we were unable to recover it. 00:31:07.099 [2024-04-24 05:26:44.261417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.261570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.261594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.099 qpair failed and we were unable to recover it. 
00:31:07.099 [2024-04-24 05:26:44.261750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.261897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.261921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.099 qpair failed and we were unable to recover it. 00:31:07.099 [2024-04-24 05:26:44.262045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.262189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.262214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.099 qpair failed and we were unable to recover it. 00:31:07.099 [2024-04-24 05:26:44.262383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.262532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.262556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.099 qpair failed and we were unable to recover it. 00:31:07.099 [2024-04-24 05:26:44.262706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.262853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.262878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.099 qpair failed and we were unable to recover it. 
00:31:07.099 [2024-04-24 05:26:44.263025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.263147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.263171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.099 qpair failed and we were unable to recover it. 00:31:07.099 [2024-04-24 05:26:44.263352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.263476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.263501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.099 qpair failed and we were unable to recover it. 00:31:07.099 [2024-04-24 05:26:44.263639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.263794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.263824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.099 qpair failed and we were unable to recover it. 00:31:07.099 [2024-04-24 05:26:44.263955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.264128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.264153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.099 qpair failed and we were unable to recover it. 
00:31:07.099 [2024-04-24 05:26:44.264303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.264425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.264450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.099 qpair failed and we were unable to recover it. 00:31:07.099 [2024-04-24 05:26:44.264624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.264754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.264779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.099 qpair failed and we were unable to recover it. 00:31:07.099 [2024-04-24 05:26:44.264938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.099 [2024-04-24 05:26:44.265090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.100 [2024-04-24 05:26:44.265114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.100 qpair failed and we were unable to recover it. 00:31:07.100 [2024-04-24 05:26:44.265265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.100 [2024-04-24 05:26:44.265411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.100 [2024-04-24 05:26:44.265435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.100 qpair failed and we were unable to recover it. 
00:31:07.100 [2024-04-24 05:26:44.265563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.100 [2024-04-24 05:26:44.265711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.100 [2024-04-24 05:26:44.265737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.100 qpair failed and we were unable to recover it. 00:31:07.100 [2024-04-24 05:26:44.265861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.100 [2024-04-24 05:26:44.265984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.100 [2024-04-24 05:26:44.266010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.100 qpair failed and we were unable to recover it. 00:31:07.100 [2024-04-24 05:26:44.266188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.100 [2024-04-24 05:26:44.266341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.100 [2024-04-24 05:26:44.266365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.100 qpair failed and we were unable to recover it. 00:31:07.100 [2024-04-24 05:26:44.266487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.100 [2024-04-24 05:26:44.266643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.100 [2024-04-24 05:26:44.266670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.100 qpair failed and we were unable to recover it. 
00:31:07.100 [2024-04-24 05:26:44.266794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.100 [2024-04-24 05:26:44.266971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.100 [2024-04-24 05:26:44.267002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.100 qpair failed and we were unable to recover it. 00:31:07.100 [2024-04-24 05:26:44.267149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.100 [2024-04-24 05:26:44.267300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.100 [2024-04-24 05:26:44.267325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.100 qpair failed and we were unable to recover it. 00:31:07.100 [2024-04-24 05:26:44.267457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.100 [2024-04-24 05:26:44.267647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.100 [2024-04-24 05:26:44.267674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.100 qpair failed and we were unable to recover it. 00:31:07.100 [2024-04-24 05:26:44.267792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.100 [2024-04-24 05:26:44.267908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.100 [2024-04-24 05:26:44.267933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.100 qpair failed and we were unable to recover it. 
00:31:07.100 [2024-04-24 05:26:44.268111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.100 [2024-04-24 05:26:44.268238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.100 [2024-04-24 05:26:44.268262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.100 qpair failed and we were unable to recover it. 00:31:07.100 [2024-04-24 05:26:44.268408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.100 [2024-04-24 05:26:44.268546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.100 [2024-04-24 05:26:44.268570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.100 qpair failed and we were unable to recover it. 00:31:07.100 [2024-04-24 05:26:44.268748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.100 [2024-04-24 05:26:44.268901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.100 [2024-04-24 05:26:44.268926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.100 qpair failed and we were unable to recover it. 00:31:07.100 [2024-04-24 05:26:44.269082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.100 [2024-04-24 05:26:44.269195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.100 [2024-04-24 05:26:44.269220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.100 qpair failed and we were unable to recover it. 
00:31:07.100 [2024-04-24 05:26:44.269391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.100 [2024-04-24 05:26:44.269508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.100 [2024-04-24 05:26:44.269533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.100 qpair failed and we were unable to recover it. 00:31:07.100 [2024-04-24 05:26:44.269658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.100 [2024-04-24 05:26:44.269812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.100 [2024-04-24 05:26:44.269837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.100 qpair failed and we were unable to recover it. 00:31:07.100 [2024-04-24 05:26:44.269987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.100 [2024-04-24 05:26:44.270111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.100 [2024-04-24 05:26:44.270135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.100 qpair failed and we were unable to recover it. 00:31:07.100 [2024-04-24 05:26:44.270291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.100 [2024-04-24 05:26:44.270446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.100 [2024-04-24 05:26:44.270471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.100 qpair failed and we were unable to recover it. 
00:31:07.100 [2024-04-24 05:26:44.270598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.100 [2024-04-24 05:26:44.270758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.100 [2024-04-24 05:26:44.270784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.100 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." sequence repeats for every retry of tqpair=0x18ebe40 (addr=10.0.0.2, port=4420) from 05:26:44.270933 through 05:26:44.298026 ...]
00:31:07.103 [2024-04-24 05:26:44.298151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.103 [2024-04-24 05:26:44.298302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.103 [2024-04-24 05:26:44.298326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.103 qpair failed and we were unable to recover it. 00:31:07.103 [2024-04-24 05:26:44.298444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.103 [2024-04-24 05:26:44.298616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.103 [2024-04-24 05:26:44.298645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.103 qpair failed and we were unable to recover it. 00:31:07.103 [2024-04-24 05:26:44.298793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.103 [2024-04-24 05:26:44.298918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.103 [2024-04-24 05:26:44.298947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.103 qpair failed and we were unable to recover it. 00:31:07.103 [2024-04-24 05:26:44.299068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.103 [2024-04-24 05:26:44.299247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.103 [2024-04-24 05:26:44.299272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.103 qpair failed and we were unable to recover it. 
00:31:07.103 [2024-04-24 05:26:44.299402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.103 [2024-04-24 05:26:44.299578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.103 [2024-04-24 05:26:44.299602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.103 qpair failed and we were unable to recover it. 00:31:07.103 [2024-04-24 05:26:44.299768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.103 [2024-04-24 05:26:44.299918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.103 [2024-04-24 05:26:44.299943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.103 qpair failed and we were unable to recover it. 00:31:07.103 [2024-04-24 05:26:44.300064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.103 [2024-04-24 05:26:44.300238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.300263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.104 qpair failed and we were unable to recover it. 00:31:07.104 [2024-04-24 05:26:44.300384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.300511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.300536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.104 qpair failed and we were unable to recover it. 
00:31:07.104 [2024-04-24 05:26:44.300688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.300835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.300859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.104 qpair failed and we were unable to recover it. 00:31:07.104 [2024-04-24 05:26:44.300979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.301126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.301150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.104 qpair failed and we were unable to recover it. 00:31:07.104 [2024-04-24 05:26:44.301299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.301454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.301479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.104 qpair failed and we were unable to recover it. 00:31:07.104 [2024-04-24 05:26:44.301598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.301774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.301799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.104 qpair failed and we were unable to recover it. 
00:31:07.104 [2024-04-24 05:26:44.301928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.302074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.302098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.104 qpair failed and we were unable to recover it. 00:31:07.104 [2024-04-24 05:26:44.302235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.302381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.302405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.104 qpair failed and we were unable to recover it. 00:31:07.104 [2024-04-24 05:26:44.302527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.302675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.302701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.104 qpair failed and we were unable to recover it. 00:31:07.104 [2024-04-24 05:26:44.302825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.302957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.302983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.104 qpair failed and we were unable to recover it. 
00:31:07.104 [2024-04-24 05:26:44.303149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.303310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.303334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.104 qpair failed and we were unable to recover it. 00:31:07.104 [2024-04-24 05:26:44.303455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.303605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.303639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.104 qpair failed and we were unable to recover it. 00:31:07.104 [2024-04-24 05:26:44.303771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.303888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.303913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.104 qpair failed and we were unable to recover it. 00:31:07.104 [2024-04-24 05:26:44.304041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.304191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.304215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.104 qpair failed and we were unable to recover it. 
00:31:07.104 [2024-04-24 05:26:44.304368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.304483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.304507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.104 qpair failed and we were unable to recover it. 00:31:07.104 [2024-04-24 05:26:44.304683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.304810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.304835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.104 qpair failed and we were unable to recover it. 00:31:07.104 [2024-04-24 05:26:44.304979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.305104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.305129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.104 qpair failed and we were unable to recover it. 00:31:07.104 [2024-04-24 05:26:44.305259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.305384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.305408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.104 qpair failed and we were unable to recover it. 
00:31:07.104 [2024-04-24 05:26:44.305553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.305683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.305707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.104 qpair failed and we were unable to recover it. 00:31:07.104 [2024-04-24 05:26:44.305884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.306003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.306027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.104 qpair failed and we were unable to recover it. 00:31:07.104 [2024-04-24 05:26:44.306171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.306319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.306343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.104 qpair failed and we were unable to recover it. 00:31:07.104 [2024-04-24 05:26:44.306495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.306685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.306711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.104 qpair failed and we were unable to recover it. 
00:31:07.104 [2024-04-24 05:26:44.306857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.306977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.307002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.104 qpair failed and we were unable to recover it. 00:31:07.104 [2024-04-24 05:26:44.307120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.307272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.307297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.104 qpair failed and we were unable to recover it. 00:31:07.104 [2024-04-24 05:26:44.307450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.307570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.307594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.104 qpair failed and we were unable to recover it. 00:31:07.104 [2024-04-24 05:26:44.307734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.307882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.307907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.104 qpair failed and we were unable to recover it. 
00:31:07.104 [2024-04-24 05:26:44.308032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.308151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.308175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.104 qpair failed and we were unable to recover it. 00:31:07.104 [2024-04-24 05:26:44.308325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.308474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.308498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.104 qpair failed and we were unable to recover it. 00:31:07.104 [2024-04-24 05:26:44.308655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.308771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.308796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.104 qpair failed and we were unable to recover it. 00:31:07.104 [2024-04-24 05:26:44.308950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.104 [2024-04-24 05:26:44.309067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.105 [2024-04-24 05:26:44.309091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.105 qpair failed and we were unable to recover it. 
00:31:07.105 [2024-04-24 05:26:44.309214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.105 [2024-04-24 05:26:44.309340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.105 [2024-04-24 05:26:44.309364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.105 qpair failed and we were unable to recover it. 00:31:07.105 [2024-04-24 05:26:44.309517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.105 [2024-04-24 05:26:44.309665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.105 [2024-04-24 05:26:44.309690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.105 qpair failed and we were unable to recover it. 00:31:07.105 [2024-04-24 05:26:44.309818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.105 [2024-04-24 05:26:44.309967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.105 [2024-04-24 05:26:44.309992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.105 qpair failed and we were unable to recover it. 00:31:07.105 [2024-04-24 05:26:44.310117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.105 [2024-04-24 05:26:44.310237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.105 [2024-04-24 05:26:44.310261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.105 qpair failed and we were unable to recover it. 
00:31:07.105 [2024-04-24 05:26:44.310411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.105 [2024-04-24 05:26:44.310563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.105 [2024-04-24 05:26:44.310587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.105 qpair failed and we were unable to recover it. 00:31:07.105 [2024-04-24 05:26:44.310747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.105 [2024-04-24 05:26:44.310898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.105 [2024-04-24 05:26:44.310923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.105 qpair failed and we were unable to recover it. 00:31:07.105 [2024-04-24 05:26:44.311048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.105 [2024-04-24 05:26:44.311197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.105 [2024-04-24 05:26:44.311221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.105 qpair failed and we were unable to recover it. 00:31:07.105 [2024-04-24 05:26:44.311387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.105 [2024-04-24 05:26:44.311511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.105 [2024-04-24 05:26:44.311536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.105 qpair failed and we were unable to recover it. 
00:31:07.105 [2024-04-24 05:26:44.311683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.105 [2024-04-24 05:26:44.311827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.105 [2024-04-24 05:26:44.311851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.105 qpair failed and we were unable to recover it. 00:31:07.105 [2024-04-24 05:26:44.312002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.105 [2024-04-24 05:26:44.312178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.105 [2024-04-24 05:26:44.312203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.105 qpair failed and we were unable to recover it. 00:31:07.105 [2024-04-24 05:26:44.312331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.105 [2024-04-24 05:26:44.312446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.105 [2024-04-24 05:26:44.312471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.105 qpair failed and we were unable to recover it. 00:31:07.105 [2024-04-24 05:26:44.312592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.105 [2024-04-24 05:26:44.312742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.105 [2024-04-24 05:26:44.312768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.105 qpair failed and we were unable to recover it. 
00:31:07.105 [2024-04-24 05:26:44.312916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.105 [2024-04-24 05:26:44.313040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.105 [2024-04-24 05:26:44.313064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.105 qpair failed and we were unable to recover it. 00:31:07.105 [2024-04-24 05:26:44.313190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.105 [2024-04-24 05:26:44.313312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.105 [2024-04-24 05:26:44.313337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.105 qpair failed and we were unable to recover it. 00:31:07.105 [2024-04-24 05:26:44.313489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.105 [2024-04-24 05:26:44.313644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.105 [2024-04-24 05:26:44.313671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.105 qpair failed and we were unable to recover it. 00:31:07.105 [2024-04-24 05:26:44.313847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.105 [2024-04-24 05:26:44.313976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.105 [2024-04-24 05:26:44.314012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.105 qpair failed and we were unable to recover it. 
00:31:07.105 [2024-04-24 05:26:44.314152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.105 [2024-04-24 05:26:44.314340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.105 [2024-04-24 05:26:44.314365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.105 qpair failed and we were unable to recover it. 00:31:07.105 [2024-04-24 05:26:44.314518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.105 [2024-04-24 05:26:44.314640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.105 [2024-04-24 05:26:44.314672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.105 qpair failed and we were unable to recover it. 00:31:07.105 [2024-04-24 05:26:44.314821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.105 [2024-04-24 05:26:44.314996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.105 [2024-04-24 05:26:44.315020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.105 qpair failed and we were unable to recover it. 00:31:07.105 [2024-04-24 05:26:44.315147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.105 [2024-04-24 05:26:44.315292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.105 [2024-04-24 05:26:44.315319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.105 qpair failed and we were unable to recover it. 
00:31:07.105 [2024-04-24 05:26:44.315439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.105 [2024-04-24 05:26:44.315563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.105 [2024-04-24 05:26:44.315597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.105 qpair failed and we were unable to recover it.
00:31:07.105 [2024-04-24 05:26:44.315788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.105 [2024-04-24 05:26:44.315946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.105 [2024-04-24 05:26:44.315976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.105 qpair failed and we were unable to recover it.
00:31:07.105 [2024-04-24 05:26:44.316134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.105 [2024-04-24 05:26:44.316285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.105 [2024-04-24 05:26:44.316311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.105 qpair failed and we were unable to recover it.
00:31:07.105 [2024-04-24 05:26:44.316462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.105 [2024-04-24 05:26:44.316604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.105 [2024-04-24 05:26:44.316655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.105 qpair failed and we were unable to recover it.
00:31:07.105 [2024-04-24 05:26:44.316812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.105 [2024-04-24 05:26:44.316957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.105 [2024-04-24 05:26:44.316982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.105 qpair failed and we were unable to recover it.
00:31:07.105 [2024-04-24 05:26:44.317141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.105 [2024-04-24 05:26:44.317264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.105 [2024-04-24 05:26:44.317288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.105 qpair failed and we were unable to recover it.
00:31:07.386 [2024-04-24 05:26:44.317467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.386 [2024-04-24 05:26:44.317592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.386 [2024-04-24 05:26:44.317616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.386 qpair failed and we were unable to recover it.
00:31:07.386 [2024-04-24 05:26:44.317822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.386 [2024-04-24 05:26:44.317977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.386 [2024-04-24 05:26:44.318008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.386 qpair failed and we were unable to recover it.
00:31:07.386 [2024-04-24 05:26:44.318139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.386 [2024-04-24 05:26:44.318301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.386 [2024-04-24 05:26:44.318326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.386 qpair failed and we were unable to recover it.
00:31:07.386 [2024-04-24 05:26:44.318508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.386 [2024-04-24 05:26:44.318662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.386 [2024-04-24 05:26:44.318689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.386 qpair failed and we were unable to recover it.
00:31:07.386 [2024-04-24 05:26:44.318812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.386 [2024-04-24 05:26:44.318938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.386 [2024-04-24 05:26:44.318963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.386 qpair failed and we were unable to recover it.
00:31:07.386 [2024-04-24 05:26:44.319112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.386 [2024-04-24 05:26:44.319238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.386 [2024-04-24 05:26:44.319263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.387 qpair failed and we were unable to recover it.
00:31:07.387 [2024-04-24 05:26:44.319430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.387 [2024-04-24 05:26:44.319579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.387 [2024-04-24 05:26:44.319604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.387 qpair failed and we were unable to recover it.
00:31:07.387 [2024-04-24 05:26:44.319732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.387 [2024-04-24 05:26:44.319879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.387 [2024-04-24 05:26:44.319904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.387 qpair failed and we were unable to recover it.
00:31:07.387 [2024-04-24 05:26:44.320028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.387 [2024-04-24 05:26:44.320149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.387 [2024-04-24 05:26:44.320174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.387 qpair failed and we were unable to recover it.
00:31:07.387 [2024-04-24 05:26:44.320302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.387 [2024-04-24 05:26:44.320432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.387 [2024-04-24 05:26:44.320458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.387 qpair failed and we were unable to recover it.
00:31:07.387 [2024-04-24 05:26:44.320580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.387 [2024-04-24 05:26:44.320710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.387 [2024-04-24 05:26:44.320736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.387 qpair failed and we were unable to recover it.
00:31:07.387 [2024-04-24 05:26:44.320916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.387 [2024-04-24 05:26:44.321037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.387 [2024-04-24 05:26:44.321062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.387 qpair failed and we were unable to recover it.
00:31:07.387 [2024-04-24 05:26:44.321244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.387 [2024-04-24 05:26:44.321358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.387 [2024-04-24 05:26:44.321382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.387 qpair failed and we were unable to recover it.
00:31:07.387 [2024-04-24 05:26:44.321507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.387 [2024-04-24 05:26:44.321622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.387 [2024-04-24 05:26:44.321653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.387 qpair failed and we were unable to recover it.
00:31:07.387 [2024-04-24 05:26:44.321806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.387 [2024-04-24 05:26:44.321958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.387 [2024-04-24 05:26:44.321984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.387 qpair failed and we were unable to recover it.
00:31:07.387 [2024-04-24 05:26:44.322137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.387 [2024-04-24 05:26:44.322292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.387 [2024-04-24 05:26:44.322318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.387 qpair failed and we were unable to recover it.
00:31:07.387 [2024-04-24 05:26:44.322472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.387 [2024-04-24 05:26:44.322644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.387 [2024-04-24 05:26:44.322671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.387 qpair failed and we were unable to recover it.
00:31:07.387 [2024-04-24 05:26:44.322794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.387 [2024-04-24 05:26:44.322950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.387 [2024-04-24 05:26:44.322975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.387 qpair failed and we were unable to recover it.
00:31:07.387 [2024-04-24 05:26:44.323094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.387 [2024-04-24 05:26:44.323212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.387 [2024-04-24 05:26:44.323237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.387 qpair failed and we were unable to recover it.
00:31:07.387 [2024-04-24 05:26:44.323416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.387 [2024-04-24 05:26:44.323571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.387 [2024-04-24 05:26:44.323596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.387 qpair failed and we were unable to recover it.
00:31:07.387 [2024-04-24 05:26:44.323749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.387 [2024-04-24 05:26:44.323867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.387 [2024-04-24 05:26:44.323892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.387 qpair failed and we were unable to recover it.
00:31:07.387 [2024-04-24 05:26:44.324065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.387 [2024-04-24 05:26:44.324188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.387 [2024-04-24 05:26:44.324212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.387 qpair failed and we were unable to recover it.
00:31:07.387 [2024-04-24 05:26:44.324414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.387 [2024-04-24 05:26:44.324545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.387 [2024-04-24 05:26:44.324573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.387 qpair failed and we were unable to recover it.
00:31:07.387 [2024-04-24 05:26:44.324775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.387 [2024-04-24 05:26:44.324926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.387 [2024-04-24 05:26:44.324951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.387 qpair failed and we were unable to recover it.
00:31:07.387 [2024-04-24 05:26:44.325104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.387 [2024-04-24 05:26:44.325255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.387 [2024-04-24 05:26:44.325280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.387 qpair failed and we were unable to recover it.
00:31:07.387 [2024-04-24 05:26:44.325439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.387 [2024-04-24 05:26:44.325564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.387 [2024-04-24 05:26:44.325589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.387 qpair failed and we were unable to recover it.
00:31:07.387 [2024-04-24 05:26:44.325780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.387 [2024-04-24 05:26:44.325934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.387 [2024-04-24 05:26:44.325959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.387 qpair failed and we were unable to recover it.
00:31:07.387 [2024-04-24 05:26:44.326140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.387 [2024-04-24 05:26:44.326290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.387 [2024-04-24 05:26:44.326315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.387 qpair failed and we were unable to recover it.
00:31:07.387 [2024-04-24 05:26:44.326457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.387 [2024-04-24 05:26:44.326623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.326657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.388 qpair failed and we were unable to recover it.
00:31:07.388 [2024-04-24 05:26:44.326812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.326973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.326998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.388 qpair failed and we were unable to recover it.
00:31:07.388 [2024-04-24 05:26:44.327178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.327365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.327390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.388 qpair failed and we were unable to recover it.
00:31:07.388 [2024-04-24 05:26:44.327515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.327666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.327692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.388 qpair failed and we were unable to recover it.
00:31:07.388 [2024-04-24 05:26:44.327841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.328082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.328128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.388 qpair failed and we were unable to recover it.
00:31:07.388 [2024-04-24 05:26:44.328297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.328461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.328489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.388 qpair failed and we were unable to recover it.
00:31:07.388 [2024-04-24 05:26:44.328618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.328780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.328806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.388 qpair failed and we were unable to recover it.
00:31:07.388 [2024-04-24 05:26:44.328940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.329058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.329084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.388 qpair failed and we were unable to recover it.
00:31:07.388 [2024-04-24 05:26:44.329265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.329382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.329407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.388 qpair failed and we were unable to recover it.
00:31:07.388 [2024-04-24 05:26:44.329555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.329677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.329704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.388 qpair failed and we were unable to recover it.
00:31:07.388 [2024-04-24 05:26:44.329859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.330020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.330045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.388 qpair failed and we were unable to recover it.
00:31:07.388 [2024-04-24 05:26:44.330197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.330349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.330374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.388 qpair failed and we were unable to recover it.
00:31:07.388 [2024-04-24 05:26:44.330576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.330749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.330776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.388 qpair failed and we were unable to recover it.
00:31:07.388 [2024-04-24 05:26:44.330903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.331030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.331055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.388 qpair failed and we were unable to recover it.
00:31:07.388 [2024-04-24 05:26:44.331182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.331332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.331358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.388 qpair failed and we were unable to recover it.
00:31:07.388 [2024-04-24 05:26:44.331534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.331661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.331694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.388 qpair failed and we were unable to recover it.
00:31:07.388 [2024-04-24 05:26:44.331851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.332028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.332053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.388 qpair failed and we were unable to recover it.
00:31:07.388 [2024-04-24 05:26:44.332236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.332361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.332402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.388 qpair failed and we were unable to recover it.
00:31:07.388 [2024-04-24 05:26:44.332574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.332741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.332767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.388 qpair failed and we were unable to recover it.
00:31:07.388 [2024-04-24 05:26:44.332918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.333080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.333105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.388 qpair failed and we were unable to recover it.
00:31:07.388 [2024-04-24 05:26:44.333305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.333525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.333552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.388 qpair failed and we were unable to recover it.
00:31:07.388 [2024-04-24 05:26:44.333709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.333864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.333889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.388 qpair failed and we were unable to recover it.
00:31:07.388 [2024-04-24 05:26:44.334087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.334248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.334297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.388 qpair failed and we were unable to recover it.
00:31:07.388 [2024-04-24 05:26:44.334473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.334646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.334689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.388 qpair failed and we were unable to recover it.
00:31:07.388 [2024-04-24 05:26:44.334849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.335000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.335025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.388 qpair failed and we were unable to recover it.
00:31:07.388 [2024-04-24 05:26:44.335190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.335361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.335386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.388 qpair failed and we were unable to recover it.
00:31:07.388 [2024-04-24 05:26:44.335531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.335655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.388 [2024-04-24 05:26:44.335681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.388 qpair failed and we were unable to recover it.
00:31:07.389 [2024-04-24 05:26:44.335829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.389 [2024-04-24 05:26:44.335980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.389 [2024-04-24 05:26:44.336005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.389 qpair failed and we were unable to recover it.
00:31:07.389 [2024-04-24 05:26:44.336182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.389 [2024-04-24 05:26:44.336330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.389 [2024-04-24 05:26:44.336355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.389 qpair failed and we were unable to recover it.
00:31:07.389 [2024-04-24 05:26:44.336503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.389 [2024-04-24 05:26:44.336625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.389 [2024-04-24 05:26:44.336656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.389 qpair failed and we were unable to recover it.
00:31:07.389 [2024-04-24 05:26:44.336811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.389 [2024-04-24 05:26:44.336960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.389 [2024-04-24 05:26:44.336985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.389 qpair failed and we were unable to recover it.
00:31:07.389 [2024-04-24 05:26:44.337180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.389 [2024-04-24 05:26:44.337337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.389 [2024-04-24 05:26:44.337367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.389 qpair failed and we were unable to recover it.
00:31:07.389 [2024-04-24 05:26:44.337625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.389 [2024-04-24 05:26:44.337800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.389 [2024-04-24 05:26:44.337824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.389 qpair failed and we were unable to recover it.
00:31:07.389 [2024-04-24 05:26:44.337967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.389 [2024-04-24 05:26:44.338116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.389 [2024-04-24 05:26:44.338142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.389 qpair failed and we were unable to recover it.
00:31:07.389 [2024-04-24 05:26:44.338300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.389 [2024-04-24 05:26:44.338442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.389 [2024-04-24 05:26:44.338470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.389 qpair failed and we were unable to recover it.
00:31:07.389 [2024-04-24 05:26:44.338670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.389 [2024-04-24 05:26:44.338821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.389 [2024-04-24 05:26:44.338846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.389 qpair failed and we were unable to recover it.
00:31:07.389 [2024-04-24 05:26:44.339023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.389 [2024-04-24 05:26:44.339174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.389 [2024-04-24 05:26:44.339201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.389 qpair failed and we were unable to recover it.
00:31:07.389 [2024-04-24 05:26:44.339330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.389 [2024-04-24 05:26:44.339479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.389 [2024-04-24 05:26:44.339504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.389 qpair failed and we were unable to recover it.
00:31:07.389 [2024-04-24 05:26:44.339657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.389 [2024-04-24 05:26:44.339808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.389 [2024-04-24 05:26:44.339833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.389 qpair failed and we were unable to recover it.
00:31:07.389 [2024-04-24 05:26:44.340004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.389 [2024-04-24 05:26:44.340133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.389 [2024-04-24 05:26:44.340158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.389 qpair failed and we were unable to recover it.
00:31:07.389 [2024-04-24 05:26:44.340308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.389 [2024-04-24 05:26:44.340462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.389 [2024-04-24 05:26:44.340503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.389 qpair failed and we were unable to recover it.
00:31:07.389 [2024-04-24 05:26:44.340681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.389 [2024-04-24 05:26:44.340818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.389 [2024-04-24 05:26:44.340843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.389 qpair failed and we were unable to recover it.
00:31:07.389 [2024-04-24 05:26:44.340969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.389 [2024-04-24 05:26:44.341122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.389 [2024-04-24 05:26:44.341148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.389 qpair failed and we were unable to recover it.
00:31:07.389 [2024-04-24 05:26:44.341273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.389 [2024-04-24 05:26:44.341449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.389 [2024-04-24 05:26:44.341474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.389 qpair failed and we were unable to recover it.
00:31:07.389 [2024-04-24 05:26:44.341644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.389 [2024-04-24 05:26:44.341782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.389 [2024-04-24 05:26:44.341807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.389 qpair failed and we were unable to recover it.
00:31:07.389 [2024-04-24 05:26:44.341925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.389 [2024-04-24 05:26:44.342043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.389 [2024-04-24 05:26:44.342069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.389 qpair failed and we were unable to recover it.
00:31:07.389 [2024-04-24 05:26:44.342247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.389 [2024-04-24 05:26:44.342373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.389 [2024-04-24 05:26:44.342398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.389 qpair failed and we were unable to recover it.
00:31:07.389 [2024-04-24 05:26:44.342572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.389 [2024-04-24 05:26:44.342754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.389 [2024-04-24 05:26:44.342780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.389 qpair failed and we were unable to recover it.
00:31:07.390 [2024-04-24 05:26:44.342931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.390 [2024-04-24 05:26:44.343076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.390 [2024-04-24 05:26:44.343100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.390 qpair failed and we were unable to recover it.
00:31:07.390 [2024-04-24 05:26:44.343299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.390 [2024-04-24 05:26:44.343441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.390 [2024-04-24 05:26:44.343465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.390 qpair failed and we were unable to recover it.
00:31:07.390 [2024-04-24 05:26:44.343588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.390 [2024-04-24 05:26:44.343728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.390 [2024-04-24 05:26:44.343754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.390 qpair failed and we were unable to recover it.
00:31:07.390 [2024-04-24 05:26:44.343902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.390 [2024-04-24 05:26:44.344077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.390 [2024-04-24 05:26:44.344101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.390 qpair failed and we were unable to recover it.
00:31:07.390 [2024-04-24 05:26:44.344223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.390 [2024-04-24 05:26:44.344366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.390 [2024-04-24 05:26:44.344391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.390 qpair failed and we were unable to recover it.
00:31:07.390 [2024-04-24 05:26:44.344567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.390 [2024-04-24 05:26:44.344714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.390 [2024-04-24 05:26:44.344740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.390 qpair failed and we were unable to recover it.
00:31:07.390 [2024-04-24 05:26:44.344888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.390 [2024-04-24 05:26:44.345044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.390 [2024-04-24 05:26:44.345068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.390 qpair failed and we were unable to recover it. 00:31:07.390 [2024-04-24 05:26:44.345217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.390 [2024-04-24 05:26:44.345361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.390 [2024-04-24 05:26:44.345387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.390 qpair failed and we were unable to recover it. 00:31:07.390 [2024-04-24 05:26:44.345532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.390 [2024-04-24 05:26:44.345709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.390 [2024-04-24 05:26:44.345735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.390 qpair failed and we were unable to recover it. 00:31:07.390 [2024-04-24 05:26:44.345860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.390 [2024-04-24 05:26:44.346017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.390 [2024-04-24 05:26:44.346042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.390 qpair failed and we were unable to recover it. 
00:31:07.390 [2024-04-24 05:26:44.346223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.390 [2024-04-24 05:26:44.346401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.390 [2024-04-24 05:26:44.346430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.390 qpair failed and we were unable to recover it. 00:31:07.390 [2024-04-24 05:26:44.346618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.390 [2024-04-24 05:26:44.346764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.390 [2024-04-24 05:26:44.346788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.390 qpair failed and we were unable to recover it. 00:31:07.390 [2024-04-24 05:26:44.346920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.390 [2024-04-24 05:26:44.347156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.390 [2024-04-24 05:26:44.347202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.390 qpair failed and we were unable to recover it. 00:31:07.390 [2024-04-24 05:26:44.347340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.390 [2024-04-24 05:26:44.347482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.390 [2024-04-24 05:26:44.347509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.390 qpair failed and we were unable to recover it. 
00:31:07.390 [2024-04-24 05:26:44.347705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.390 [2024-04-24 05:26:44.347859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.390 [2024-04-24 05:26:44.347884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.390 qpair failed and we were unable to recover it. 00:31:07.390 [2024-04-24 05:26:44.348033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.390 [2024-04-24 05:26:44.348183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.390 [2024-04-24 05:26:44.348207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.390 qpair failed and we were unable to recover it. 00:31:07.390 [2024-04-24 05:26:44.348359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.390 [2024-04-24 05:26:44.348537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.390 [2024-04-24 05:26:44.348562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.390 qpair failed and we were unable to recover it. 00:31:07.390 [2024-04-24 05:26:44.348690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.390 [2024-04-24 05:26:44.348842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.390 [2024-04-24 05:26:44.348867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.390 qpair failed and we were unable to recover it. 
00:31:07.390 [2024-04-24 05:26:44.348994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.390 [2024-04-24 05:26:44.349166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.390 [2024-04-24 05:26:44.349191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.390 qpair failed and we were unable to recover it. 00:31:07.390 [2024-04-24 05:26:44.349313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.390 [2024-04-24 05:26:44.349448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.390 [2024-04-24 05:26:44.349473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.390 qpair failed and we were unable to recover it. 00:31:07.390 [2024-04-24 05:26:44.349658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.390 [2024-04-24 05:26:44.349795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.390 [2024-04-24 05:26:44.349819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.390 qpair failed and we were unable to recover it. 00:31:07.390 [2024-04-24 05:26:44.349974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.390 [2024-04-24 05:26:44.350121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.390 [2024-04-24 05:26:44.350145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.390 qpair failed and we were unable to recover it. 
00:31:07.390 [2024-04-24 05:26:44.350269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.390 [2024-04-24 05:26:44.350400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.390 [2024-04-24 05:26:44.350442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.390 qpair failed and we were unable to recover it. 00:31:07.390 [2024-04-24 05:26:44.350606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.390 [2024-04-24 05:26:44.350761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.390 [2024-04-24 05:26:44.350785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.390 qpair failed and we were unable to recover it. 00:31:07.390 [2024-04-24 05:26:44.350908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.390 [2024-04-24 05:26:44.351058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.390 [2024-04-24 05:26:44.351084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.390 qpair failed and we were unable to recover it. 00:31:07.390 [2024-04-24 05:26:44.351231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.390 [2024-04-24 05:26:44.351355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.390 [2024-04-24 05:26:44.351397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.390 qpair failed and we were unable to recover it. 
00:31:07.390 [2024-04-24 05:26:44.351560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.390 [2024-04-24 05:26:44.351730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.390 [2024-04-24 05:26:44.351760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.390 qpair failed and we were unable to recover it. 00:31:07.391 [2024-04-24 05:26:44.351911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.391 [2024-04-24 05:26:44.352062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.391 [2024-04-24 05:26:44.352087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.391 qpair failed and we were unable to recover it. 00:31:07.391 [2024-04-24 05:26:44.352235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.391 [2024-04-24 05:26:44.352349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.391 [2024-04-24 05:26:44.352374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.391 qpair failed and we were unable to recover it. 00:31:07.391 [2024-04-24 05:26:44.352495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.391 [2024-04-24 05:26:44.352617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.391 [2024-04-24 05:26:44.352651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.391 qpair failed and we were unable to recover it. 
00:31:07.391 [2024-04-24 05:26:44.352830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.391 [2024-04-24 05:26:44.352991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.391 [2024-04-24 05:26:44.353017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.391 qpair failed and we were unable to recover it. 00:31:07.391 [2024-04-24 05:26:44.353136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.391 [2024-04-24 05:26:44.353290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.391 [2024-04-24 05:26:44.353315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.391 qpair failed and we were unable to recover it. 00:31:07.391 [2024-04-24 05:26:44.353535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.391 [2024-04-24 05:26:44.353740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.391 [2024-04-24 05:26:44.353766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.391 qpair failed and we were unable to recover it. 00:31:07.391 [2024-04-24 05:26:44.353942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.391 [2024-04-24 05:26:44.354089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.391 [2024-04-24 05:26:44.354114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.391 qpair failed and we were unable to recover it. 
00:31:07.391 [2024-04-24 05:26:44.354293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.391 [2024-04-24 05:26:44.354411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.391 [2024-04-24 05:26:44.354436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.391 qpair failed and we were unable to recover it. 00:31:07.391 [2024-04-24 05:26:44.354560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.391 [2024-04-24 05:26:44.354692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.391 [2024-04-24 05:26:44.354719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.391 qpair failed and we were unable to recover it. 00:31:07.391 [2024-04-24 05:26:44.354900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.391 [2024-04-24 05:26:44.355086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.391 [2024-04-24 05:26:44.355137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.391 qpair failed and we were unable to recover it. 00:31:07.391 [2024-04-24 05:26:44.355411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.391 [2024-04-24 05:26:44.355573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.391 [2024-04-24 05:26:44.355601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.391 qpair failed and we were unable to recover it. 
00:31:07.391 [2024-04-24 05:26:44.355765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.391 [2024-04-24 05:26:44.355923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.391 [2024-04-24 05:26:44.355948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.391 qpair failed and we were unable to recover it. 00:31:07.391 [2024-04-24 05:26:44.356074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.391 [2024-04-24 05:26:44.356249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.391 [2024-04-24 05:26:44.356273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.391 qpair failed and we were unable to recover it. 00:31:07.391 [2024-04-24 05:26:44.356425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.391 [2024-04-24 05:26:44.356547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.391 [2024-04-24 05:26:44.356572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.391 qpair failed and we were unable to recover it. 00:31:07.391 [2024-04-24 05:26:44.356748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.391 [2024-04-24 05:26:44.356900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.391 [2024-04-24 05:26:44.356926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.391 qpair failed and we were unable to recover it. 
00:31:07.391 [2024-04-24 05:26:44.357083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.391 [2024-04-24 05:26:44.357232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.391 [2024-04-24 05:26:44.357258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.391 qpair failed and we were unable to recover it. 00:31:07.391 [2024-04-24 05:26:44.357408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.391 [2024-04-24 05:26:44.357562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.391 [2024-04-24 05:26:44.357587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.391 qpair failed and we were unable to recover it. 00:31:07.391 [2024-04-24 05:26:44.357778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.391 [2024-04-24 05:26:44.357932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.391 [2024-04-24 05:26:44.357957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.391 qpair failed and we were unable to recover it. 00:31:07.391 [2024-04-24 05:26:44.358114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.391 [2024-04-24 05:26:44.358235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.391 [2024-04-24 05:26:44.358261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.391 qpair failed and we were unable to recover it. 
00:31:07.391 [2024-04-24 05:26:44.358453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.391 [2024-04-24 05:26:44.358635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.391 [2024-04-24 05:26:44.358664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.391 qpair failed and we were unable to recover it. 00:31:07.391 [2024-04-24 05:26:44.358819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.391 [2024-04-24 05:26:44.358994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.391 [2024-04-24 05:26:44.359019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.391 qpair failed and we were unable to recover it. 00:31:07.391 [2024-04-24 05:26:44.359262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.391 [2024-04-24 05:26:44.359470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.392 [2024-04-24 05:26:44.359498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.392 qpair failed and we were unable to recover it. 00:31:07.392 [2024-04-24 05:26:44.359671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.392 [2024-04-24 05:26:44.359795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.392 [2024-04-24 05:26:44.359820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.392 qpair failed and we were unable to recover it. 
00:31:07.392 [2024-04-24 05:26:44.359935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.392 [2024-04-24 05:26:44.360060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.392 [2024-04-24 05:26:44.360085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.392 qpair failed and we were unable to recover it. 00:31:07.392 [2024-04-24 05:26:44.360247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.392 [2024-04-24 05:26:44.360420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.392 [2024-04-24 05:26:44.360445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.392 qpair failed and we were unable to recover it. 00:31:07.392 [2024-04-24 05:26:44.360595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.392 [2024-04-24 05:26:44.360723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.392 [2024-04-24 05:26:44.360749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.392 qpair failed and we were unable to recover it. 00:31:07.392 [2024-04-24 05:26:44.360908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.392 [2024-04-24 05:26:44.361062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.392 [2024-04-24 05:26:44.361086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.392 qpair failed and we were unable to recover it. 
00:31:07.392 [2024-04-24 05:26:44.361238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.392 [2024-04-24 05:26:44.361362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.392 [2024-04-24 05:26:44.361386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.392 qpair failed and we were unable to recover it. 00:31:07.392 [2024-04-24 05:26:44.361567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.392 [2024-04-24 05:26:44.361702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.392 [2024-04-24 05:26:44.361728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.392 qpair failed and we were unable to recover it. 00:31:07.392 [2024-04-24 05:26:44.361907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.392 [2024-04-24 05:26:44.362055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.392 [2024-04-24 05:26:44.362084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.392 qpair failed and we were unable to recover it. 00:31:07.392 [2024-04-24 05:26:44.362255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.392 [2024-04-24 05:26:44.362423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.392 [2024-04-24 05:26:44.362450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.392 qpair failed and we were unable to recover it. 
00:31:07.392 [2024-04-24 05:26:44.362616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.392 [2024-04-24 05:26:44.362811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.392 [2024-04-24 05:26:44.362836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.392 qpair failed and we were unable to recover it. 00:31:07.392 [2024-04-24 05:26:44.363008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.392 [2024-04-24 05:26:44.363154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.392 [2024-04-24 05:26:44.363179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.392 qpair failed and we were unable to recover it. 00:31:07.392 [2024-04-24 05:26:44.363411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.392 [2024-04-24 05:26:44.363569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.392 [2024-04-24 05:26:44.363598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.392 qpair failed and we were unable to recover it. 00:31:07.392 [2024-04-24 05:26:44.363787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.392 [2024-04-24 05:26:44.363944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.392 [2024-04-24 05:26:44.363969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.392 qpair failed and we were unable to recover it. 
00:31:07.392 [2024-04-24 05:26:44.364148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.392 [2024-04-24 05:26:44.364267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.392 [2024-04-24 05:26:44.364292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.392 qpair failed and we were unable to recover it. 00:31:07.392 [2024-04-24 05:26:44.364417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.392 [2024-04-24 05:26:44.364565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.392 [2024-04-24 05:26:44.364589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.392 qpair failed and we were unable to recover it. 00:31:07.392 [2024-04-24 05:26:44.364743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.392 [2024-04-24 05:26:44.364898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.392 [2024-04-24 05:26:44.364923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.392 qpair failed and we were unable to recover it. 00:31:07.392 [2024-04-24 05:26:44.365042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.392 [2024-04-24 05:26:44.365196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.392 [2024-04-24 05:26:44.365222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.392 qpair failed and we were unable to recover it. 
00:31:07.392 [2024-04-24 05:26:44.365347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.392 [2024-04-24 05:26:44.365510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.392 [2024-04-24 05:26:44.365535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.392 qpair failed and we were unable to recover it. 00:31:07.392 [2024-04-24 05:26:44.365690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.392 [2024-04-24 05:26:44.365846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.392 [2024-04-24 05:26:44.365871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.392 qpair failed and we were unable to recover it. 00:31:07.392 [2024-04-24 05:26:44.366020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.392 [2024-04-24 05:26:44.366178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.392 [2024-04-24 05:26:44.366203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.392 qpair failed and we were unable to recover it. 00:31:07.392 [2024-04-24 05:26:44.366352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.392 [2024-04-24 05:26:44.366506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.393 [2024-04-24 05:26:44.366531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.393 qpair failed and we were unable to recover it. 
00:31:07.393 [2024-04-24 05:26:44.366649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.393 [2024-04-24 05:26:44.366811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.393 [2024-04-24 05:26:44.366836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.393 qpair failed and we were unable to recover it. 00:31:07.393 [2024-04-24 05:26:44.367012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.393 [2024-04-24 05:26:44.367159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.393 [2024-04-24 05:26:44.367184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.393 qpair failed and we were unable to recover it. 00:31:07.393 [2024-04-24 05:26:44.367334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.393 [2024-04-24 05:26:44.367507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.393 [2024-04-24 05:26:44.367533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.393 qpair failed and we were unable to recover it. 00:31:07.393 [2024-04-24 05:26:44.367656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.393 [2024-04-24 05:26:44.367814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.393 [2024-04-24 05:26:44.367839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.393 qpair failed and we were unable to recover it. 
00:31:07.393 [2024-04-24 05:26:44.367991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.393 [2024-04-24 05:26:44.368144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.393 [2024-04-24 05:26:44.368170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.393 qpair failed and we were unable to recover it. 00:31:07.393 [2024-04-24 05:26:44.368323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.393 [2024-04-24 05:26:44.368502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.393 [2024-04-24 05:26:44.368527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.393 qpair failed and we were unable to recover it. 00:31:07.393 [2024-04-24 05:26:44.368691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.393 [2024-04-24 05:26:44.368866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.393 [2024-04-24 05:26:44.368891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.393 qpair failed and we were unable to recover it. 00:31:07.393 [2024-04-24 05:26:44.369071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.393 [2024-04-24 05:26:44.369221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.393 [2024-04-24 05:26:44.369246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.393 qpair failed and we were unable to recover it. 
00:31:07.393 [2024-04-24 05:26:44.369405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.393 [2024-04-24 05:26:44.369586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.393 [2024-04-24 05:26:44.369611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.393 qpair failed and we were unable to recover it. 00:31:07.393 [2024-04-24 05:26:44.369773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.393 [2024-04-24 05:26:44.369925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.393 [2024-04-24 05:26:44.369950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.393 qpair failed and we were unable to recover it. 00:31:07.393 [2024-04-24 05:26:44.370069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.393 [2024-04-24 05:26:44.370242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.393 [2024-04-24 05:26:44.370268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.393 qpair failed and we were unable to recover it. 00:31:07.393 [2024-04-24 05:26:44.370391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.393 [2024-04-24 05:26:44.370542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.393 [2024-04-24 05:26:44.370567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.393 qpair failed and we were unable to recover it. 
00:31:07.393 [2024-04-24 05:26:44.370717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.393 [2024-04-24 05:26:44.370834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.393 [2024-04-24 05:26:44.370858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.393 qpair failed and we were unable to recover it. 00:31:07.393 [2024-04-24 05:26:44.370986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.393 [2024-04-24 05:26:44.371112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.393 [2024-04-24 05:26:44.371137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.393 qpair failed and we were unable to recover it. 00:31:07.393 [2024-04-24 05:26:44.371266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.393 [2024-04-24 05:26:44.371411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.393 [2024-04-24 05:26:44.371436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.393 qpair failed and we were unable to recover it. 00:31:07.393 [2024-04-24 05:26:44.371561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.393 [2024-04-24 05:26:44.371716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.393 [2024-04-24 05:26:44.371741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.393 qpair failed and we were unable to recover it. 
00:31:07.393 [2024-04-24 05:26:44.371905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.393 [2024-04-24 05:26:44.372056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.393 [2024-04-24 05:26:44.372081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.393 qpair failed and we were unable to recover it. 00:31:07.393 [2024-04-24 05:26:44.372267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.393 [2024-04-24 05:26:44.372416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.393 [2024-04-24 05:26:44.372440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.393 qpair failed and we were unable to recover it. 00:31:07.393 [2024-04-24 05:26:44.372615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.393 [2024-04-24 05:26:44.372781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.393 [2024-04-24 05:26:44.372806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.393 qpair failed and we were unable to recover it. 00:31:07.393 [2024-04-24 05:26:44.372925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.393 [2024-04-24 05:26:44.373040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.393 [2024-04-24 05:26:44.373064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.393 qpair failed and we were unable to recover it. 
00:31:07.393 [2024-04-24 05:26:44.373212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.393 [2024-04-24 05:26:44.373380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.393 [2024-04-24 05:26:44.373403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.393 qpair failed and we were unable to recover it. 00:31:07.393 [2024-04-24 05:26:44.373575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.393 [2024-04-24 05:26:44.373726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.393 [2024-04-24 05:26:44.373751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.393 qpair failed and we were unable to recover it. 00:31:07.393 [2024-04-24 05:26:44.373865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.393 [2024-04-24 05:26:44.374015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.393 [2024-04-24 05:26:44.374041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.393 qpair failed and we were unable to recover it. 00:31:07.394 [2024-04-24 05:26:44.374193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.374326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.374350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.394 qpair failed and we were unable to recover it. 
00:31:07.394 [2024-04-24 05:26:44.374494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.374671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.374696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.394 qpair failed and we were unable to recover it. 00:31:07.394 [2024-04-24 05:26:44.374875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.375029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.375056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.394 qpair failed and we were unable to recover it. 00:31:07.394 [2024-04-24 05:26:44.375223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.375377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.375402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.394 qpair failed and we were unable to recover it. 00:31:07.394 [2024-04-24 05:26:44.375554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.375709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.375734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.394 qpair failed and we were unable to recover it. 
00:31:07.394 [2024-04-24 05:26:44.375862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.376012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.376037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.394 qpair failed and we were unable to recover it. 00:31:07.394 [2024-04-24 05:26:44.376153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.376277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.376302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.394 qpair failed and we were unable to recover it. 00:31:07.394 [2024-04-24 05:26:44.376475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.376647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.376673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.394 qpair failed and we were unable to recover it. 00:31:07.394 [2024-04-24 05:26:44.376801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.376947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.376972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.394 qpair failed and we were unable to recover it. 
00:31:07.394 [2024-04-24 05:26:44.377120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.377267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.377291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.394 qpair failed and we were unable to recover it. 00:31:07.394 [2024-04-24 05:26:44.377446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.377595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.377621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.394 qpair failed and we were unable to recover it. 00:31:07.394 [2024-04-24 05:26:44.377753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.377905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.377931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.394 qpair failed and we were unable to recover it. 00:31:07.394 [2024-04-24 05:26:44.378059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.378181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.378206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.394 qpair failed and we were unable to recover it. 
00:31:07.394 [2024-04-24 05:26:44.378330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.378467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.378491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.394 qpair failed and we were unable to recover it. 00:31:07.394 [2024-04-24 05:26:44.378652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.378781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.378806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.394 qpair failed and we were unable to recover it. 00:31:07.394 [2024-04-24 05:26:44.378936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.379057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.379086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.394 qpair failed and we were unable to recover it. 00:31:07.394 [2024-04-24 05:26:44.379237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.379403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.379428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.394 qpair failed and we were unable to recover it. 
00:31:07.394 [2024-04-24 05:26:44.379605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.379740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.379766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.394 qpair failed and we were unable to recover it. 00:31:07.394 [2024-04-24 05:26:44.379921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.380079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.380105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.394 qpair failed and we were unable to recover it. 00:31:07.394 [2024-04-24 05:26:44.380281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.380454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.380482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.394 qpair failed and we were unable to recover it. 00:31:07.394 [2024-04-24 05:26:44.380618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.380799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.380824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.394 qpair failed and we were unable to recover it. 
00:31:07.394 [2024-04-24 05:26:44.381000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.381166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.381190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.394 qpair failed and we were unable to recover it. 00:31:07.394 [2024-04-24 05:26:44.381315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.381488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.381513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.394 qpair failed and we were unable to recover it. 00:31:07.394 [2024-04-24 05:26:44.381689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.381811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.381836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.394 qpair failed and we were unable to recover it. 00:31:07.394 [2024-04-24 05:26:44.381987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.382166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.394 [2024-04-24 05:26:44.382191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.394 qpair failed and we were unable to recover it. 
00:31:07.394 [2024-04-24 05:26:44.382371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.395 [2024-04-24 05:26:44.382516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.395 [2024-04-24 05:26:44.382541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.395 qpair failed and we were unable to recover it. 00:31:07.395 [2024-04-24 05:26:44.382658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.395 [2024-04-24 05:26:44.382815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.395 [2024-04-24 05:26:44.382841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.395 qpair failed and we were unable to recover it. 00:31:07.395 [2024-04-24 05:26:44.382960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.395 [2024-04-24 05:26:44.383109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.395 [2024-04-24 05:26:44.383134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.395 qpair failed and we were unable to recover it. 00:31:07.395 [2024-04-24 05:26:44.383298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.395 [2024-04-24 05:26:44.383443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.395 [2024-04-24 05:26:44.383468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.395 qpair failed and we were unable to recover it. 
00:31:07.395 [2024-04-24 05:26:44.383617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.395 [2024-04-24 05:26:44.383789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.395 [2024-04-24 05:26:44.383814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.395 qpair failed and we were unable to recover it. 00:31:07.395 [2024-04-24 05:26:44.383966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.395 [2024-04-24 05:26:44.384143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.395 [2024-04-24 05:26:44.384168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.395 qpair failed and we were unable to recover it. 00:31:07.395 [2024-04-24 05:26:44.384318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.395 [2024-04-24 05:26:44.384439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.395 [2024-04-24 05:26:44.384464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.395 qpair failed and we were unable to recover it. 00:31:07.395 [2024-04-24 05:26:44.384580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.395 [2024-04-24 05:26:44.384696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.395 [2024-04-24 05:26:44.384721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.395 qpair failed and we were unable to recover it. 
00:31:07.395 [2024-04-24 05:26:44.384889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.395 [2024-04-24 05:26:44.385065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.395 [2024-04-24 05:26:44.385090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.395 qpair failed and we were unable to recover it. 00:31:07.395 [2024-04-24 05:26:44.385214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.395 [2024-04-24 05:26:44.385334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.395 [2024-04-24 05:26:44.385360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.395 qpair failed and we were unable to recover it. 00:31:07.395 [2024-04-24 05:26:44.385494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.395 [2024-04-24 05:26:44.385662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.395 [2024-04-24 05:26:44.385689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.395 qpair failed and we were unable to recover it. 00:31:07.395 [2024-04-24 05:26:44.385813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.395 [2024-04-24 05:26:44.385932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.395 [2024-04-24 05:26:44.385957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.395 qpair failed and we were unable to recover it. 
00:31:07.395 [2024-04-24 05:26:44.386104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.395 [2024-04-24 05:26:44.386252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.395 [2024-04-24 05:26:44.386277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.395 qpair failed and we were unable to recover it. 00:31:07.395 [2024-04-24 05:26:44.386449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.395 [2024-04-24 05:26:44.386564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.395 [2024-04-24 05:26:44.386589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.395 qpair failed and we were unable to recover it. 00:31:07.395 [2024-04-24 05:26:44.386716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.395 [2024-04-24 05:26:44.386846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.395 [2024-04-24 05:26:44.386871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.395 qpair failed and we were unable to recover it. 00:31:07.395 [2024-04-24 05:26:44.387025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.395 [2024-04-24 05:26:44.387156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.395 [2024-04-24 05:26:44.387181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.395 qpair failed and we were unable to recover it. 
00:31:07.395 [2024-04-24 05:26:44.387355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.395 [2024-04-24 05:26:44.387530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.395 [2024-04-24 05:26:44.387555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.395 qpair failed and we were unable to recover it.
[The same sequence — two posix_sock_create connect() failures (errno = 111, ECONNREFUSED) followed by an nvme_tcp_qpair_connect_sock error for tqpair=0x7f6d64000b90 (addr=10.0.0.2, port=4420) and "qpair failed and we were unable to recover it." — repeats continuously from 05:26:44.387709 through 05:26:44.415747 with no other output.]
00:31:07.399 [2024-04-24 05:26:44.415893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.399 [2024-04-24 05:26:44.416015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.399 [2024-04-24 05:26:44.416039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.399 qpair failed and we were unable to recover it. 00:31:07.399 [2024-04-24 05:26:44.416192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.399 [2024-04-24 05:26:44.416310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.399 [2024-04-24 05:26:44.416334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.399 qpair failed and we were unable to recover it. 00:31:07.399 [2024-04-24 05:26:44.416457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.399 [2024-04-24 05:26:44.416620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.399 [2024-04-24 05:26:44.416661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.399 qpair failed and we were unable to recover it. 00:31:07.399 [2024-04-24 05:26:44.416815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.399 [2024-04-24 05:26:44.416987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.399 [2024-04-24 05:26:44.417012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.399 qpair failed and we were unable to recover it. 
00:31:07.399 [2024-04-24 05:26:44.417129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.399 [2024-04-24 05:26:44.417282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.399 [2024-04-24 05:26:44.417308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.399 qpair failed and we were unable to recover it. 00:31:07.399 [2024-04-24 05:26:44.417470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.399 [2024-04-24 05:26:44.417588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.399 [2024-04-24 05:26:44.417612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.399 qpair failed and we were unable to recover it. 00:31:07.399 [2024-04-24 05:26:44.417776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.399 [2024-04-24 05:26:44.417931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.399 [2024-04-24 05:26:44.417956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.399 qpair failed and we were unable to recover it. 00:31:07.399 [2024-04-24 05:26:44.418107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.399 [2024-04-24 05:26:44.418282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.399 [2024-04-24 05:26:44.418307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.399 qpair failed and we were unable to recover it. 
00:31:07.399 [2024-04-24 05:26:44.418464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.399 [2024-04-24 05:26:44.418587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.399 [2024-04-24 05:26:44.418611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.400 qpair failed and we were unable to recover it. 00:31:07.400 [2024-04-24 05:26:44.418771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.418890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.418915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.400 qpair failed and we were unable to recover it. 00:31:07.400 [2024-04-24 05:26:44.419091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.419275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.419301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.400 qpair failed and we were unable to recover it. 00:31:07.400 [2024-04-24 05:26:44.419480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.419601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.419626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.400 qpair failed and we were unable to recover it. 
00:31:07.400 [2024-04-24 05:26:44.419811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.419961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.419986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.400 qpair failed and we were unable to recover it. 00:31:07.400 [2024-04-24 05:26:44.420102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.420218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.420242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.400 qpair failed and we were unable to recover it. 00:31:07.400 [2024-04-24 05:26:44.420420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.420538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.420563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.400 qpair failed and we were unable to recover it. 00:31:07.400 [2024-04-24 05:26:44.420707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.420829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.420853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.400 qpair failed and we were unable to recover it. 
00:31:07.400 [2024-04-24 05:26:44.420978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.421152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.421178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.400 qpair failed and we were unable to recover it. 00:31:07.400 [2024-04-24 05:26:44.421350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.421526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.421551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.400 qpair failed and we were unable to recover it. 00:31:07.400 [2024-04-24 05:26:44.421701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.421850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.421875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.400 qpair failed and we were unable to recover it. 00:31:07.400 [2024-04-24 05:26:44.421998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.422147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.422172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.400 qpair failed and we were unable to recover it. 
00:31:07.400 [2024-04-24 05:26:44.422319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.422437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.422461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.400 qpair failed and we were unable to recover it. 00:31:07.400 [2024-04-24 05:26:44.422608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.422761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.422786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.400 qpair failed and we were unable to recover it. 00:31:07.400 [2024-04-24 05:26:44.422941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.423091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.423116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.400 qpair failed and we were unable to recover it. 00:31:07.400 [2024-04-24 05:26:44.423292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.423448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.423473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.400 qpair failed and we were unable to recover it. 
00:31:07.400 [2024-04-24 05:26:44.423645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.423818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.423843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.400 qpair failed and we were unable to recover it. 00:31:07.400 [2024-04-24 05:26:44.423967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.424091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.424116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.400 qpair failed and we were unable to recover it. 00:31:07.400 [2024-04-24 05:26:44.424292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.424470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.424495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.400 qpair failed and we were unable to recover it. 00:31:07.400 [2024-04-24 05:26:44.424643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.424817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.424843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.400 qpair failed and we were unable to recover it. 
00:31:07.400 [2024-04-24 05:26:44.424960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.425103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.425128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.400 qpair failed and we were unable to recover it. 00:31:07.400 [2024-04-24 05:26:44.425301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.425444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.425468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.400 qpair failed and we were unable to recover it. 00:31:07.400 [2024-04-24 05:26:44.425647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.425798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.425823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.400 qpair failed and we were unable to recover it. 00:31:07.400 [2024-04-24 05:26:44.425943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.426066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.426091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.400 qpair failed and we were unable to recover it. 
00:31:07.400 [2024-04-24 05:26:44.426266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.426439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.426464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.400 qpair failed and we were unable to recover it. 00:31:07.400 [2024-04-24 05:26:44.426611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.426766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.426793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.400 qpair failed and we were unable to recover it. 00:31:07.400 [2024-04-24 05:26:44.426939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.427108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.400 [2024-04-24 05:26:44.427133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.400 qpair failed and we were unable to recover it. 00:31:07.400 [2024-04-24 05:26:44.427308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.401 [2024-04-24 05:26:44.427458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.401 [2024-04-24 05:26:44.427483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.401 qpair failed and we were unable to recover it. 
00:31:07.401 [2024-04-24 05:26:44.427660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.401 [2024-04-24 05:26:44.427819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.401 [2024-04-24 05:26:44.427845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.401 qpair failed and we were unable to recover it. 00:31:07.401 [2024-04-24 05:26:44.427970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.401 [2024-04-24 05:26:44.428115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.401 [2024-04-24 05:26:44.428139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.401 qpair failed and we were unable to recover it. 00:31:07.401 [2024-04-24 05:26:44.428265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.401 [2024-04-24 05:26:44.428448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.401 [2024-04-24 05:26:44.428472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.401 qpair failed and we were unable to recover it. 00:31:07.401 [2024-04-24 05:26:44.428595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.401 [2024-04-24 05:26:44.428765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.401 [2024-04-24 05:26:44.428792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.401 qpair failed and we were unable to recover it. 
00:31:07.401 [2024-04-24 05:26:44.428964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.401 [2024-04-24 05:26:44.429137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.401 [2024-04-24 05:26:44.429162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.401 qpair failed and we were unable to recover it. 00:31:07.401 [2024-04-24 05:26:44.429314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.401 [2024-04-24 05:26:44.429463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.401 [2024-04-24 05:26:44.429488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.401 qpair failed and we were unable to recover it. 00:31:07.401 [2024-04-24 05:26:44.429661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.401 [2024-04-24 05:26:44.429790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.401 [2024-04-24 05:26:44.429816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.401 qpair failed and we were unable to recover it. 00:31:07.401 [2024-04-24 05:26:44.429942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.401 [2024-04-24 05:26:44.430089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.401 [2024-04-24 05:26:44.430114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.401 qpair failed and we were unable to recover it. 
00:31:07.401 [2024-04-24 05:26:44.430262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.401 [2024-04-24 05:26:44.430408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.401 [2024-04-24 05:26:44.430433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.401 qpair failed and we were unable to recover it. 00:31:07.401 [2024-04-24 05:26:44.430606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.401 [2024-04-24 05:26:44.430797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.401 [2024-04-24 05:26:44.430822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.401 qpair failed and we were unable to recover it. 00:31:07.401 [2024-04-24 05:26:44.430976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.401 [2024-04-24 05:26:44.431128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.401 [2024-04-24 05:26:44.431153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.401 qpair failed and we were unable to recover it. 00:31:07.401 [2024-04-24 05:26:44.431277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.401 [2024-04-24 05:26:44.431393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.401 [2024-04-24 05:26:44.431418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.401 qpair failed and we were unable to recover it. 
00:31:07.401 [2024-04-24 05:26:44.431540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.401 [2024-04-24 05:26:44.431665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.401 [2024-04-24 05:26:44.431690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.401 qpair failed and we were unable to recover it. 00:31:07.401 [2024-04-24 05:26:44.431854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.401 [2024-04-24 05:26:44.431996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.401 [2024-04-24 05:26:44.432022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.401 qpair failed and we were unable to recover it. 00:31:07.401 [2024-04-24 05:26:44.432198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.401 [2024-04-24 05:26:44.432348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.401 [2024-04-24 05:26:44.432373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.401 qpair failed and we were unable to recover it. 00:31:07.401 [2024-04-24 05:26:44.432524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.401 [2024-04-24 05:26:44.432670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.401 [2024-04-24 05:26:44.432696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.401 qpair failed and we were unable to recover it. 
00:31:07.401 [2024-04-24 05:26:44.432875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.401 [2024-04-24 05:26:44.433026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.401 [2024-04-24 05:26:44.433051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.401 qpair failed and we were unable to recover it. 00:31:07.401 [2024-04-24 05:26:44.433180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.401 [2024-04-24 05:26:44.433303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.401 [2024-04-24 05:26:44.433328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.401 qpair failed and we were unable to recover it. 00:31:07.401 [2024-04-24 05:26:44.433506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.401 [2024-04-24 05:26:44.433645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.401 [2024-04-24 05:26:44.433691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.401 qpair failed and we were unable to recover it. 00:31:07.401 [2024-04-24 05:26:44.433846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.401 [2024-04-24 05:26:44.433999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.401 [2024-04-24 05:26:44.434023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.401 qpair failed and we were unable to recover it. 
00:31:07.401 [2024-04-24 05:26:44.434148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.401 [2024-04-24 05:26:44.434318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.401 [2024-04-24 05:26:44.434342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.401 qpair failed and we were unable to recover it. 00:31:07.401 [2024-04-24 05:26:44.434473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.401 [2024-04-24 05:26:44.434617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.402 [2024-04-24 05:26:44.434648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.402 qpair failed and we were unable to recover it. 00:31:07.402 [2024-04-24 05:26:44.434830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.402 [2024-04-24 05:26:44.434951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.402 [2024-04-24 05:26:44.434975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.402 qpair failed and we were unable to recover it. 00:31:07.402 [2024-04-24 05:26:44.435124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.402 [2024-04-24 05:26:44.435285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.402 [2024-04-24 05:26:44.435310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.402 qpair failed and we were unable to recover it. 
00:31:07.402 [2024-04-24 05:26:44.435472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.402 [2024-04-24 05:26:44.435643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.402 [2024-04-24 05:26:44.435668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.402 qpair failed and we were unable to recover it. 00:31:07.402 [2024-04-24 05:26:44.435818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.402 [2024-04-24 05:26:44.435969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.402 [2024-04-24 05:26:44.435994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.402 qpair failed and we were unable to recover it. 00:31:07.402 [2024-04-24 05:26:44.436143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.402 [2024-04-24 05:26:44.436294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.402 [2024-04-24 05:26:44.436319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.402 qpair failed and we were unable to recover it. 00:31:07.402 [2024-04-24 05:26:44.436469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.402 [2024-04-24 05:26:44.436622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.402 [2024-04-24 05:26:44.436659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.402 qpair failed and we were unable to recover it. 
00:31:07.402 [2024-04-24 05:26:44.436810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.402 [2024-04-24 05:26:44.436954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.402 [2024-04-24 05:26:44.436979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.402 qpair failed and we were unable to recover it.
00:31:07.402 [2024-04-24 05:26:44.437126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.402 [2024-04-24 05:26:44.437271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.402 [2024-04-24 05:26:44.437297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.402 qpair failed and we were unable to recover it.
00:31:07.402 [2024-04-24 05:26:44.437449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.402 [2024-04-24 05:26:44.437625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.402 [2024-04-24 05:26:44.437658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.402 qpair failed and we were unable to recover it.
00:31:07.402 [2024-04-24 05:26:44.437832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.402 [2024-04-24 05:26:44.437993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.402 [2024-04-24 05:26:44.438018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.402 qpair failed and we were unable to recover it.
00:31:07.402 [2024-04-24 05:26:44.438162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.402 [2024-04-24 05:26:44.438281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.402 [2024-04-24 05:26:44.438306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.402 qpair failed and we were unable to recover it.
00:31:07.402 [2024-04-24 05:26:44.438456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.402 [2024-04-24 05:26:44.438641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.402 [2024-04-24 05:26:44.438667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.402 qpair failed and we were unable to recover it.
00:31:07.402 [2024-04-24 05:26:44.438790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.402 [2024-04-24 05:26:44.438949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.402 [2024-04-24 05:26:44.438974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.402 qpair failed and we were unable to recover it.
00:31:07.402 [2024-04-24 05:26:44.439134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.402 [2024-04-24 05:26:44.439286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.402 [2024-04-24 05:26:44.439310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.402 qpair failed and we were unable to recover it.
00:31:07.402 [2024-04-24 05:26:44.439458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.402 [2024-04-24 05:26:44.439618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.402 [2024-04-24 05:26:44.439649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.402 qpair failed and we were unable to recover it.
00:31:07.402 [2024-04-24 05:26:44.439771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.402 [2024-04-24 05:26:44.439946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.402 [2024-04-24 05:26:44.439971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.402 qpair failed and we were unable to recover it.
00:31:07.402 [2024-04-24 05:26:44.440093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.402 [2024-04-24 05:26:44.440250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.402 [2024-04-24 05:26:44.440276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.402 qpair failed and we were unable to recover it.
00:31:07.402 [2024-04-24 05:26:44.440391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.402 [2024-04-24 05:26:44.440541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.402 [2024-04-24 05:26:44.440565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.402 qpair failed and we were unable to recover it.
00:31:07.402 [2024-04-24 05:26:44.440749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.402 [2024-04-24 05:26:44.440869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.402 [2024-04-24 05:26:44.440892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.402 qpair failed and we were unable to recover it.
00:31:07.402 [2024-04-24 05:26:44.441034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.402 [2024-04-24 05:26:44.441176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.402 [2024-04-24 05:26:44.441203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.402 qpair failed and we were unable to recover it.
00:31:07.402 [2024-04-24 05:26:44.441352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.402 [2024-04-24 05:26:44.441473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.402 [2024-04-24 05:26:44.441499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.402 qpair failed and we were unable to recover it.
00:31:07.402 [2024-04-24 05:26:44.441676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.402 [2024-04-24 05:26:44.441860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.402 [2024-04-24 05:26:44.441888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.402 qpair failed and we were unable to recover it.
00:31:07.402 [2024-04-24 05:26:44.442089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.402 [2024-04-24 05:26:44.442237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.402 [2024-04-24 05:26:44.442263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.402 qpair failed and we were unable to recover it.
00:31:07.402 [2024-04-24 05:26:44.442382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.402 [2024-04-24 05:26:44.442577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.402 [2024-04-24 05:26:44.442606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.402 qpair failed and we were unable to recover it.
00:31:07.402 [2024-04-24 05:26:44.442762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.402 [2024-04-24 05:26:44.442905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.402 [2024-04-24 05:26:44.442931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.402 qpair failed and we were unable to recover it.
00:31:07.402 [2024-04-24 05:26:44.443051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.402 [2024-04-24 05:26:44.443227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.402 [2024-04-24 05:26:44.443256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.403 qpair failed and we were unable to recover it.
00:31:07.403 [2024-04-24 05:26:44.443427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.403 [2024-04-24 05:26:44.443589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.403 [2024-04-24 05:26:44.443618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.403 qpair failed and we were unable to recover it.
00:31:07.403 [2024-04-24 05:26:44.443802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.403 [2024-04-24 05:26:44.443945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.403 [2024-04-24 05:26:44.443974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.403 qpair failed and we were unable to recover it.
00:31:07.403 [2024-04-24 05:26:44.444147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.403 [2024-04-24 05:26:44.444319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.403 [2024-04-24 05:26:44.444349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.403 qpair failed and we were unable to recover it.
00:31:07.403 [2024-04-24 05:26:44.444552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.403 [2024-04-24 05:26:44.444722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.403 [2024-04-24 05:26:44.444752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.403 qpair failed and we were unable to recover it.
00:31:07.403 [2024-04-24 05:26:44.444896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.403 [2024-04-24 05:26:44.445072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.403 [2024-04-24 05:26:44.445112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.403 qpair failed and we were unable to recover it.
00:31:07.403 [2024-04-24 05:26:44.445313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.403 [2024-04-24 05:26:44.445515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.403 [2024-04-24 05:26:44.445541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.403 qpair failed and we were unable to recover it.
00:31:07.403 [2024-04-24 05:26:44.445728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.403 [2024-04-24 05:26:44.445905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.403 [2024-04-24 05:26:44.445934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.403 qpair failed and we were unable to recover it.
00:31:07.403 [2024-04-24 05:26:44.446075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.403 [2024-04-24 05:26:44.446222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.403 [2024-04-24 05:26:44.446251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.403 qpair failed and we were unable to recover it.
00:31:07.403 [2024-04-24 05:26:44.446442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.403 [2024-04-24 05:26:44.446640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.403 [2024-04-24 05:26:44.446669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.403 qpair failed and we were unable to recover it.
00:31:07.403 [2024-04-24 05:26:44.446835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.403 [2024-04-24 05:26:44.447065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.403 [2024-04-24 05:26:44.447115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.403 qpair failed and we were unable to recover it.
00:31:07.403 [2024-04-24 05:26:44.447286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.403 [2024-04-24 05:26:44.447408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.403 [2024-04-24 05:26:44.447450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.403 qpair failed and we were unable to recover it.
00:31:07.403 [2024-04-24 05:26:44.447640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.403 [2024-04-24 05:26:44.447804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.403 [2024-04-24 05:26:44.447833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.403 qpair failed and we were unable to recover it.
00:31:07.403 [2024-04-24 05:26:44.448023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.403 [2024-04-24 05:26:44.448285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.403 [2024-04-24 05:26:44.448335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.403 qpair failed and we were unable to recover it.
00:31:07.403 [2024-04-24 05:26:44.448475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.403 [2024-04-24 05:26:44.448667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.403 [2024-04-24 05:26:44.448697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.403 qpair failed and we were unable to recover it.
00:31:07.403 [2024-04-24 05:26:44.448888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.403 [2024-04-24 05:26:44.449171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.403 [2024-04-24 05:26:44.449225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.403 qpair failed and we were unable to recover it.
00:31:07.403 [2024-04-24 05:26:44.449409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.403 [2024-04-24 05:26:44.449560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.403 [2024-04-24 05:26:44.449586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.403 qpair failed and we were unable to recover it.
00:31:07.403 [2024-04-24 05:26:44.449786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.403 [2024-04-24 05:26:44.449959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.403 [2024-04-24 05:26:44.449984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.403 qpair failed and we were unable to recover it.
00:31:07.403 [2024-04-24 05:26:44.450142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.403 [2024-04-24 05:26:44.450294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.403 [2024-04-24 05:26:44.450324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.403 qpair failed and we were unable to recover it.
00:31:07.403 [2024-04-24 05:26:44.450519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.450700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.450728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.404 qpair failed and we were unable to recover it.
00:31:07.404 [2024-04-24 05:26:44.450907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.451084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.451128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.404 qpair failed and we were unable to recover it.
00:31:07.404 [2024-04-24 05:26:44.451293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.451455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.451484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.404 qpair failed and we were unable to recover it.
00:31:07.404 [2024-04-24 05:26:44.451747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.451947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.451973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.404 qpair failed and we were unable to recover it.
00:31:07.404 [2024-04-24 05:26:44.452121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.452277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.452303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.404 qpair failed and we were unable to recover it.
00:31:07.404 [2024-04-24 05:26:44.452476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.452645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.452675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.404 qpair failed and we were unable to recover it.
00:31:07.404 [2024-04-24 05:26:44.452860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.453035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.453078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.404 qpair failed and we were unable to recover it.
00:31:07.404 [2024-04-24 05:26:44.453242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.453424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.453493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.404 qpair failed and we were unable to recover it.
00:31:07.404 [2024-04-24 05:26:44.453662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.453844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.453899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.404 qpair failed and we were unable to recover it.
00:31:07.404 [2024-04-24 05:26:44.454032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.454281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.454344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.404 qpair failed and we were unable to recover it.
00:31:07.404 [2024-04-24 05:26:44.454538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.454824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.454873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.404 qpair failed and we were unable to recover it.
00:31:07.404 [2024-04-24 05:26:44.455053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.455232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.455258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.404 qpair failed and we were unable to recover it.
00:31:07.404 [2024-04-24 05:26:44.455417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.455596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.455645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.404 qpair failed and we were unable to recover it.
00:31:07.404 [2024-04-24 05:26:44.455823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.455991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.456019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.404 qpair failed and we were unable to recover it.
00:31:07.404 [2024-04-24 05:26:44.456214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.456369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.456395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.404 qpair failed and we were unable to recover it.
00:31:07.404 [2024-04-24 05:26:44.456569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.456747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.456778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.404 qpair failed and we were unable to recover it.
00:31:07.404 [2024-04-24 05:26:44.456956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.457107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.457132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.404 qpair failed and we were unable to recover it.
00:31:07.404 [2024-04-24 05:26:44.457300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.457471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.457498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.404 qpair failed and we were unable to recover it.
00:31:07.404 [2024-04-24 05:26:44.457692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.457881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.457910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.404 qpair failed and we were unable to recover it.
00:31:07.404 [2024-04-24 05:26:44.458082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.458253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.458282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.404 qpair failed and we were unable to recover it.
00:31:07.404 [2024-04-24 05:26:44.458480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.458669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.458731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.404 qpair failed and we were unable to recover it.
00:31:07.404 [2024-04-24 05:26:44.458865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.459034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.459064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.404 qpair failed and we were unable to recover it.
00:31:07.404 [2024-04-24 05:26:44.459228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.459399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.459428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.404 qpair failed and we were unable to recover it.
00:31:07.404 [2024-04-24 05:26:44.459608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.459785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.459815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.404 qpair failed and we were unable to recover it.
00:31:07.404 [2024-04-24 05:26:44.459984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.460113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.404 [2024-04-24 05:26:44.460143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.404 qpair failed and we were unable to recover it.
00:31:07.404 [2024-04-24 05:26:44.460263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.405 [2024-04-24 05:26:44.460416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.405 [2024-04-24 05:26:44.460442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.405 qpair failed and we were unable to recover it.
00:31:07.405 [2024-04-24 05:26:44.460644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.405 [2024-04-24 05:26:44.460813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.405 [2024-04-24 05:26:44.460842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.405 qpair failed and we were unable to recover it.
00:31:07.405 [2024-04-24 05:26:44.461021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.405 [2024-04-24 05:26:44.461185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.405 [2024-04-24 05:26:44.461213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.405 qpair failed and we were unable to recover it.
00:31:07.405 [2024-04-24 05:26:44.461410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.405 [2024-04-24 05:26:44.461532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.405 [2024-04-24 05:26:44.461576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.405 qpair failed and we were unable to recover it.
00:31:07.405 [2024-04-24 05:26:44.461721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.405 [2024-04-24 05:26:44.461913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.405 [2024-04-24 05:26:44.461941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.405 qpair failed and we were unable to recover it.
00:31:07.405 [2024-04-24 05:26:44.462110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.405 [2024-04-24 05:26:44.462247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.405 [2024-04-24 05:26:44.462277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.405 qpair failed and we were unable to recover it.
00:31:07.405 [2024-04-24 05:26:44.462471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.405 [2024-04-24 05:26:44.462640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.405 [2024-04-24 05:26:44.462669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.405 qpair failed and we were unable to recover it.
00:31:07.405 [2024-04-24 05:26:44.462845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.405 [2024-04-24 05:26:44.462996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.405 [2024-04-24 05:26:44.463040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.405 qpair failed and we were unable to recover it.
00:31:07.405 [2024-04-24 05:26:44.463203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.405 [2024-04-24 05:26:44.463399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.405 [2024-04-24 05:26:44.463453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.405 qpair failed and we were unable to recover it.
00:31:07.405 [2024-04-24 05:26:44.463611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.405 [2024-04-24 05:26:44.463784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.405 [2024-04-24 05:26:44.463819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.405 qpair failed and we were unable to recover it. 00:31:07.405 [2024-04-24 05:26:44.463976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.405 [2024-04-24 05:26:44.464119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.405 [2024-04-24 05:26:44.464149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.405 qpair failed and we were unable to recover it. 00:31:07.405 [2024-04-24 05:26:44.464345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.405 [2024-04-24 05:26:44.464546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.405 [2024-04-24 05:26:44.464575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.405 qpair failed and we were unable to recover it. 00:31:07.405 [2024-04-24 05:26:44.464755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.405 [2024-04-24 05:26:44.464902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.405 [2024-04-24 05:26:44.464928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.405 qpair failed and we were unable to recover it. 
00:31:07.405 [2024-04-24 05:26:44.465129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.405 [2024-04-24 05:26:44.465356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.405 [2024-04-24 05:26:44.465406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.405 qpair failed and we were unable to recover it. 00:31:07.405 [2024-04-24 05:26:44.465565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.405 [2024-04-24 05:26:44.465759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.405 [2024-04-24 05:26:44.465789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.405 qpair failed and we were unable to recover it. 00:31:07.405 [2024-04-24 05:26:44.465958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.405 [2024-04-24 05:26:44.466109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.405 [2024-04-24 05:26:44.466156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.405 qpair failed and we were unable to recover it. 00:31:07.405 [2024-04-24 05:26:44.466354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.405 [2024-04-24 05:26:44.466505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.405 [2024-04-24 05:26:44.466547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.405 qpair failed and we were unable to recover it. 
00:31:07.405 [2024-04-24 05:26:44.466696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.405 [2024-04-24 05:26:44.466834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.405 [2024-04-24 05:26:44.466864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.405 qpair failed and we were unable to recover it. 00:31:07.405 [2024-04-24 05:26:44.467050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.405 [2024-04-24 05:26:44.467279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.405 [2024-04-24 05:26:44.467337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.405 qpair failed and we were unable to recover it. 00:31:07.405 [2024-04-24 05:26:44.467532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.405 [2024-04-24 05:26:44.467653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.405 [2024-04-24 05:26:44.467683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.405 qpair failed and we were unable to recover it. 00:31:07.405 [2024-04-24 05:26:44.467839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.405 [2024-04-24 05:26:44.468055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.405 [2024-04-24 05:26:44.468083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.405 qpair failed and we were unable to recover it. 
00:31:07.405 [2024-04-24 05:26:44.468245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.405 [2024-04-24 05:26:44.468407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.405 [2024-04-24 05:26:44.468437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.405 qpair failed and we were unable to recover it. 00:31:07.405 [2024-04-24 05:26:44.468639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.405 [2024-04-24 05:26:44.468834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.405 [2024-04-24 05:26:44.468860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.405 qpair failed and we were unable to recover it. 00:31:07.405 [2024-04-24 05:26:44.469010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.405 [2024-04-24 05:26:44.469203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.405 [2024-04-24 05:26:44.469231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.405 qpair failed and we were unable to recover it. 00:31:07.405 [2024-04-24 05:26:44.469397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.405 [2024-04-24 05:26:44.469560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.405 [2024-04-24 05:26:44.469588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.405 qpair failed and we were unable to recover it. 
00:31:07.405 [2024-04-24 05:26:44.469755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.405 [2024-04-24 05:26:44.469904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.405 [2024-04-24 05:26:44.469930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.405 qpair failed and we were unable to recover it. 00:31:07.405 [2024-04-24 05:26:44.470137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.405 [2024-04-24 05:26:44.470278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.405 [2024-04-24 05:26:44.470306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.405 qpair failed and we were unable to recover it. 00:31:07.405 [2024-04-24 05:26:44.470499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.470699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.470728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.406 qpair failed and we were unable to recover it. 00:31:07.406 [2024-04-24 05:26:44.470921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.471111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.471140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.406 qpair failed and we were unable to recover it. 
00:31:07.406 [2024-04-24 05:26:44.471300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.471466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.471499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.406 qpair failed and we were unable to recover it. 00:31:07.406 [2024-04-24 05:26:44.471699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.471849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.471875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.406 qpair failed and we were unable to recover it. 00:31:07.406 [2024-04-24 05:26:44.472022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.472166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.472208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.406 qpair failed and we were unable to recover it. 00:31:07.406 [2024-04-24 05:26:44.472372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.472561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.472589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.406 qpair failed and we were unable to recover it. 
00:31:07.406 [2024-04-24 05:26:44.472776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.472925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.472969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.406 qpair failed and we were unable to recover it. 00:31:07.406 [2024-04-24 05:26:44.473153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.473330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.473371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.406 qpair failed and we were unable to recover it. 00:31:07.406 [2024-04-24 05:26:44.473565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.473765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.473795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.406 qpair failed and we were unable to recover it. 00:31:07.406 [2024-04-24 05:26:44.473961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.474136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.474165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.406 qpair failed and we were unable to recover it. 
00:31:07.406 [2024-04-24 05:26:44.474333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.474477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.474503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.406 qpair failed and we were unable to recover it. 00:31:07.406 [2024-04-24 05:26:44.474656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.474841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.474868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.406 qpair failed and we were unable to recover it. 00:31:07.406 [2024-04-24 05:26:44.475019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.475168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.475211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.406 qpair failed and we were unable to recover it. 00:31:07.406 [2024-04-24 05:26:44.475360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.475548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.475576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.406 qpair failed and we were unable to recover it. 
00:31:07.406 [2024-04-24 05:26:44.475752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.475875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.475900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.406 qpair failed and we were unable to recover it. 00:31:07.406 [2024-04-24 05:26:44.476099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.476286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.476315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.406 qpair failed and we were unable to recover it. 00:31:07.406 [2024-04-24 05:26:44.476504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.476640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.476668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.406 qpair failed and we were unable to recover it. 00:31:07.406 [2024-04-24 05:26:44.476830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.477100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.477151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.406 qpair failed and we were unable to recover it. 
00:31:07.406 [2024-04-24 05:26:44.477346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.477536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.477561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.406 qpair failed and we were unable to recover it. 00:31:07.406 [2024-04-24 05:26:44.477728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.477895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.477924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.406 qpair failed and we were unable to recover it. 00:31:07.406 [2024-04-24 05:26:44.478078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.478255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.478281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.406 qpair failed and we were unable to recover it. 00:31:07.406 [2024-04-24 05:26:44.478456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.478661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.478688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.406 qpair failed and we were unable to recover it. 
00:31:07.406 [2024-04-24 05:26:44.478838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.478979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.479008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.406 qpair failed and we were unable to recover it. 00:31:07.406 [2024-04-24 05:26:44.479169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.479416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.479477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.406 qpair failed and we were unable to recover it. 00:31:07.406 [2024-04-24 05:26:44.479650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.479802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.479845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.406 qpair failed and we were unable to recover it. 00:31:07.406 [2024-04-24 05:26:44.480036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.480282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.480335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.406 qpair failed and we were unable to recover it. 
00:31:07.406 [2024-04-24 05:26:44.480525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.480716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.480747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.406 qpair failed and we were unable to recover it. 00:31:07.406 [2024-04-24 05:26:44.480939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.481185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.481239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.406 qpair failed and we were unable to recover it. 00:31:07.406 [2024-04-24 05:26:44.481439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.481607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.406 [2024-04-24 05:26:44.481645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.406 qpair failed and we were unable to recover it. 00:31:07.407 [2024-04-24 05:26:44.481784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.407 [2024-04-24 05:26:44.481923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.407 [2024-04-24 05:26:44.481952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.407 qpair failed and we were unable to recover it. 
00:31:07.407 [2024-04-24 05:26:44.482150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.407 [2024-04-24 05:26:44.482300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.407 [2024-04-24 05:26:44.482326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.407 qpair failed and we were unable to recover it. 00:31:07.407 [2024-04-24 05:26:44.482444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.407 [2024-04-24 05:26:44.482594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.407 [2024-04-24 05:26:44.482622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.407 qpair failed and we were unable to recover it. 00:31:07.407 [2024-04-24 05:26:44.482784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.407 [2024-04-24 05:26:44.482936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.407 [2024-04-24 05:26:44.482978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.407 qpair failed and we were unable to recover it. 00:31:07.407 [2024-04-24 05:26:44.483122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.407 [2024-04-24 05:26:44.483324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.407 [2024-04-24 05:26:44.483350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.407 qpair failed and we were unable to recover it. 
00:31:07.407 [2024-04-24 05:26:44.483493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.407 [2024-04-24 05:26:44.483693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.407 [2024-04-24 05:26:44.483719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.407 qpair failed and we were unable to recover it. 00:31:07.407 [2024-04-24 05:26:44.483878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.407 [2024-04-24 05:26:44.484049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.407 [2024-04-24 05:26:44.484078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.407 qpair failed and we were unable to recover it. 00:31:07.407 [2024-04-24 05:26:44.484252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.407 [2024-04-24 05:26:44.484432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.407 [2024-04-24 05:26:44.484475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.407 qpair failed and we were unable to recover it. 00:31:07.407 [2024-04-24 05:26:44.484616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.407 [2024-04-24 05:26:44.484804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.407 [2024-04-24 05:26:44.484830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.407 qpair failed and we were unable to recover it. 
00:31:07.407 [2024-04-24 05:26:44.485030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.407 [2024-04-24 05:26:44.485323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.407 [2024-04-24 05:26:44.485384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.407 qpair failed and we were unable to recover it. 00:31:07.407 [2024-04-24 05:26:44.485549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.407 [2024-04-24 05:26:44.485710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.407 [2024-04-24 05:26:44.485740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.407 qpair failed and we were unable to recover it. 00:31:07.407 [2024-04-24 05:26:44.485898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.407 [2024-04-24 05:26:44.486046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.407 [2024-04-24 05:26:44.486072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.407 qpair failed and we were unable to recover it. 00:31:07.407 [2024-04-24 05:26:44.486288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.407 [2024-04-24 05:26:44.486439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.407 [2024-04-24 05:26:44.486465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.407 qpair failed and we were unable to recover it. 
00:31:07.407 [2024-04-24 05:26:44.486606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.407 [2024-04-24 05:26:44.486812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.407 [2024-04-24 05:26:44.486841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.407 qpair failed and we were unable to recover it. 00:31:07.407 [2024-04-24 05:26:44.487027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.407 [2024-04-24 05:26:44.487336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.407 [2024-04-24 05:26:44.487385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.407 qpair failed and we were unable to recover it. 00:31:07.407 [2024-04-24 05:26:44.487582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.407 [2024-04-24 05:26:44.487781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.407 [2024-04-24 05:26:44.487811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.407 qpair failed and we were unable to recover it. 00:31:07.407 [2024-04-24 05:26:44.487974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.407 [2024-04-24 05:26:44.488167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.407 [2024-04-24 05:26:44.488196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.407 qpair failed and we were unable to recover it. 
00:31:07.407 [2024-04-24 05:26:44.488339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.407 [2024-04-24 05:26:44.488504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.407 [2024-04-24 05:26:44.488533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.407 qpair failed and we were unable to recover it. 00:31:07.407 [2024-04-24 05:26:44.488733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.407 [2024-04-24 05:26:44.488911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.407 [2024-04-24 05:26:44.488937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.407 qpair failed and we were unable to recover it. 00:31:07.407 [2024-04-24 05:26:44.489090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.407 [2024-04-24 05:26:44.489257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.407 [2024-04-24 05:26:44.489287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.407 qpair failed and we were unable to recover it. 00:31:07.407 [2024-04-24 05:26:44.489426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.407 [2024-04-24 05:26:44.489592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.407 [2024-04-24 05:26:44.489620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.407 qpair failed and we were unable to recover it. 
00:31:07.410 [2024-04-24 05:26:44.521656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.410 [2024-04-24 05:26:44.521797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.410 [2024-04-24 05:26:44.521827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.410 qpair failed and we were unable to recover it. 00:31:07.410 [2024-04-24 05:26:44.521993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.410 [2024-04-24 05:26:44.522273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.410 [2024-04-24 05:26:44.522331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.410 qpair failed and we were unable to recover it. 00:31:07.410 [2024-04-24 05:26:44.522529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.410 [2024-04-24 05:26:44.522660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.410 [2024-04-24 05:26:44.522687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.410 qpair failed and we were unable to recover it. 00:31:07.410 [2024-04-24 05:26:44.522819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.410 [2024-04-24 05:26:44.522994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.410 [2024-04-24 05:26:44.523023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.410 qpair failed and we were unable to recover it. 
00:31:07.410 [2024-04-24 05:26:44.523217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.410 [2024-04-24 05:26:44.523355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.410 [2024-04-24 05:26:44.523384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.410 qpair failed and we were unable to recover it. 00:31:07.410 [2024-04-24 05:26:44.523543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.523706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.523736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.411 qpair failed and we were unable to recover it. 00:31:07.411 [2024-04-24 05:26:44.523887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.524067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.524110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.411 qpair failed and we were unable to recover it. 00:31:07.411 [2024-04-24 05:26:44.524277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.524444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.524472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.411 qpair failed and we were unable to recover it. 
00:31:07.411 [2024-04-24 05:26:44.524665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.524813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.524840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.411 qpair failed and we were unable to recover it. 00:31:07.411 [2024-04-24 05:26:44.525044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.525211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.525245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.411 qpair failed and we were unable to recover it. 00:31:07.411 [2024-04-24 05:26:44.525394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.525633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.525663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.411 qpair failed and we were unable to recover it. 00:31:07.411 [2024-04-24 05:26:44.525856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.526072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.526124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.411 qpair failed and we were unable to recover it. 
00:31:07.411 [2024-04-24 05:26:44.526300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.526463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.526492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.411 qpair failed and we were unable to recover it. 00:31:07.411 [2024-04-24 05:26:44.526657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.526818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.526884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.411 qpair failed and we were unable to recover it. 00:31:07.411 [2024-04-24 05:26:44.527023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.527140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.527166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.411 qpair failed and we were unable to recover it. 00:31:07.411 [2024-04-24 05:26:44.527328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.527524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.527553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.411 qpair failed and we were unable to recover it. 
00:31:07.411 [2024-04-24 05:26:44.527744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.527899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.527928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.411 qpair failed and we were unable to recover it. 00:31:07.411 [2024-04-24 05:26:44.528086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.528242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.528271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.411 qpair failed and we were unable to recover it. 00:31:07.411 [2024-04-24 05:26:44.528432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.528726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.528757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.411 qpair failed and we were unable to recover it. 00:31:07.411 [2024-04-24 05:26:44.528928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.529120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.529151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.411 qpair failed and we were unable to recover it. 
00:31:07.411 [2024-04-24 05:26:44.529348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.529512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.529540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.411 qpair failed and we were unable to recover it. 00:31:07.411 [2024-04-24 05:26:44.529709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.529978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.530030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.411 qpair failed and we were unable to recover it. 00:31:07.411 [2024-04-24 05:26:44.530174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.530361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.530401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.411 qpair failed and we were unable to recover it. 00:31:07.411 [2024-04-24 05:26:44.530540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.530698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.530725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.411 qpair failed and we were unable to recover it. 
00:31:07.411 [2024-04-24 05:26:44.530899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.531105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.531163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.411 qpair failed and we were unable to recover it. 00:31:07.411 [2024-04-24 05:26:44.531364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.531519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.531545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.411 qpair failed and we were unable to recover it. 00:31:07.411 [2024-04-24 05:26:44.531695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.531822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.531848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.411 qpair failed and we were unable to recover it. 00:31:07.411 [2024-04-24 05:26:44.532026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.532154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.532180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.411 qpair failed and we were unable to recover it. 
00:31:07.411 [2024-04-24 05:26:44.532305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.532454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.532480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.411 qpair failed and we were unable to recover it. 00:31:07.411 [2024-04-24 05:26:44.532681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.532873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.532902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.411 qpair failed and we were unable to recover it. 00:31:07.411 [2024-04-24 05:26:44.533057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.533182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.533208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.411 qpair failed and we were unable to recover it. 00:31:07.411 [2024-04-24 05:26:44.533389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.533626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.533659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.411 qpair failed and we were unable to recover it. 
00:31:07.411 [2024-04-24 05:26:44.533813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.533985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.534014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.411 qpair failed and we were unable to recover it. 00:31:07.411 [2024-04-24 05:26:44.534180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.534336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.411 [2024-04-24 05:26:44.534362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.411 qpair failed and we were unable to recover it. 00:31:07.412 [2024-04-24 05:26:44.534536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.412 [2024-04-24 05:26:44.534796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.412 [2024-04-24 05:26:44.534823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.412 qpair failed and we were unable to recover it. 00:31:07.412 [2024-04-24 05:26:44.535023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.412 [2024-04-24 05:26:44.535221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.412 [2024-04-24 05:26:44.535271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.412 qpair failed and we were unable to recover it. 
00:31:07.412 [2024-04-24 05:26:44.535436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.412 [2024-04-24 05:26:44.535562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.412 [2024-04-24 05:26:44.535588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.412 qpair failed and we were unable to recover it. 00:31:07.412 [2024-04-24 05:26:44.535775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.412 [2024-04-24 05:26:44.535968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.412 [2024-04-24 05:26:44.536015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.412 qpair failed and we were unable to recover it. 00:31:07.412 [2024-04-24 05:26:44.536185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.412 [2024-04-24 05:26:44.536333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.412 [2024-04-24 05:26:44.536360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.412 qpair failed and we were unable to recover it. 00:31:07.412 [2024-04-24 05:26:44.536508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.412 [2024-04-24 05:26:44.536689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.412 [2024-04-24 05:26:44.536718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.412 qpair failed and we were unable to recover it. 
00:31:07.412 [2024-04-24 05:26:44.536900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.412 [2024-04-24 05:26:44.537054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.412 [2024-04-24 05:26:44.537080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.412 qpair failed and we were unable to recover it. 00:31:07.412 [2024-04-24 05:26:44.537206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.412 [2024-04-24 05:26:44.537333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.412 [2024-04-24 05:26:44.537359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.412 qpair failed and we were unable to recover it. 00:31:07.412 [2024-04-24 05:26:44.537481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.412 [2024-04-24 05:26:44.537637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.412 [2024-04-24 05:26:44.537664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.412 qpair failed and we were unable to recover it. 00:31:07.412 [2024-04-24 05:26:44.537806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.412 [2024-04-24 05:26:44.537997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.412 [2024-04-24 05:26:44.538023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.412 qpair failed and we were unable to recover it. 
00:31:07.412 [2024-04-24 05:26:44.538191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.412 [2024-04-24 05:26:44.538371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.412 [2024-04-24 05:26:44.538399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.412 qpair failed and we were unable to recover it. 00:31:07.412 [2024-04-24 05:26:44.538576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.412 [2024-04-24 05:26:44.538752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.412 [2024-04-24 05:26:44.538778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.412 qpair failed and we were unable to recover it. 00:31:07.412 [2024-04-24 05:26:44.538931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.412 [2024-04-24 05:26:44.539076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.412 [2024-04-24 05:26:44.539119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.412 qpair failed and we were unable to recover it. 00:31:07.412 [2024-04-24 05:26:44.539294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.412 [2024-04-24 05:26:44.539481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.412 [2024-04-24 05:26:44.539509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.412 qpair failed and we were unable to recover it. 
00:31:07.412 [2024-04-24 05:26:44.539657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.412 [2024-04-24 05:26:44.539807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.412 [2024-04-24 05:26:44.539834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.412 qpair failed and we were unable to recover it. 00:31:07.412 [2024-04-24 05:26:44.539982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.412 [2024-04-24 05:26:44.540160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.412 [2024-04-24 05:26:44.540186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.412 qpair failed and we were unable to recover it. 00:31:07.412 [2024-04-24 05:26:44.540345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.412 [2024-04-24 05:26:44.540460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.412 [2024-04-24 05:26:44.540486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.412 qpair failed and we were unable to recover it. 00:31:07.412 [2024-04-24 05:26:44.540663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.412 [2024-04-24 05:26:44.540804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.412 [2024-04-24 05:26:44.540833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.412 qpair failed and we were unable to recover it. 
00:31:07.412 [2024-04-24 05:26:44.541002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.412 [2024-04-24 05:26:44.541140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.412 [2024-04-24 05:26:44.541168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.412 qpair failed and we were unable to recover it. 00:31:07.412 [2024-04-24 05:26:44.541315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.412 [2024-04-24 05:26:44.541500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.412 [2024-04-24 05:26:44.541525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.412 qpair failed and we were unable to recover it. 00:31:07.412 [2024-04-24 05:26:44.541641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.412 [2024-04-24 05:26:44.541771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.412 [2024-04-24 05:26:44.541797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.412 qpair failed and we were unable to recover it. 00:31:07.412 [2024-04-24 05:26:44.541923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.412 [2024-04-24 05:26:44.542084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.412 [2024-04-24 05:26:44.542113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.412 qpair failed and we were unable to recover it. 
00:31:07.412 [2024-04-24 05:26:44.542276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.412 [2024-04-24 05:26:44.542411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.412 [2024-04-24 05:26:44.542441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.412 qpair failed and we were unable to recover it.
00:31:07.412 [2024-04-24 05:26:44.542653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.412 [2024-04-24 05:26:44.542782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.412 [2024-04-24 05:26:44.542808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.412 qpair failed and we were unable to recover it.
00:31:07.412 [2024-04-24 05:26:44.542962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.543104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.543130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.413 qpair failed and we were unable to recover it.
00:31:07.413 [2024-04-24 05:26:44.543261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.543415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.543441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.413 qpair failed and we were unable to recover it.
00:31:07.413 [2024-04-24 05:26:44.543572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.543712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.543743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.413 qpair failed and we were unable to recover it.
00:31:07.413 [2024-04-24 05:26:44.543944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.544085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.544110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.413 qpair failed and we were unable to recover it.
00:31:07.413 [2024-04-24 05:26:44.544256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.544378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.544422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.413 qpair failed and we were unable to recover it.
00:31:07.413 [2024-04-24 05:26:44.544593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.544745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.544771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.413 qpair failed and we were unable to recover it.
00:31:07.413 [2024-04-24 05:26:44.544958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.545165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.545191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.413 qpair failed and we were unable to recover it.
00:31:07.413 [2024-04-24 05:26:44.545344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.545472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.545499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.413 qpair failed and we were unable to recover it.
00:31:07.413 [2024-04-24 05:26:44.545626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.545765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.545792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.413 qpair failed and we were unable to recover it.
00:31:07.413 [2024-04-24 05:26:44.545942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.546143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.546193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.413 qpair failed and we were unable to recover it.
00:31:07.413 [2024-04-24 05:26:44.546353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.546492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.546520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.413 qpair failed and we were unable to recover it.
00:31:07.413 [2024-04-24 05:26:44.546696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.546870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.546897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.413 qpair failed and we were unable to recover it.
00:31:07.413 [2024-04-24 05:26:44.547052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.547202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.547228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.413 qpair failed and we were unable to recover it.
00:31:07.413 [2024-04-24 05:26:44.547379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.547542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.547567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.413 qpair failed and we were unable to recover it.
00:31:07.413 [2024-04-24 05:26:44.547696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.547843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.547869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.413 qpair failed and we were unable to recover it.
00:31:07.413 [2024-04-24 05:26:44.548022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.548174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.548202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.413 qpair failed and we were unable to recover it.
00:31:07.413 [2024-04-24 05:26:44.548342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.548531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.548560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.413 qpair failed and we were unable to recover it.
00:31:07.413 [2024-04-24 05:26:44.548729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.548897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.548945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.413 qpair failed and we were unable to recover it.
00:31:07.413 [2024-04-24 05:26:44.549074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.549197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.549224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.413 qpair failed and we were unable to recover it.
00:31:07.413 [2024-04-24 05:26:44.549368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.549544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.549572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.413 qpair failed and we were unable to recover it.
00:31:07.413 [2024-04-24 05:26:44.549726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.549887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.549914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.413 qpair failed and we were unable to recover it.
00:31:07.413 [2024-04-24 05:26:44.550088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.550217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.550243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.413 qpair failed and we were unable to recover it.
00:31:07.413 [2024-04-24 05:26:44.550370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.550544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.550573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.413 qpair failed and we were unable to recover it.
00:31:07.413 [2024-04-24 05:26:44.550726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.550885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.550910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.413 qpair failed and we were unable to recover it.
00:31:07.413 [2024-04-24 05:26:44.551125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.551300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.551325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.413 qpair failed and we were unable to recover it.
00:31:07.413 [2024-04-24 05:26:44.551474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.551648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.551693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.413 qpair failed and we were unable to recover it.
00:31:07.413 [2024-04-24 05:26:44.551849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.551990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.552020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.413 qpair failed and we were unable to recover it.
00:31:07.413 [2024-04-24 05:26:44.552188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.552345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.552370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.413 qpair failed and we were unable to recover it.
00:31:07.413 [2024-04-24 05:26:44.552523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.552665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.413 [2024-04-24 05:26:44.552708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.413 qpair failed and we were unable to recover it.
00:31:07.414 [2024-04-24 05:26:44.552886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.553031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.553057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.414 qpair failed and we were unable to recover it.
00:31:07.414 [2024-04-24 05:26:44.553208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.553356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.553382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.414 qpair failed and we were unable to recover it.
00:31:07.414 [2024-04-24 05:26:44.553510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.553658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.553685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.414 qpair failed and we were unable to recover it.
00:31:07.414 [2024-04-24 05:26:44.553842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.553991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.554016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.414 qpair failed and we were unable to recover it.
00:31:07.414 [2024-04-24 05:26:44.554134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.554284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.554310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.414 qpair failed and we were unable to recover it.
00:31:07.414 [2024-04-24 05:26:44.554481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.554638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.554664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.414 qpair failed and we were unable to recover it.
00:31:07.414 [2024-04-24 05:26:44.554812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.554936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.554962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.414 qpair failed and we were unable to recover it.
00:31:07.414 [2024-04-24 05:26:44.555079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.555207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.555234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.414 qpair failed and we were unable to recover it.
00:31:07.414 [2024-04-24 05:26:44.555366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.555514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.555540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.414 qpair failed and we were unable to recover it.
00:31:07.414 [2024-04-24 05:26:44.555712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.555893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.555922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.414 qpair failed and we were unable to recover it.
00:31:07.414 [2024-04-24 05:26:44.556053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.556235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.556260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.414 qpair failed and we were unable to recover it.
00:31:07.414 [2024-04-24 05:26:44.556381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.556533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.556560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.414 qpair failed and we were unable to recover it.
00:31:07.414 [2024-04-24 05:26:44.556725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.556892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.556921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.414 qpair failed and we were unable to recover it.
00:31:07.414 [2024-04-24 05:26:44.557117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.557286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.557314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.414 qpair failed and we were unable to recover it.
00:31:07.414 [2024-04-24 05:26:44.557500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.557677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.557703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.414 qpair failed and we were unable to recover it.
00:31:07.414 [2024-04-24 05:26:44.557857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.558007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.558033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.414 qpair failed and we were unable to recover it.
00:31:07.414 [2024-04-24 05:26:44.558164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.558340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.558368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.414 qpair failed and we were unable to recover it.
00:31:07.414 [2024-04-24 05:26:44.558558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.558711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.558736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.414 qpair failed and we were unable to recover it.
00:31:07.414 [2024-04-24 05:26:44.558909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.559067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.559096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.414 qpair failed and we were unable to recover it.
00:31:07.414 [2024-04-24 05:26:44.559241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.559358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.559384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.414 qpair failed and we were unable to recover it.
00:31:07.414 [2024-04-24 05:26:44.559533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.559679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.559706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.414 qpair failed and we were unable to recover it.
00:31:07.414 [2024-04-24 05:26:44.559852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.560005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.560048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.414 qpair failed and we were unable to recover it.
00:31:07.414 [2024-04-24 05:26:44.560192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.560359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.560387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.414 qpair failed and we were unable to recover it.
00:31:07.414 [2024-04-24 05:26:44.560563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.560754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.560780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.414 qpair failed and we were unable to recover it.
00:31:07.414 [2024-04-24 05:26:44.560905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.561057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.561084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.414 qpair failed and we were unable to recover it.
00:31:07.414 [2024-04-24 05:26:44.561206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.561334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.561361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.414 qpair failed and we were unable to recover it.
00:31:07.414 [2024-04-24 05:26:44.561555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.561705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.561736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.414 qpair failed and we were unable to recover it.
00:31:07.414 [2024-04-24 05:26:44.561912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.562069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.562095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.414 qpair failed and we were unable to recover it.
00:31:07.414 [2024-04-24 05:26:44.562221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.414 [2024-04-24 05:26:44.562346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.562373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.415 qpair failed and we were unable to recover it.
00:31:07.415 [2024-04-24 05:26:44.562532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.562660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.562687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.415 qpair failed and we were unable to recover it.
00:31:07.415 [2024-04-24 05:26:44.562810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.562962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.562989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.415 qpair failed and we were unable to recover it.
00:31:07.415 [2024-04-24 05:26:44.563147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.563295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.563333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.415 qpair failed and we were unable to recover it.
00:31:07.415 [2024-04-24 05:26:44.563485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.563690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.563752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.415 qpair failed and we were unable to recover it.
00:31:07.415 [2024-04-24 05:26:44.563948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.564142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.564184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.415 qpair failed and we were unable to recover it.
00:31:07.415 [2024-04-24 05:26:44.564413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.564638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.564676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.415 qpair failed and we were unable to recover it.
00:31:07.415 [2024-04-24 05:26:44.564861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.565034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.565063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.415 qpair failed and we were unable to recover it.
00:31:07.415 [2024-04-24 05:26:44.565181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.565370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.565397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.415 qpair failed and we were unable to recover it.
00:31:07.415 [2024-04-24 05:26:44.565590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.565776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.565803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.415 qpair failed and we were unable to recover it.
00:31:07.415 [2024-04-24 05:26:44.565975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.566144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.566172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.415 qpair failed and we were unable to recover it.
00:31:07.415 [2024-04-24 05:26:44.566311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.566466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.566503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.415 qpair failed and we were unable to recover it.
00:31:07.415 [2024-04-24 05:26:44.566685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.566863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.566900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.415 qpair failed and we were unable to recover it.
00:31:07.415 [2024-04-24 05:26:44.567122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.567338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.567380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.415 qpair failed and we were unable to recover it.
00:31:07.415 [2024-04-24 05:26:44.567588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.567817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.567860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.415 qpair failed and we were unable to recover it.
00:31:07.415 [2024-04-24 05:26:44.568080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.568342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.568401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.415 qpair failed and we were unable to recover it.
00:31:07.415 [2024-04-24 05:26:44.568642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.568811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.568839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.415 qpair failed and we were unable to recover it.
00:31:07.415 [2024-04-24 05:26:44.568962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.569106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.569132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.415 qpair failed and we were unable to recover it.
00:31:07.415 [2024-04-24 05:26:44.569287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.569467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.569498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.415 qpair failed and we were unable to recover it.
00:31:07.415 [2024-04-24 05:26:44.569668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.569842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.569871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.415 qpair failed and we were unable to recover it.
00:31:07.415 [2024-04-24 05:26:44.570046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.570212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.570241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.415 qpair failed and we were unable to recover it.
00:31:07.415 [2024-04-24 05:26:44.570374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.570539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.570577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.415 qpair failed and we were unable to recover it.
00:31:07.415 [2024-04-24 05:26:44.570762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.570934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.570971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.415 qpair failed and we were unable to recover it.
00:31:07.415 [2024-04-24 05:26:44.571147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.571364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.571422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.415 qpair failed and we were unable to recover it.
00:31:07.415 [2024-04-24 05:26:44.571615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.571823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.571862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.415 qpair failed and we were unable to recover it.
00:31:07.415 [2024-04-24 05:26:44.572068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.572371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.572427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.415 qpair failed and we were unable to recover it.
00:31:07.415 [2024-04-24 05:26:44.572593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.572832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.572870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.415 qpair failed and we were unable to recover it.
00:31:07.415 [2024-04-24 05:26:44.573023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.573194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.573232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.415 qpair failed and we were unable to recover it.
00:31:07.415 [2024-04-24 05:26:44.573416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.573594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.415 [2024-04-24 05:26:44.573620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.415 qpair failed and we were unable to recover it.
00:31:07.415 [2024-04-24 05:26:44.573802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.415 [2024-04-24 05:26:44.573926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.573952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.416 qpair failed and we were unable to recover it. 00:31:07.416 [2024-04-24 05:26:44.574075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.574269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.574298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.416 qpair failed and we were unable to recover it. 00:31:07.416 [2024-04-24 05:26:44.574462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.574614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.574671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.416 qpair failed and we were unable to recover it. 00:31:07.416 [2024-04-24 05:26:44.574811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.575010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.575039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.416 qpair failed and we were unable to recover it. 
00:31:07.416 [2024-04-24 05:26:44.575175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.575355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.575381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.416 qpair failed and we were unable to recover it. 00:31:07.416 [2024-04-24 05:26:44.575530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.575674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.575701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.416 qpair failed and we were unable to recover it. 00:31:07.416 [2024-04-24 05:26:44.575854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.575989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.576031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.416 qpair failed and we were unable to recover it. 00:31:07.416 [2024-04-24 05:26:44.576207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.576354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.576392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.416 qpair failed and we were unable to recover it. 
00:31:07.416 [2024-04-24 05:26:44.576541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.576737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.576779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.416 qpair failed and we were unable to recover it. 00:31:07.416 [2024-04-24 05:26:44.576991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.577200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.577231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.416 qpair failed and we were unable to recover it. 00:31:07.416 [2024-04-24 05:26:44.577385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.577510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.577535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.416 qpair failed and we were unable to recover it. 00:31:07.416 [2024-04-24 05:26:44.577714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.577895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.577924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.416 qpair failed and we were unable to recover it. 
00:31:07.416 [2024-04-24 05:26:44.578076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.578218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.578244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.416 qpair failed and we were unable to recover it. 00:31:07.416 [2024-04-24 05:26:44.578396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.578541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.578566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.416 qpair failed and we were unable to recover it. 00:31:07.416 [2024-04-24 05:26:44.578767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.578962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.578990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.416 qpair failed and we were unable to recover it. 00:31:07.416 [2024-04-24 05:26:44.579127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.579284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.579311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.416 qpair failed and we were unable to recover it. 
00:31:07.416 [2024-04-24 05:26:44.579471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.579681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.579713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.416 qpair failed and we were unable to recover it. 00:31:07.416 [2024-04-24 05:26:44.579833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.579980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.580006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.416 qpair failed and we were unable to recover it. 00:31:07.416 [2024-04-24 05:26:44.580160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.580334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.580359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.416 qpair failed and we were unable to recover it. 00:31:07.416 [2024-04-24 05:26:44.580505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.580658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.580685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.416 qpair failed and we were unable to recover it. 
00:31:07.416 [2024-04-24 05:26:44.580846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.580984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.581012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.416 qpair failed and we were unable to recover it. 00:31:07.416 [2024-04-24 05:26:44.581176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.581335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.581362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.416 qpair failed and we were unable to recover it. 00:31:07.416 [2024-04-24 05:26:44.581499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.581645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.581671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.416 qpair failed and we were unable to recover it. 00:31:07.416 [2024-04-24 05:26:44.581791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.581939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.581965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.416 qpair failed and we were unable to recover it. 
00:31:07.416 [2024-04-24 05:26:44.582089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.582206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.582231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.416 qpair failed and we were unable to recover it. 00:31:07.416 [2024-04-24 05:26:44.582406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.582569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.582597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.416 qpair failed and we were unable to recover it. 00:31:07.416 [2024-04-24 05:26:44.582765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.582917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.582958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.416 qpair failed and we were unable to recover it. 00:31:07.416 [2024-04-24 05:26:44.583126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.583263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.583291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.416 qpair failed and we were unable to recover it. 
00:31:07.416 [2024-04-24 05:26:44.583482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.583610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.583647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.416 qpair failed and we were unable to recover it. 00:31:07.416 [2024-04-24 05:26:44.583814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.583967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.416 [2024-04-24 05:26:44.583995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.417 qpair failed and we were unable to recover it. 00:31:07.417 [2024-04-24 05:26:44.584217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.584338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.584364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.417 qpair failed and we were unable to recover it. 00:31:07.417 [2024-04-24 05:26:44.584544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.584721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.584749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.417 qpair failed and we were unable to recover it. 
00:31:07.417 [2024-04-24 05:26:44.584893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.585032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.585061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.417 qpair failed and we were unable to recover it. 00:31:07.417 [2024-04-24 05:26:44.585231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.585390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.585418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.417 qpair failed and we were unable to recover it. 00:31:07.417 [2024-04-24 05:26:44.585598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.585756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.585783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.417 qpair failed and we were unable to recover it. 00:31:07.417 [2024-04-24 05:26:44.585936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.586094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.586119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.417 qpair failed and we were unable to recover it. 
00:31:07.417 [2024-04-24 05:26:44.586269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.586439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.586467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.417 qpair failed and we were unable to recover it. 00:31:07.417 [2024-04-24 05:26:44.586692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.586820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.586845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.417 qpair failed and we were unable to recover it. 00:31:07.417 [2024-04-24 05:26:44.586997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.587157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.587182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.417 qpair failed and we were unable to recover it. 00:31:07.417 [2024-04-24 05:26:44.587329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.587474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.587500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.417 qpair failed and we were unable to recover it. 
00:31:07.417 [2024-04-24 05:26:44.587654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.587775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.587803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.417 qpair failed and we were unable to recover it. 00:31:07.417 [2024-04-24 05:26:44.587954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.588089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.588117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.417 qpair failed and we were unable to recover it. 00:31:07.417 [2024-04-24 05:26:44.588280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.588430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.588473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.417 qpair failed and we were unable to recover it. 00:31:07.417 [2024-04-24 05:26:44.588604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.588765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.588796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.417 qpair failed and we were unable to recover it. 
00:31:07.417 [2024-04-24 05:26:44.588987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.589156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.589207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.417 qpair failed and we were unable to recover it. 00:31:07.417 [2024-04-24 05:26:44.589400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.589599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.589624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.417 qpair failed and we were unable to recover it. 00:31:07.417 [2024-04-24 05:26:44.589764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.589888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.589913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.417 qpair failed and we were unable to recover it. 00:31:07.417 [2024-04-24 05:26:44.590035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.590183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.590208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.417 qpair failed and we were unable to recover it. 
00:31:07.417 [2024-04-24 05:26:44.590439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.590623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.590660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.417 qpair failed and we were unable to recover it. 00:31:07.417 [2024-04-24 05:26:44.590823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.590966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.590992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.417 qpair failed and we were unable to recover it. 00:31:07.417 [2024-04-24 05:26:44.591141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.591260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.591285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.417 qpair failed and we were unable to recover it. 00:31:07.417 [2024-04-24 05:26:44.591499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.591669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.591695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.417 qpair failed and we were unable to recover it. 
00:31:07.417 [2024-04-24 05:26:44.591847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.592044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.592100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.417 qpair failed and we were unable to recover it. 00:31:07.417 [2024-04-24 05:26:44.592264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.592429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.592457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.417 qpair failed and we were unable to recover it. 00:31:07.417 [2024-04-24 05:26:44.592698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.417 [2024-04-24 05:26:44.592849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.592875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.418 qpair failed and we were unable to recover it. 00:31:07.418 [2024-04-24 05:26:44.593052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.593199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.593227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.418 qpair failed and we were unable to recover it. 
00:31:07.418 [2024-04-24 05:26:44.593392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.593536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.593563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.418 qpair failed and we were unable to recover it. 00:31:07.418 [2024-04-24 05:26:44.593696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.593853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.593879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.418 qpair failed and we were unable to recover it. 00:31:07.418 [2024-04-24 05:26:44.594026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.594177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.594220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.418 qpair failed and we were unable to recover it. 00:31:07.418 [2024-04-24 05:26:44.594384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.594575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.594603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.418 qpair failed and we were unable to recover it. 
00:31:07.418 [2024-04-24 05:26:44.594781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.594905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.594946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.418 qpair failed and we were unable to recover it. 00:31:07.418 [2024-04-24 05:26:44.595112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.595287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.595313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.418 qpair failed and we were unable to recover it. 00:31:07.418 [2024-04-24 05:26:44.595460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.595640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.595666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.418 qpair failed and we were unable to recover it. 00:31:07.418 [2024-04-24 05:26:44.595793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.595942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.595968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.418 qpair failed and we were unable to recover it. 
00:31:07.418 [2024-04-24 05:26:44.596165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.596363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.596415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.418 qpair failed and we were unable to recover it. 00:31:07.418 [2024-04-24 05:26:44.596589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.596737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.596767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.418 qpair failed and we were unable to recover it. 00:31:07.418 [2024-04-24 05:26:44.596928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.597086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.597111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.418 qpair failed and we were unable to recover it. 00:31:07.418 [2024-04-24 05:26:44.597236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.597382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.597411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.418 qpair failed and we were unable to recover it. 
00:31:07.418 [2024-04-24 05:26:44.597535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.597691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.597717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.418 qpair failed and we were unable to recover it. 00:31:07.418 [2024-04-24 05:26:44.597849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.598033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.598061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.418 qpair failed and we were unable to recover it. 00:31:07.418 [2024-04-24 05:26:44.598258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.598452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.598480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.418 qpair failed and we were unable to recover it. 00:31:07.418 [2024-04-24 05:26:44.598618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.598793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.598818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.418 qpair failed and we were unable to recover it. 
00:31:07.418 [2024-04-24 05:26:44.598993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.599157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.599186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.418 qpair failed and we were unable to recover it. 00:31:07.418 [2024-04-24 05:26:44.599370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.599519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.599544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.418 qpair failed and we were unable to recover it. 00:31:07.418 [2024-04-24 05:26:44.599677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.599803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.599829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.418 qpair failed and we were unable to recover it. 00:31:07.418 [2024-04-24 05:26:44.599970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.600129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.600157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.418 qpair failed and we were unable to recover it. 
00:31:07.418 [2024-04-24 05:26:44.600288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.600465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.600490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.418 qpair failed and we were unable to recover it. 00:31:07.418 [2024-04-24 05:26:44.600650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.600797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.600827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.418 qpair failed and we were unable to recover it. 00:31:07.418 [2024-04-24 05:26:44.600978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.601102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.601128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.418 qpair failed and we were unable to recover it. 00:31:07.418 [2024-04-24 05:26:44.601280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.601405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.601430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.418 qpair failed and we were unable to recover it. 
00:31:07.418 [2024-04-24 05:26:44.601572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.601719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.601745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.418 qpair failed and we were unable to recover it. 00:31:07.418 [2024-04-24 05:26:44.601873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.602021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.602047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.418 qpair failed and we were unable to recover it. 00:31:07.418 [2024-04-24 05:26:44.602231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.602380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.602405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.418 qpair failed and we were unable to recover it. 00:31:07.418 [2024-04-24 05:26:44.602552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.602692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.418 [2024-04-24 05:26:44.602718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.419 qpair failed and we were unable to recover it. 
00:31:07.419 [2024-04-24 05:26:44.602841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.602962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.602987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.419 qpair failed and we were unable to recover it. 00:31:07.419 [2024-04-24 05:26:44.603166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.603314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.603339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.419 qpair failed and we were unable to recover it. 00:31:07.419 [2024-04-24 05:26:44.603517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.603718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.603744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.419 qpair failed and we were unable to recover it. 00:31:07.419 [2024-04-24 05:26:44.603866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.604033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.604061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.419 qpair failed and we were unable to recover it. 
00:31:07.419 [2024-04-24 05:26:44.604266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.604380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.604406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.419 qpair failed and we were unable to recover it. 00:31:07.419 [2024-04-24 05:26:44.604557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.604706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.604734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.419 qpair failed and we were unable to recover it. 00:31:07.419 [2024-04-24 05:26:44.604858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.604980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.605008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.419 qpair failed and we were unable to recover it. 00:31:07.419 [2024-04-24 05:26:44.605161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.605315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.605341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.419 qpair failed and we were unable to recover it. 
00:31:07.419 [2024-04-24 05:26:44.605490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.605652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.605682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.419 qpair failed and we were unable to recover it. 00:31:07.419 [2024-04-24 05:26:44.605843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.606015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.606043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.419 qpair failed and we were unable to recover it. 00:31:07.419 [2024-04-24 05:26:44.606210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.606362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.606402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.419 qpair failed and we were unable to recover it. 00:31:07.419 [2024-04-24 05:26:44.606542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.606722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.606748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.419 qpair failed and we were unable to recover it. 
00:31:07.419 [2024-04-24 05:26:44.606867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.607021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.607047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.419 qpair failed and we were unable to recover it. 00:31:07.419 [2024-04-24 05:26:44.607198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.607370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.607397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.419 qpair failed and we were unable to recover it. 00:31:07.419 [2024-04-24 05:26:44.607552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.607678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.607705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.419 qpair failed and we were unable to recover it. 00:31:07.419 [2024-04-24 05:26:44.607851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.608031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.608059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.419 qpair failed and we were unable to recover it. 
00:31:07.419 [2024-04-24 05:26:44.608227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.608383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.608411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.419 qpair failed and we were unable to recover it. 00:31:07.419 [2024-04-24 05:26:44.608562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.608713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.608740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.419 qpair failed and we were unable to recover it. 00:31:07.419 [2024-04-24 05:26:44.608897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.609025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.609067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.419 qpair failed and we were unable to recover it. 00:31:07.419 [2024-04-24 05:26:44.609236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.609400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.609428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.419 qpair failed and we were unable to recover it. 
00:31:07.419 [2024-04-24 05:26:44.609596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.609759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.609788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.419 qpair failed and we were unable to recover it. 00:31:07.419 [2024-04-24 05:26:44.609954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.610133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.610161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.419 qpair failed and we were unable to recover it. 00:31:07.419 [2024-04-24 05:26:44.610330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.610448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.610473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.419 qpair failed and we were unable to recover it. 00:31:07.419 [2024-04-24 05:26:44.610622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.610782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.610809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.419 qpair failed and we were unable to recover it. 
00:31:07.419 [2024-04-24 05:26:44.610957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.611133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.611159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.419 qpair failed and we were unable to recover it. 00:31:07.419 [2024-04-24 05:26:44.611304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.611473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.611499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.419 qpair failed and we were unable to recover it. 00:31:07.419 [2024-04-24 05:26:44.611690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.611846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.611872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.419 qpair failed and we were unable to recover it. 00:31:07.419 [2024-04-24 05:26:44.612066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.612219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.612276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.419 qpair failed and we were unable to recover it. 
00:31:07.419 [2024-04-24 05:26:44.612446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.612582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.419 [2024-04-24 05:26:44.612610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.419 qpair failed and we were unable to recover it. 00:31:07.420 [2024-04-24 05:26:44.612817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.420 [2024-04-24 05:26:44.612960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.420 [2024-04-24 05:26:44.612985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.420 qpair failed and we were unable to recover it. 00:31:07.420 [2024-04-24 05:26:44.613103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.420 [2024-04-24 05:26:44.613246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.420 [2024-04-24 05:26:44.613272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.420 qpair failed and we were unable to recover it. 00:31:07.420 [2024-04-24 05:26:44.613444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.420 [2024-04-24 05:26:44.613577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.420 [2024-04-24 05:26:44.613602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.420 qpair failed and we were unable to recover it. 
00:31:07.420 [2024-04-24 05:26:44.613766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.420 [2024-04-24 05:26:44.613919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.420 [2024-04-24 05:26:44.613945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.420 qpair failed and we were unable to recover it. 00:31:07.420 [2024-04-24 05:26:44.614094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.420 [2024-04-24 05:26:44.614270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.420 [2024-04-24 05:26:44.614313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.420 qpair failed and we were unable to recover it. 00:31:07.420 [2024-04-24 05:26:44.614476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.420 [2024-04-24 05:26:44.614654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.420 [2024-04-24 05:26:44.614702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.420 qpair failed and we were unable to recover it. 00:31:07.420 [2024-04-24 05:26:44.614844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.420 [2024-04-24 05:26:44.614986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.420 [2024-04-24 05:26:44.615014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.420 qpair failed and we were unable to recover it. 
00:31:07.420 [2024-04-24 05:26:44.615179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.420 [2024-04-24 05:26:44.615320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.420 [2024-04-24 05:26:44.615350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.420 qpair failed and we were unable to recover it. 00:31:07.420 [2024-04-24 05:26:44.615525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.420 [2024-04-24 05:26:44.615654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.420 [2024-04-24 05:26:44.615680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.420 qpair failed and we were unable to recover it. 00:31:07.420 [2024-04-24 05:26:44.615829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.420 [2024-04-24 05:26:44.615978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.420 [2024-04-24 05:26:44.616021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.420 qpair failed and we were unable to recover it. 00:31:07.420 [2024-04-24 05:26:44.616233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.420 [2024-04-24 05:26:44.616354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.420 [2024-04-24 05:26:44.616379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.420 qpair failed and we were unable to recover it. 
00:31:07.420 [2024-04-24 05:26:44.616577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.420 [2024-04-24 05:26:44.616751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.420 [2024-04-24 05:26:44.616778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.420 qpair failed and we were unable to recover it. 00:31:07.420 [2024-04-24 05:26:44.616902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.420 [2024-04-24 05:26:44.617052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.420 [2024-04-24 05:26:44.617077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.420 qpair failed and we were unable to recover it. 00:31:07.420 [2024-04-24 05:26:44.617225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.420 [2024-04-24 05:26:44.617345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.420 [2024-04-24 05:26:44.617370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.420 qpair failed and we were unable to recover it. 00:31:07.420 [2024-04-24 05:26:44.617518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.420 [2024-04-24 05:26:44.617647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.420 [2024-04-24 05:26:44.617673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.420 qpair failed and we were unable to recover it. 
00:31:07.420 [2024-04-24 05:26:44.617807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.420 [2024-04-24 05:26:44.617965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.420 [2024-04-24 05:26:44.617993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.420 qpair failed and we were unable to recover it. 00:31:07.420 [2024-04-24 05:26:44.618166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.420 [2024-04-24 05:26:44.618304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.420 [2024-04-24 05:26:44.618332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.420 qpair failed and we were unable to recover it. 00:31:07.420 [2024-04-24 05:26:44.618528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.420 [2024-04-24 05:26:44.618709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.420 [2024-04-24 05:26:44.618738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.420 qpair failed and we were unable to recover it. 00:31:07.420 [2024-04-24 05:26:44.618902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.420 [2024-04-24 05:26:44.619066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.420 [2024-04-24 05:26:44.619095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.420 qpair failed and we were unable to recover it. 
00:31:07.420 [2024-04-24 05:26:44.619248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.420 [2024-04-24 05:26:44.619392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.420 [2024-04-24 05:26:44.619417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.420 qpair failed and we were unable to recover it. 00:31:07.420 [2024-04-24 05:26:44.619574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.420 [2024-04-24 05:26:44.619693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.420 [2024-04-24 05:26:44.619719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.420 qpair failed and we were unable to recover it. 00:31:07.420 [2024-04-24 05:26:44.619849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.420 [2024-04-24 05:26:44.620029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.420 [2024-04-24 05:26:44.620071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.420 qpair failed and we were unable to recover it. 00:31:07.420 [2024-04-24 05:26:44.620262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.420 [2024-04-24 05:26:44.620399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.420 [2024-04-24 05:26:44.620428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.420 qpair failed and we were unable to recover it. 
00:31:07.420 [2024-04-24 05:26:44.620593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.420 [2024-04-24 05:26:44.620764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.420 [2024-04-24 05:26:44.620793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.420 qpair failed and we were unable to recover it.
00:31:07.420 [2024-04-24 05:26:44.620971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.420 [2024-04-24 05:26:44.621091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.420 [2024-04-24 05:26:44.621116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.420 qpair failed and we were unable to recover it.
00:31:07.420 [2024-04-24 05:26:44.621242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.420 [2024-04-24 05:26:44.621386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.420 [2024-04-24 05:26:44.621411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.420 qpair failed and we were unable to recover it.
00:31:07.420 [2024-04-24 05:26:44.621595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.420 [2024-04-24 05:26:44.621766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.420 [2024-04-24 05:26:44.621793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.420 qpair failed and we were unable to recover it.
00:31:07.420 [2024-04-24 05:26:44.621944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.420 [2024-04-24 05:26:44.622081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.420 [2024-04-24 05:26:44.622109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.420 qpair failed and we were unable to recover it.
00:31:07.420 [2024-04-24 05:26:44.622300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.420 [2024-04-24 05:26:44.622435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.420 [2024-04-24 05:26:44.622464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.420 qpair failed and we were unable to recover it.
00:31:07.420 [2024-04-24 05:26:44.622640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.420 [2024-04-24 05:26:44.622789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.420 [2024-04-24 05:26:44.622832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.420 qpair failed and we were unable to recover it.
00:31:07.420 [2024-04-24 05:26:44.622997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.420 [2024-04-24 05:26:44.623152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.420 [2024-04-24 05:26:44.623195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.420 qpair failed and we were unable to recover it.
00:31:07.421 [2024-04-24 05:26:44.623338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.623459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.623484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.421 qpair failed and we were unable to recover it.
00:31:07.421 [2024-04-24 05:26:44.623641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.623769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.623794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.421 qpair failed and we were unable to recover it.
00:31:07.421 [2024-04-24 05:26:44.623947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.624100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.624127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.421 qpair failed and we were unable to recover it.
00:31:07.421 [2024-04-24 05:26:44.624247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.624371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.624396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.421 qpair failed and we were unable to recover it.
00:31:07.421 [2024-04-24 05:26:44.624569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.624746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.624773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.421 qpair failed and we were unable to recover it.
00:31:07.421 [2024-04-24 05:26:44.624939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.625084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.625112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.421 qpair failed and we were unable to recover it.
00:31:07.421 [2024-04-24 05:26:44.625278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.625436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.625462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.421 qpair failed and we were unable to recover it.
00:31:07.421 [2024-04-24 05:26:44.625671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.625844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.625873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.421 qpair failed and we were unable to recover it.
00:31:07.421 [2024-04-24 05:26:44.626060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.626211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.626237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.421 qpair failed and we were unable to recover it.
00:31:07.421 [2024-04-24 05:26:44.626387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.626530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.626558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.421 qpair failed and we were unable to recover it.
00:31:07.421 [2024-04-24 05:26:44.626733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.626869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.626896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.421 qpair failed and we were unable to recover it.
00:31:07.421 [2024-04-24 05:26:44.627051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.627197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.627227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.421 qpair failed and we were unable to recover it.
00:31:07.421 [2024-04-24 05:26:44.627430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.627646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.627676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.421 qpair failed and we were unable to recover it.
00:31:07.421 [2024-04-24 05:26:44.627849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.628004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.628031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.421 qpair failed and we were unable to recover it.
00:31:07.421 [2024-04-24 05:26:44.628179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.628334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.628360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.421 qpair failed and we were unable to recover it.
00:31:07.421 [2024-04-24 05:26:44.628493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.628683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.628710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.421 qpair failed and we were unable to recover it.
00:31:07.421 [2024-04-24 05:26:44.628862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.629007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.629032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.421 qpair failed and we were unable to recover it.
00:31:07.421 [2024-04-24 05:26:44.629189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.629354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.629384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.421 qpair failed and we were unable to recover it.
00:31:07.421 [2024-04-24 05:26:44.629571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.629806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.629833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.421 qpair failed and we were unable to recover it.
00:31:07.421 [2024-04-24 05:26:44.629997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.630160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.630190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.421 qpair failed and we were unable to recover it.
00:31:07.421 [2024-04-24 05:26:44.630383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.630548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.630581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.421 qpair failed and we were unable to recover it.
00:31:07.421 [2024-04-24 05:26:44.630746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.630894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.630920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.421 qpair failed and we were unable to recover it.
00:31:07.421 [2024-04-24 05:26:44.631077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.631199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.631225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.421 qpair failed and we were unable to recover it.
00:31:07.421 [2024-04-24 05:26:44.631402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.631571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.631599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.421 qpair failed and we were unable to recover it.
00:31:07.421 [2024-04-24 05:26:44.631803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.631931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.631957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.421 qpair failed and we were unable to recover it.
00:31:07.421 [2024-04-24 05:26:44.632123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.632293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.632321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.421 qpair failed and we were unable to recover it.
00:31:07.421 [2024-04-24 05:26:44.632479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.632635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.421 [2024-04-24 05:26:44.632661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.421 qpair failed and we were unable to recover it.
00:31:07.421 [2024-04-24 05:26:44.632787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.422 [2024-04-24 05:26:44.632954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.422 [2024-04-24 05:26:44.632983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.422 qpair failed and we were unable to recover it.
00:31:07.422 [2024-04-24 05:26:44.633225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.422 [2024-04-24 05:26:44.633429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.422 [2024-04-24 05:26:44.633456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.422 qpair failed and we were unable to recover it.
00:31:07.422 [2024-04-24 05:26:44.633582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.422 [2024-04-24 05:26:44.633718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.422 [2024-04-24 05:26:44.633746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.422 qpair failed and we were unable to recover it.
00:31:07.422 [2024-04-24 05:26:44.633976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.422 [2024-04-24 05:26:44.634178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.422 [2024-04-24 05:26:44.634228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.422 qpair failed and we were unable to recover it.
00:31:07.422 [2024-04-24 05:26:44.634395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.422 [2024-04-24 05:26:44.634554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.422 [2024-04-24 05:26:44.634587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.422 qpair failed and we were unable to recover it.
00:31:07.422 [2024-04-24 05:26:44.634779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.422 [2024-04-24 05:26:44.634904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.422 [2024-04-24 05:26:44.634930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.422 qpair failed and we were unable to recover it.
00:31:07.706 [2024-04-24 05:26:44.635109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.706 [2024-04-24 05:26:44.635271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.706 [2024-04-24 05:26:44.635300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.706 qpair failed and we were unable to recover it.
00:31:07.706 [2024-04-24 05:26:44.635485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.706 [2024-04-24 05:26:44.635640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.706 [2024-04-24 05:26:44.635666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.706 qpair failed and we were unable to recover it.
00:31:07.706 [2024-04-24 05:26:44.635785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.706 [2024-04-24 05:26:44.635925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.706 [2024-04-24 05:26:44.635952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.706 qpair failed and we were unable to recover it.
00:31:07.706 [2024-04-24 05:26:44.636102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.706 [2024-04-24 05:26:44.636247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.706 [2024-04-24 05:26:44.636287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.706 qpair failed and we were unable to recover it.
00:31:07.706 [2024-04-24 05:26:44.636445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.706 [2024-04-24 05:26:44.636593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.706 [2024-04-24 05:26:44.636619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.706 qpair failed and we were unable to recover it.
00:31:07.706 [2024-04-24 05:26:44.636823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.706 [2024-04-24 05:26:44.637005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.706 [2024-04-24 05:26:44.637033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.706 qpair failed and we were unable to recover it.
00:31:07.706 [2024-04-24 05:26:44.637218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.706 [2024-04-24 05:26:44.637395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.706 [2024-04-24 05:26:44.637421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.706 qpair failed and we were unable to recover it.
00:31:07.706 [2024-04-24 05:26:44.637572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.706 [2024-04-24 05:26:44.637697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.706 [2024-04-24 05:26:44.637724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.706 qpair failed and we were unable to recover it.
00:31:07.706 [2024-04-24 05:26:44.637877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.706 [2024-04-24 05:26:44.638054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.706 [2024-04-24 05:26:44.638096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.706 qpair failed and we were unable to recover it.
00:31:07.706 [2024-04-24 05:26:44.638271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.706 [2024-04-24 05:26:44.638462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.706 [2024-04-24 05:26:44.638492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.706 qpair failed and we were unable to recover it.
00:31:07.706 [2024-04-24 05:26:44.638690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.706 [2024-04-24 05:26:44.638839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.706 [2024-04-24 05:26:44.638865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.706 qpair failed and we were unable to recover it.
00:31:07.706 [2024-04-24 05:26:44.639010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.706 [2024-04-24 05:26:44.639161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.706 [2024-04-24 05:26:44.639187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.706 qpair failed and we were unable to recover it.
00:31:07.706 [2024-04-24 05:26:44.639342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.706 [2024-04-24 05:26:44.639520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.706 [2024-04-24 05:26:44.639549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.706 qpair failed and we were unable to recover it.
00:31:07.706 [2024-04-24 05:26:44.639691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.706 [2024-04-24 05:26:44.639840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.706 [2024-04-24 05:26:44.639865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.706 qpair failed and we were unable to recover it.
00:31:07.706 [2024-04-24 05:26:44.640060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.706 [2024-04-24 05:26:44.640216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.706 [2024-04-24 05:26:44.640241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.706 qpair failed and we were unable to recover it.
00:31:07.706 [2024-04-24 05:26:44.640386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.706 [2024-04-24 05:26:44.640530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.706 [2024-04-24 05:26:44.640558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.706 qpair failed and we were unable to recover it.
00:31:07.706 [2024-04-24 05:26:44.640736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.706 [2024-04-24 05:26:44.640942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.706 [2024-04-24 05:26:44.640968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.706 qpair failed and we were unable to recover it.
00:31:07.706 [2024-04-24 05:26:44.641144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.706 [2024-04-24 05:26:44.641330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.706 [2024-04-24 05:26:44.641383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.706 qpair failed and we were unable to recover it.
00:31:07.706 [2024-04-24 05:26:44.641525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.706 [2024-04-24 05:26:44.641717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.706 [2024-04-24 05:26:44.641746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.706 qpair failed and we were unable to recover it.
00:31:07.706 [2024-04-24 05:26:44.641927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.706 [2024-04-24 05:26:44.642076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.707 [2024-04-24 05:26:44.642101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.707 qpair failed and we were unable to recover it.
00:31:07.707 [2024-04-24 05:26:44.642278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.707 [2024-04-24 05:26:44.642423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.707 [2024-04-24 05:26:44.642448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.707 qpair failed and we were unable to recover it.
00:31:07.707 [2024-04-24 05:26:44.642632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.707 [2024-04-24 05:26:44.642782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.707 [2024-04-24 05:26:44.642807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.707 qpair failed and we were unable to recover it.
00:31:07.707 [2024-04-24 05:26:44.642929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.707 [2024-04-24 05:26:44.643158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.707 [2024-04-24 05:26:44.643211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.707 qpair failed and we were unable to recover it.
00:31:07.707 [2024-04-24 05:26:44.643417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.707 [2024-04-24 05:26:44.643594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.707 [2024-04-24 05:26:44.643619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.707 qpair failed and we were unable to recover it.
00:31:07.707 [2024-04-24 05:26:44.643777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.707 [2024-04-24 05:26:44.644002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.707 [2024-04-24 05:26:44.644051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.707 qpair failed and we were unable to recover it.
00:31:07.707 [2024-04-24 05:26:44.644241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.707 [2024-04-24 05:26:44.644413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.707 [2024-04-24 05:26:44.644464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.707 qpair failed and we were unable to recover it.
00:31:07.707 [2024-04-24 05:26:44.644663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.707 [2024-04-24 05:26:44.644793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.707 [2024-04-24 05:26:44.644823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.707 qpair failed and we were unable to recover it.
00:31:07.707 [2024-04-24 05:26:44.645001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.707 [2024-04-24 05:26:44.645167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.707 [2024-04-24 05:26:44.645194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.707 qpair failed and we were unable to recover it.
00:31:07.707 [2024-04-24 05:26:44.645367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.707 [2024-04-24 05:26:44.645564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.707 [2024-04-24 05:26:44.645593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.707 qpair failed and we were unable to recover it.
00:31:07.707 [2024-04-24 05:26:44.645808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.707 [2024-04-24 05:26:44.645978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.707 [2024-04-24 05:26:44.646008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.707 qpair failed and we were unable to recover it.
00:31:07.707 [2024-04-24 05:26:44.646207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.707 [2024-04-24 05:26:44.646407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.707 [2024-04-24 05:26:44.646433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420
00:31:07.707 qpair failed and we were unable to recover it.
00:31:07.707 [2024-04-24 05:26:44.646600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.707 [2024-04-24 05:26:44.646797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.707 [2024-04-24 05:26:44.646822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.707 qpair failed and we were unable to recover it. 00:31:07.707 [2024-04-24 05:26:44.646954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.707 [2024-04-24 05:26:44.647142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.707 [2024-04-24 05:26:44.647180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.707 qpair failed and we were unable to recover it. 00:31:07.707 [2024-04-24 05:26:44.647333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.707 [2024-04-24 05:26:44.647500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.707 [2024-04-24 05:26:44.647529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.707 qpair failed and we were unable to recover it. 00:31:07.707 [2024-04-24 05:26:44.647697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.707 [2024-04-24 05:26:44.647839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.707 [2024-04-24 05:26:44.647868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.707 qpair failed and we were unable to recover it. 
00:31:07.707 [2024-04-24 05:26:44.648031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.707 [2024-04-24 05:26:44.648229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.707 [2024-04-24 05:26:44.648258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.707 qpair failed and we were unable to recover it. 00:31:07.707 [2024-04-24 05:26:44.648418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.707 [2024-04-24 05:26:44.648546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.707 [2024-04-24 05:26:44.648574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.707 qpair failed and we were unable to recover it. 00:31:07.707 [2024-04-24 05:26:44.648746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.707 [2024-04-24 05:26:44.648897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.707 [2024-04-24 05:26:44.648923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.707 qpair failed and we were unable to recover it. 00:31:07.707 [2024-04-24 05:26:44.649074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.707 [2024-04-24 05:26:44.649222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.707 [2024-04-24 05:26:44.649248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.707 qpair failed and we were unable to recover it. 
00:31:07.707 [2024-04-24 05:26:44.649393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.707 [2024-04-24 05:26:44.649544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.707 [2024-04-24 05:26:44.649571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.707 qpair failed and we were unable to recover it. 00:31:07.707 [2024-04-24 05:26:44.649756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.707 [2024-04-24 05:26:44.649905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.707 [2024-04-24 05:26:44.649947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.707 qpair failed and we were unable to recover it. 00:31:07.707 [2024-04-24 05:26:44.650145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.707 [2024-04-24 05:26:44.650319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.707 [2024-04-24 05:26:44.650368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.707 qpair failed and we were unable to recover it. 00:31:07.707 [2024-04-24 05:26:44.650533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.707 [2024-04-24 05:26:44.650717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.707 [2024-04-24 05:26:44.650749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.707 qpair failed and we were unable to recover it. 
00:31:07.707 [2024-04-24 05:26:44.650872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.707 [2024-04-24 05:26:44.651024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.707 [2024-04-24 05:26:44.651050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.707 qpair failed and we were unable to recover it. 00:31:07.707 [2024-04-24 05:26:44.651202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.707 [2024-04-24 05:26:44.651376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.707 [2024-04-24 05:26:44.651404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.707 qpair failed and we were unable to recover it. 00:31:07.707 [2024-04-24 05:26:44.651591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.707 [2024-04-24 05:26:44.651789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.707 [2024-04-24 05:26:44.651815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.707 qpair failed and we were unable to recover it. 00:31:07.707 [2024-04-24 05:26:44.651945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.707 [2024-04-24 05:26:44.652075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.707 [2024-04-24 05:26:44.652101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.707 qpair failed and we were unable to recover it. 
00:31:07.707 [2024-04-24 05:26:44.652267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.707 [2024-04-24 05:26:44.652452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.707 [2024-04-24 05:26:44.652481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.707 qpair failed and we were unable to recover it. 00:31:07.707 [2024-04-24 05:26:44.652610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.707 [2024-04-24 05:26:44.652790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.652819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.708 qpair failed and we were unable to recover it. 00:31:07.708 [2024-04-24 05:26:44.653021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.653271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.653319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.708 qpair failed and we were unable to recover it. 00:31:07.708 [2024-04-24 05:26:44.653509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.653705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.653744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.708 qpair failed and we were unable to recover it. 
00:31:07.708 [2024-04-24 05:26:44.653895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.654041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.654079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.708 qpair failed and we were unable to recover it. 00:31:07.708 [2024-04-24 05:26:44.654236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.654442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.654489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.708 qpair failed and we were unable to recover it. 00:31:07.708 [2024-04-24 05:26:44.654701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.654863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.654905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.708 qpair failed and we were unable to recover it. 00:31:07.708 [2024-04-24 05:26:44.655127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.655393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.655435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.708 qpair failed and we were unable to recover it. 
00:31:07.708 [2024-04-24 05:26:44.655638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.655833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.655874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.708 qpair failed and we were unable to recover it. 00:31:07.708 [2024-04-24 05:26:44.656081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.656275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.656304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.708 qpair failed and we were unable to recover it. 00:31:07.708 [2024-04-24 05:26:44.656433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.656585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.656610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.708 qpair failed and we were unable to recover it. 00:31:07.708 [2024-04-24 05:26:44.656763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.656934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.656963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.708 qpair failed and we were unable to recover it. 
00:31:07.708 [2024-04-24 05:26:44.657127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.657291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.657319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.708 qpair failed and we were unable to recover it. 00:31:07.708 [2024-04-24 05:26:44.657484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.657611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.657647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.708 qpair failed and we were unable to recover it. 00:31:07.708 [2024-04-24 05:26:44.657771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.657926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.657961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.708 qpair failed and we were unable to recover it. 00:31:07.708 [2024-04-24 05:26:44.658114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.658306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.658349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.708 qpair failed and we were unable to recover it. 
00:31:07.708 [2024-04-24 05:26:44.658505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.658729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.658767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.708 qpair failed and we were unable to recover it. 00:31:07.708 [2024-04-24 05:26:44.658969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.659128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.659159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.708 qpair failed and we were unable to recover it. 00:31:07.708 [2024-04-24 05:26:44.659340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.659518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.659544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.708 qpair failed and we were unable to recover it. 00:31:07.708 [2024-04-24 05:26:44.659691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.659886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.659915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.708 qpair failed and we were unable to recover it. 
00:31:07.708 [2024-04-24 05:26:44.660092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.660241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.660268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.708 qpair failed and we were unable to recover it. 00:31:07.708 [2024-04-24 05:26:44.660400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.660518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.660543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.708 qpair failed and we were unable to recover it. 00:31:07.708 [2024-04-24 05:26:44.660674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.660829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.660872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.708 qpair failed and we were unable to recover it. 00:31:07.708 [2024-04-24 05:26:44.661033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.661205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.661234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.708 qpair failed and we were unable to recover it. 
00:31:07.708 [2024-04-24 05:26:44.661375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.661542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.661572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.708 qpair failed and we were unable to recover it. 00:31:07.708 [2024-04-24 05:26:44.661749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.661937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.661988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.708 qpair failed and we were unable to recover it. 00:31:07.708 [2024-04-24 05:26:44.662172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.662289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.662314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.708 qpair failed and we were unable to recover it. 00:31:07.708 [2024-04-24 05:26:44.662466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.662591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.662616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.708 qpair failed and we were unable to recover it. 
00:31:07.708 [2024-04-24 05:26:44.662751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.662881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.662921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.708 qpair failed and we were unable to recover it. 00:31:07.708 [2024-04-24 05:26:44.663057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.663239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.663267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.708 qpair failed and we were unable to recover it. 00:31:07.708 [2024-04-24 05:26:44.663445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.708 [2024-04-24 05:26:44.663655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.709 [2024-04-24 05:26:44.663684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.709 qpair failed and we were unable to recover it. 00:31:07.709 [2024-04-24 05:26:44.663828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.709 [2024-04-24 05:26:44.663971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.709 [2024-04-24 05:26:44.663999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.709 qpair failed and we were unable to recover it. 
00:31:07.709 [2024-04-24 05:26:44.664144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.709 [2024-04-24 05:26:44.664266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.709 [2024-04-24 05:26:44.664291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.709 qpair failed and we were unable to recover it. 00:31:07.709 [2024-04-24 05:26:44.664419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.709 [2024-04-24 05:26:44.664549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.709 [2024-04-24 05:26:44.664574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.709 qpair failed and we were unable to recover it. 00:31:07.709 [2024-04-24 05:26:44.664778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.709 [2024-04-24 05:26:44.664899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.709 [2024-04-24 05:26:44.664940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.709 qpair failed and we were unable to recover it. 00:31:07.709 [2024-04-24 05:26:44.665137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.709 [2024-04-24 05:26:44.665302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.709 [2024-04-24 05:26:44.665330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.709 qpair failed and we were unable to recover it. 
00:31:07.709 [2024-04-24 05:26:44.665470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.709 [2024-04-24 05:26:44.665607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.709 [2024-04-24 05:26:44.665646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.709 qpair failed and we were unable to recover it. 00:31:07.709 [2024-04-24 05:26:44.665790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.709 [2024-04-24 05:26:44.665937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.709 [2024-04-24 05:26:44.665962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.709 qpair failed and we were unable to recover it. 00:31:07.709 [2024-04-24 05:26:44.666152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.709 [2024-04-24 05:26:44.666306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.709 [2024-04-24 05:26:44.666332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.709 qpair failed and we were unable to recover it. 00:31:07.709 [2024-04-24 05:26:44.666501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.709 [2024-04-24 05:26:44.666670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.709 [2024-04-24 05:26:44.666716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.709 qpair failed and we were unable to recover it. 
00:31:07.709 [2024-04-24 05:26:44.666870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.709 [2024-04-24 05:26:44.667044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.709 [2024-04-24 05:26:44.667070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.709 qpair failed and we were unable to recover it. 00:31:07.709 [2024-04-24 05:26:44.667222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.709 [2024-04-24 05:26:44.667363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.709 [2024-04-24 05:26:44.667389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.709 qpair failed and we were unable to recover it. 00:31:07.709 [2024-04-24 05:26:44.667516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.709 [2024-04-24 05:26:44.667670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.709 [2024-04-24 05:26:44.667698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.709 qpair failed and we were unable to recover it. 00:31:07.709 [2024-04-24 05:26:44.667842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.709 [2024-04-24 05:26:44.668017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.709 [2024-04-24 05:26:44.668043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.709 qpair failed and we were unable to recover it. 
00:31:07.709 [2024-04-24 05:26:44.668193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.709 [2024-04-24 05:26:44.668363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.709 [2024-04-24 05:26:44.668391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.709 qpair failed and we were unable to recover it. 00:31:07.709 [2024-04-24 05:26:44.668558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.709 [2024-04-24 05:26:44.668706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.709 [2024-04-24 05:26:44.668732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.709 qpair failed and we were unable to recover it. 00:31:07.709 [2024-04-24 05:26:44.668878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.709 [2024-04-24 05:26:44.669033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.709 [2024-04-24 05:26:44.669058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.709 qpair failed and we were unable to recover it. 00:31:07.709 [2024-04-24 05:26:44.669225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.709 [2024-04-24 05:26:44.669425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.709 [2024-04-24 05:26:44.669451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.709 qpair failed and we were unable to recover it. 
00:31:07.709 [2024-04-24 05:26:44.669616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.709 [2024-04-24 05:26:44.669814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.709 [2024-04-24 05:26:44.669840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.709 qpair failed and we were unable to recover it.
00:31:07.709 [2024-04-24 05:26:44.670025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.709 [2024-04-24 05:26:44.670188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.709 [2024-04-24 05:26:44.670216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.709 qpair failed and we were unable to recover it.
00:31:07.709 [2024-04-24 05:26:44.670378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.709 [2024-04-24 05:26:44.670546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.709 [2024-04-24 05:26:44.670575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.709 qpair failed and we were unable to recover it.
00:31:07.709 [2024-04-24 05:26:44.670735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.709 [2024-04-24 05:26:44.670889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.709 [2024-04-24 05:26:44.670914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.709 qpair failed and we were unable to recover it.
00:31:07.709 [2024-04-24 05:26:44.671071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.709 [2024-04-24 05:26:44.671247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.709 [2024-04-24 05:26:44.671290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.709 qpair failed and we were unable to recover it.
00:31:07.709 [2024-04-24 05:26:44.671480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.709 [2024-04-24 05:26:44.671644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.709 [2024-04-24 05:26:44.671673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.709 qpair failed and we were unable to recover it.
00:31:07.709 [2024-04-24 05:26:44.671863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.709 [2024-04-24 05:26:44.672005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.709 [2024-04-24 05:26:44.672034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.709 qpair failed and we were unable to recover it.
00:31:07.709 [2024-04-24 05:26:44.672205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.709 [2024-04-24 05:26:44.672375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.709 [2024-04-24 05:26:44.672403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.709 qpair failed and we were unable to recover it.
00:31:07.709 [2024-04-24 05:26:44.672538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.709 [2024-04-24 05:26:44.672684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.709 [2024-04-24 05:26:44.672711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.709 qpair failed and we were unable to recover it.
00:31:07.709 [2024-04-24 05:26:44.672889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.709 [2024-04-24 05:26:44.673017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.709 [2024-04-24 05:26:44.673043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.709 qpair failed and we were unable to recover it.
00:31:07.709 [2024-04-24 05:26:44.673214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.709 [2024-04-24 05:26:44.673378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.709 [2024-04-24 05:26:44.673406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.709 qpair failed and we were unable to recover it.
00:31:07.709 [2024-04-24 05:26:44.673575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.709 [2024-04-24 05:26:44.673723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.710 [2024-04-24 05:26:44.673754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.710 qpair failed and we were unable to recover it.
00:31:07.710 [2024-04-24 05:26:44.673890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.710 [2024-04-24 05:26:44.674077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.710 [2024-04-24 05:26:44.674113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.710 qpair failed and we were unable to recover it.
00:31:07.710 [2024-04-24 05:26:44.674320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.710 [2024-04-24 05:26:44.674507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.710 [2024-04-24 05:26:44.674550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.710 qpair failed and we were unable to recover it.
00:31:07.710 [2024-04-24 05:26:44.674702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.710 [2024-04-24 05:26:44.674848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.710 [2024-04-24 05:26:44.674876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.710 qpair failed and we were unable to recover it.
00:31:07.710 [2024-04-24 05:26:44.674997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.710 [2024-04-24 05:26:44.675143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.710 [2024-04-24 05:26:44.675168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.710 qpair failed and we were unable to recover it.
00:31:07.710 [2024-04-24 05:26:44.675294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.710 [2024-04-24 05:26:44.675491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.710 [2024-04-24 05:26:44.675520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.710 qpair failed and we were unable to recover it.
00:31:07.710 [2024-04-24 05:26:44.675654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.710 [2024-04-24 05:26:44.675819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.710 [2024-04-24 05:26:44.675847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.710 qpair failed and we were unable to recover it.
00:31:07.710 [2024-04-24 05:26:44.676041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.710 [2024-04-24 05:26:44.676207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.710 [2024-04-24 05:26:44.676241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.710 qpair failed and we were unable to recover it.
00:31:07.710 [2024-04-24 05:26:44.676420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.710 [2024-04-24 05:26:44.676553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.710 [2024-04-24 05:26:44.676578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.710 qpair failed and we were unable to recover it.
00:31:07.710 [2024-04-24 05:26:44.676704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.710 [2024-04-24 05:26:44.676830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.710 [2024-04-24 05:26:44.676856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.710 qpair failed and we were unable to recover it.
00:31:07.710 [2024-04-24 05:26:44.677005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.710 [2024-04-24 05:26:44.677133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.710 [2024-04-24 05:26:44.677162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.710 qpair failed and we were unable to recover it.
00:31:07.710 [2024-04-24 05:26:44.677315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.710 [2024-04-24 05:26:44.677486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.710 [2024-04-24 05:26:44.677514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.710 qpair failed and we were unable to recover it.
00:31:07.710 [2024-04-24 05:26:44.677672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.710 [2024-04-24 05:26:44.677796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.710 [2024-04-24 05:26:44.677823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.710 qpair failed and we were unable to recover it.
00:31:07.710 [2024-04-24 05:26:44.677997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.710 [2024-04-24 05:26:44.678161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.710 [2024-04-24 05:26:44.678188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.710 qpair failed and we were unable to recover it.
00:31:07.710 [2024-04-24 05:26:44.678367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.710 [2024-04-24 05:26:44.678487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.710 [2024-04-24 05:26:44.678514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.710 qpair failed and we were unable to recover it.
00:31:07.710 [2024-04-24 05:26:44.678657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.710 [2024-04-24 05:26:44.678789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.710 [2024-04-24 05:26:44.678814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.710 qpair failed and we were unable to recover it.
00:31:07.710 [2024-04-24 05:26:44.678985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.710 [2024-04-24 05:26:44.679151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.710 [2024-04-24 05:26:44.679180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.710 qpair failed and we were unable to recover it.
00:31:07.710 [2024-04-24 05:26:44.679310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.710 [2024-04-24 05:26:44.679448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.710 [2024-04-24 05:26:44.679474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.710 qpair failed and we were unable to recover it.
00:31:07.710 [2024-04-24 05:26:44.679624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.710 [2024-04-24 05:26:44.679783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.710 [2024-04-24 05:26:44.679824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.710 qpair failed and we were unable to recover it.
00:31:07.710 [2024-04-24 05:26:44.679991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.710 [2024-04-24 05:26:44.680118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.710 [2024-04-24 05:26:44.680143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.710 qpair failed and we were unable to recover it.
00:31:07.710 [2024-04-24 05:26:44.680323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.710 [2024-04-24 05:26:44.680460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.710 [2024-04-24 05:26:44.680488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.710 qpair failed and we were unable to recover it.
00:31:07.710 [2024-04-24 05:26:44.680638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.710 [2024-04-24 05:26:44.680785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.710 [2024-04-24 05:26:44.680810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.710 qpair failed and we were unable to recover it.
00:31:07.711 [2024-04-24 05:26:44.681000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.681146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.681172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.711 qpair failed and we were unable to recover it.
00:31:07.711 [2024-04-24 05:26:44.681283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.681406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.681432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.711 qpair failed and we were unable to recover it.
00:31:07.711 [2024-04-24 05:26:44.681553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.681685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.681712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.711 qpair failed and we were unable to recover it.
00:31:07.711 [2024-04-24 05:26:44.681835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.681951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.681976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.711 qpair failed and we were unable to recover it.
00:31:07.711 [2024-04-24 05:26:44.682139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.682278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.682306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.711 qpair failed and we were unable to recover it.
00:31:07.711 [2024-04-24 05:26:44.682504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.682682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.682708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.711 qpair failed and we were unable to recover it.
00:31:07.711 [2024-04-24 05:26:44.682843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.683033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.683059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.711 qpair failed and we were unable to recover it.
00:31:07.711 [2024-04-24 05:26:44.683211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.683357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.683383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.711 qpair failed and we were unable to recover it.
00:31:07.711 [2024-04-24 05:26:44.683509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.683637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.683663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.711 qpair failed and we were unable to recover it.
00:31:07.711 [2024-04-24 05:26:44.683792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.683940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.683966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.711 qpair failed and we were unable to recover it.
00:31:07.711 [2024-04-24 05:26:44.684088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.684238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.684264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.711 qpair failed and we were unable to recover it.
00:31:07.711 [2024-04-24 05:26:44.684437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.684597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.684625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.711 qpair failed and we were unable to recover it.
00:31:07.711 [2024-04-24 05:26:44.684794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.684974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.684999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.711 qpair failed and we were unable to recover it.
00:31:07.711 [2024-04-24 05:26:44.685115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.685291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.685317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.711 qpair failed and we were unable to recover it.
00:31:07.711 [2024-04-24 05:26:44.685442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.685616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.685658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.711 qpair failed and we were unable to recover it.
00:31:07.711 [2024-04-24 05:26:44.685831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.685945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.685970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.711 qpair failed and we were unable to recover it.
00:31:07.711 [2024-04-24 05:26:44.686096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.686220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.686245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.711 qpair failed and we were unable to recover it.
00:31:07.711 [2024-04-24 05:26:44.686400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.686518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.686544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.711 qpair failed and we were unable to recover it.
00:31:07.711 [2024-04-24 05:26:44.686667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.686818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.686844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.711 qpair failed and we were unable to recover it.
00:31:07.711 [2024-04-24 05:26:44.686967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.687113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.687139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.711 qpair failed and we were unable to recover it.
00:31:07.711 [2024-04-24 05:26:44.687287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.687490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.687518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.711 qpair failed and we were unable to recover it.
00:31:07.711 [2024-04-24 05:26:44.687657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.687780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.687806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.711 qpair failed and we were unable to recover it.
00:31:07.711 [2024-04-24 05:26:44.687956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.688123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.688151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.711 qpair failed and we were unable to recover it.
00:31:07.711 [2024-04-24 05:26:44.688330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.688455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.711 [2024-04-24 05:26:44.688480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.711 qpair failed and we were unable to recover it.
00:31:07.712 [2024-04-24 05:26:44.688671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.712 [2024-04-24 05:26:44.688816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.712 [2024-04-24 05:26:44.688842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.712 qpair failed and we were unable to recover it.
00:31:07.712 [2024-04-24 05:26:44.688998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.712 [2024-04-24 05:26:44.689120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.712 [2024-04-24 05:26:44.689146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.712 qpair failed and we were unable to recover it.
00:31:07.712 [2024-04-24 05:26:44.689315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.712 [2024-04-24 05:26:44.689478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.712 [2024-04-24 05:26:44.689506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.712 qpair failed and we were unable to recover it.
00:31:07.712 [2024-04-24 05:26:44.689650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.712 [2024-04-24 05:26:44.689806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.712 [2024-04-24 05:26:44.689833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.712 qpair failed and we were unable to recover it.
00:31:07.712 [2024-04-24 05:26:44.689973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.712 [2024-04-24 05:26:44.690169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.712 [2024-04-24 05:26:44.690197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.712 qpair failed and we were unable to recover it.
00:31:07.712 [2024-04-24 05:26:44.690363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.712 [2024-04-24 05:26:44.690513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.712 [2024-04-24 05:26:44.690555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.712 qpair failed and we were unable to recover it.
00:31:07.712 [2024-04-24 05:26:44.690694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.712 [2024-04-24 05:26:44.690872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.712 [2024-04-24 05:26:44.690897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.712 qpair failed and we were unable to recover it.
00:31:07.712 [2024-04-24 05:26:44.691068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.712 [2024-04-24 05:26:44.691267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.712 [2024-04-24 05:26:44.691294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.712 qpair failed and we were unable to recover it.
00:31:07.712 [2024-04-24 05:26:44.691455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.712 [2024-04-24 05:26:44.691609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.712 [2024-04-24 05:26:44.691642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.712 qpair failed and we were unable to recover it.
00:31:07.712 [2024-04-24 05:26:44.691760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.712 [2024-04-24 05:26:44.691887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.712 [2024-04-24 05:26:44.691913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.712 qpair failed and we were unable to recover it.
00:31:07.712 [2024-04-24 05:26:44.692031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.712 [2024-04-24 05:26:44.692176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.712 [2024-04-24 05:26:44.692204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.712 qpair failed and we were unable to recover it.
00:31:07.712 [2024-04-24 05:26:44.692398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.712 [2024-04-24 05:26:44.692530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.712 [2024-04-24 05:26:44.692559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.712 qpair failed and we were unable to recover it.
00:31:07.712 [2024-04-24 05:26:44.692737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.712 [2024-04-24 05:26:44.692882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.712 [2024-04-24 05:26:44.692927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.712 qpair failed and we were unable to recover it.
00:31:07.712 [2024-04-24 05:26:44.693078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.712 [2024-04-24 05:26:44.693250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.712 [2024-04-24 05:26:44.693291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.712 qpair failed and we were unable to recover it.
00:31:07.712 [2024-04-24 05:26:44.693466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.712 [2024-04-24 05:26:44.693583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.712 [2024-04-24 05:26:44.693608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.712 qpair failed and we were unable to recover it.
00:31:07.712 [2024-04-24 05:26:44.693750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.712 [2024-04-24 05:26:44.693880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.712 [2024-04-24 05:26:44.693912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.712 qpair failed and we were unable to recover it.
00:31:07.712 [2024-04-24 05:26:44.694058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.712 [2024-04-24 05:26:44.694217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.712 [2024-04-24 05:26:44.694243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.712 qpair failed and we were unable to recover it.
00:31:07.712 [2024-04-24 05:26:44.694455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.712 [2024-04-24 05:26:44.694586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.712 [2024-04-24 05:26:44.694613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.712 qpair failed and we were unable to recover it.
00:31:07.712 [2024-04-24 05:26:44.694754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.712 [2024-04-24 05:26:44.694946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.712 [2024-04-24 05:26:44.694979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.712 qpair failed and we were unable to recover it.
00:31:07.712 [2024-04-24 05:26:44.695151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.712 [2024-04-24 05:26:44.695327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.712 [2024-04-24 05:26:44.695353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.712 qpair failed and we were unable to recover it.
00:31:07.712 [2024-04-24 05:26:44.695505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.712 [2024-04-24 05:26:44.695658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.712 [2024-04-24 05:26:44.695684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.712 qpair failed and we were unable to recover it.
00:31:07.713 [2024-04-24 05:26:44.695811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.713 [2024-04-24 05:26:44.695960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.713 [2024-04-24 05:26:44.695985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.713 qpair failed and we were unable to recover it.
00:31:07.713 [2024-04-24 05:26:44.696183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.713 [2024-04-24 05:26:44.696344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.713 [2024-04-24 05:26:44.696372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.713 qpair failed and we were unable to recover it.
00:31:07.713 [2024-04-24 05:26:44.696536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.713 [2024-04-24 05:26:44.696729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.713 [2024-04-24 05:26:44.696755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.713 qpair failed and we were unable to recover it.
00:31:07.713 [2024-04-24 05:26:44.696874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.713 [2024-04-24 05:26:44.697049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.713 [2024-04-24 05:26:44.697074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.713 qpair failed and we were unable to recover it.
00:31:07.713 [2024-04-24 05:26:44.697249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.713 [2024-04-24 05:26:44.697382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.713 [2024-04-24 05:26:44.697409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.713 qpair failed and we were unable to recover it.
00:31:07.713 [2024-04-24 05:26:44.697531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.713 [2024-04-24 05:26:44.697661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.713 [2024-04-24 05:26:44.697688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.713 qpair failed and we were unable to recover it.
00:31:07.713 [2024-04-24 05:26:44.697878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.713 [2024-04-24 05:26:44.698073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.713 [2024-04-24 05:26:44.698102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.713 qpair failed and we were unable to recover it.
00:31:07.713 [2024-04-24 05:26:44.698235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.713 [2024-04-24 05:26:44.698377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.713 [2024-04-24 05:26:44.698403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.713 qpair failed and we were unable to recover it.
00:31:07.713 [2024-04-24 05:26:44.698581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.713 [2024-04-24 05:26:44.698776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.713 [2024-04-24 05:26:44.698802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.713 qpair failed and we were unable to recover it.
00:31:07.713 [2024-04-24 05:26:44.698946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.713 [2024-04-24 05:26:44.699084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.713 [2024-04-24 05:26:44.699110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.713 qpair failed and we were unable to recover it.
00:31:07.713 [2024-04-24 05:26:44.699289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.713 [2024-04-24 05:26:44.699450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.713 [2024-04-24 05:26:44.699479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.713 qpair failed and we were unable to recover it. 00:31:07.713 [2024-04-24 05:26:44.699637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.713 [2024-04-24 05:26:44.699814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.713 [2024-04-24 05:26:44.699842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.713 qpair failed and we were unable to recover it. 00:31:07.713 [2024-04-24 05:26:44.699976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.713 [2024-04-24 05:26:44.700156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.713 [2024-04-24 05:26:44.700181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.713 qpair failed and we were unable to recover it. 00:31:07.713 [2024-04-24 05:26:44.700298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.713 [2024-04-24 05:26:44.700426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.713 [2024-04-24 05:26:44.700451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.713 qpair failed and we were unable to recover it. 
00:31:07.713 [2024-04-24 05:26:44.700611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.713 [2024-04-24 05:26:44.700745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.713 [2024-04-24 05:26:44.700772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.713 qpair failed and we were unable to recover it. 00:31:07.713 [2024-04-24 05:26:44.700931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.713 [2024-04-24 05:26:44.701088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.713 [2024-04-24 05:26:44.701116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.713 qpair failed and we were unable to recover it. 00:31:07.713 [2024-04-24 05:26:44.701256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.713 [2024-04-24 05:26:44.701404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.713 [2024-04-24 05:26:44.701448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.713 qpair failed and we were unable to recover it. 00:31:07.713 [2024-04-24 05:26:44.701578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.713 [2024-04-24 05:26:44.701718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.713 [2024-04-24 05:26:44.701748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.713 qpair failed and we were unable to recover it. 
00:31:07.713 [2024-04-24 05:26:44.701913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.713 [2024-04-24 05:26:44.702096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.713 [2024-04-24 05:26:44.702122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.713 qpair failed and we were unable to recover it. 00:31:07.713 [2024-04-24 05:26:44.702303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.713 [2024-04-24 05:26:44.702451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.713 [2024-04-24 05:26:44.702476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.713 qpair failed and we were unable to recover it. 00:31:07.713 [2024-04-24 05:26:44.702654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.713 [2024-04-24 05:26:44.702781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.713 [2024-04-24 05:26:44.702807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.713 qpair failed and we were unable to recover it. 00:31:07.713 [2024-04-24 05:26:44.702929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.713 [2024-04-24 05:26:44.703117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.713 [2024-04-24 05:26:44.703143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.713 qpair failed and we were unable to recover it. 
00:31:07.713 [2024-04-24 05:26:44.703300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.713 [2024-04-24 05:26:44.703512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.713 [2024-04-24 05:26:44.703540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.713 qpair failed and we were unable to recover it. 00:31:07.714 [2024-04-24 05:26:44.703704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.703882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.703909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.714 qpair failed and we were unable to recover it. 00:31:07.714 [2024-04-24 05:26:44.704064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.704216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.704242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.714 qpair failed and we were unable to recover it. 00:31:07.714 [2024-04-24 05:26:44.704369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.704544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.704572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.714 qpair failed and we were unable to recover it. 
00:31:07.714 [2024-04-24 05:26:44.704715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.704887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.704929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.714 qpair failed and we were unable to recover it. 00:31:07.714 [2024-04-24 05:26:44.705098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.705296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.705321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.714 qpair failed and we were unable to recover it. 00:31:07.714 [2024-04-24 05:26:44.705493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.705664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.705692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.714 qpair failed and we were unable to recover it. 00:31:07.714 [2024-04-24 05:26:44.705817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.705945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.705970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.714 qpair failed and we were unable to recover it. 
00:31:07.714 [2024-04-24 05:26:44.706149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.706291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.706319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.714 qpair failed and we were unable to recover it. 00:31:07.714 [2024-04-24 05:26:44.706482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.706660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.706690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.714 qpair failed and we were unable to recover it. 00:31:07.714 [2024-04-24 05:26:44.706835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.706996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.707022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.714 qpair failed and we were unable to recover it. 00:31:07.714 [2024-04-24 05:26:44.707207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.707366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.707396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.714 qpair failed and we were unable to recover it. 
00:31:07.714 [2024-04-24 05:26:44.707612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.707741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.707767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.714 qpair failed and we were unable to recover it. 00:31:07.714 [2024-04-24 05:26:44.707887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.708059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.708085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.714 qpair failed and we were unable to recover it. 00:31:07.714 [2024-04-24 05:26:44.708210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.708408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.708437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.714 qpair failed and we were unable to recover it. 00:31:07.714 [2024-04-24 05:26:44.708601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.708743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.708772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.714 qpair failed and we were unable to recover it. 
00:31:07.714 [2024-04-24 05:26:44.708933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.709097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.709125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.714 qpair failed and we were unable to recover it. 00:31:07.714 [2024-04-24 05:26:44.709282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.709456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.709482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.714 qpair failed and we were unable to recover it. 00:31:07.714 [2024-04-24 05:26:44.709642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.709778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.709804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.714 qpair failed and we were unable to recover it. 00:31:07.714 [2024-04-24 05:26:44.709928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.710075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.710100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.714 qpair failed and we were unable to recover it. 
00:31:07.714 [2024-04-24 05:26:44.710268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.710459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.710491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.714 qpair failed and we were unable to recover it. 00:31:07.714 [2024-04-24 05:26:44.710663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.710870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.710895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.714 qpair failed and we were unable to recover it. 00:31:07.714 [2024-04-24 05:26:44.711039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.711182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.711223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.714 qpair failed and we were unable to recover it. 00:31:07.714 [2024-04-24 05:26:44.711415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.711566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.711594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.714 qpair failed and we were unable to recover it. 
00:31:07.714 [2024-04-24 05:26:44.711742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.711858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.711884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.714 qpair failed and we were unable to recover it. 00:31:07.714 [2024-04-24 05:26:44.712058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.712217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.712246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.714 qpair failed and we were unable to recover it. 00:31:07.714 [2024-04-24 05:26:44.712390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.714 [2024-04-24 05:26:44.712538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.715 [2024-04-24 05:26:44.712563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.715 qpair failed and we were unable to recover it. 00:31:07.715 [2024-04-24 05:26:44.712706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.715 [2024-04-24 05:26:44.712875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.715 [2024-04-24 05:26:44.712903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.715 qpair failed and we were unable to recover it. 
00:31:07.715 [2024-04-24 05:26:44.713096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.715 [2024-04-24 05:26:44.713233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.715 [2024-04-24 05:26:44.713276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.715 qpair failed and we were unable to recover it. 00:31:07.715 [2024-04-24 05:26:44.713406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.715 [2024-04-24 05:26:44.713584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.715 [2024-04-24 05:26:44.713610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.715 qpair failed and we were unable to recover it. 00:31:07.715 [2024-04-24 05:26:44.713740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.715 [2024-04-24 05:26:44.713865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.715 [2024-04-24 05:26:44.713895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.715 qpair failed and we were unable to recover it. 00:31:07.715 [2024-04-24 05:26:44.714048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.715 [2024-04-24 05:26:44.714211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.715 [2024-04-24 05:26:44.714239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.715 qpair failed and we were unable to recover it. 
00:31:07.715 [2024-04-24 05:26:44.714397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.715 [2024-04-24 05:26:44.714536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.715 [2024-04-24 05:26:44.714564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.715 qpair failed and we were unable to recover it. 00:31:07.715 [2024-04-24 05:26:44.714735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.715 [2024-04-24 05:26:44.714902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.715 [2024-04-24 05:26:44.714930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.715 qpair failed and we were unable to recover it. 00:31:07.715 [2024-04-24 05:26:44.715098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.715 [2024-04-24 05:26:44.715257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.715 [2024-04-24 05:26:44.715284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.715 qpair failed and we were unable to recover it. 00:31:07.715 [2024-04-24 05:26:44.715433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.715 [2024-04-24 05:26:44.715560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.715 [2024-04-24 05:26:44.715586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.715 qpair failed and we were unable to recover it. 
00:31:07.715 [2024-04-24 05:26:44.715740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.715 [2024-04-24 05:26:44.715911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.715 [2024-04-24 05:26:44.715940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.715 qpair failed and we were unable to recover it. 00:31:07.715 [2024-04-24 05:26:44.716127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.715 [2024-04-24 05:26:44.716311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.715 [2024-04-24 05:26:44.716358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.715 qpair failed and we were unable to recover it. 00:31:07.715 [2024-04-24 05:26:44.716523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.715 [2024-04-24 05:26:44.716648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.715 [2024-04-24 05:26:44.716674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.715 qpair failed and we were unable to recover it. 00:31:07.715 [2024-04-24 05:26:44.716796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.715 [2024-04-24 05:26:44.716946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.715 [2024-04-24 05:26:44.716971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.715 qpair failed and we were unable to recover it. 
00:31:07.715 [2024-04-24 05:26:44.717096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.715 [2024-04-24 05:26:44.717226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.715 [2024-04-24 05:26:44.717253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.715 qpair failed and we were unable to recover it. 00:31:07.715 [2024-04-24 05:26:44.717380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.715 [2024-04-24 05:26:44.717542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.715 [2024-04-24 05:26:44.717568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.715 qpair failed and we were unable to recover it. 00:31:07.715 [2024-04-24 05:26:44.717682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.715 [2024-04-24 05:26:44.717806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.715 [2024-04-24 05:26:44.717833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.715 qpair failed and we were unable to recover it. 00:31:07.715 [2024-04-24 05:26:44.717998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.715 [2024-04-24 05:26:44.718163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.715 [2024-04-24 05:26:44.718191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.715 qpair failed and we were unable to recover it. 
00:31:07.715 [2024-04-24 05:26:44.718362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.715 [2024-04-24 05:26:44.718529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.715 [2024-04-24 05:26:44.718556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.715 qpair failed and we were unable to recover it. 00:31:07.715 [2024-04-24 05:26:44.718706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.715 [2024-04-24 05:26:44.718858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.715 [2024-04-24 05:26:44.718884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.715 qpair failed and we were unable to recover it. 00:31:07.715 [2024-04-24 05:26:44.719006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.715 [2024-04-24 05:26:44.719157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.715 [2024-04-24 05:26:44.719183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.715 qpair failed and we were unable to recover it. 00:31:07.715 [2024-04-24 05:26:44.719332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.715 [2024-04-24 05:26:44.719459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.715 [2024-04-24 05:26:44.719485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.715 qpair failed and we were unable to recover it. 
00:31:07.715 [2024-04-24 05:26:44.719664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.715 [2024-04-24 05:26:44.719806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.715 [2024-04-24 05:26:44.719834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.715 qpair failed and we were unable to recover it. 00:31:07.715 [2024-04-24 05:26:44.720008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.715 [2024-04-24 05:26:44.720174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.716 [2024-04-24 05:26:44.720203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.716 qpair failed and we were unable to recover it. 00:31:07.716 [2024-04-24 05:26:44.720370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.716 [2024-04-24 05:26:44.720534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.716 [2024-04-24 05:26:44.720562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.716 qpair failed and we were unable to recover it. 00:31:07.716 [2024-04-24 05:26:44.720736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.716 [2024-04-24 05:26:44.720864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.716 [2024-04-24 05:26:44.720890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.716 qpair failed and we were unable to recover it. 
00:31:07.716 [2024-04-24 05:26:44.721075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.716 [2024-04-24 05:26:44.721253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.716 [2024-04-24 05:26:44.721279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.716 qpair failed and we were unable to recover it.
00:31:07.716 [2024-04-24 05:26:44.721444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.716 [2024-04-24 05:26:44.721595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.716 [2024-04-24 05:26:44.721620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.716 qpair failed and we were unable to recover it.
00:31:07.716 [2024-04-24 05:26:44.721766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.716 [2024-04-24 05:26:44.721925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.716 [2024-04-24 05:26:44.721968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.716 qpair failed and we were unable to recover it.
00:31:07.716 [2024-04-24 05:26:44.722106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.716 [2024-04-24 05:26:44.722293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.716 [2024-04-24 05:26:44.722320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.716 qpair failed and we were unable to recover it.
00:31:07.716 [2024-04-24 05:26:44.722491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.716 [2024-04-24 05:26:44.722616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.716 [2024-04-24 05:26:44.722659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.716 qpair failed and we were unable to recover it.
00:31:07.716 [2024-04-24 05:26:44.722797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.716 [2024-04-24 05:26:44.722945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.716 [2024-04-24 05:26:44.722970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.716 qpair failed and we were unable to recover it.
00:31:07.716 [2024-04-24 05:26:44.723145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.716 [2024-04-24 05:26:44.723278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.716 [2024-04-24 05:26:44.723304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.716 qpair failed and we were unable to recover it.
00:31:07.716 [2024-04-24 05:26:44.723481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.716 [2024-04-24 05:26:44.723657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.716 [2024-04-24 05:26:44.723701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.716 qpair failed and we were unable to recover it.
00:31:07.716 [2024-04-24 05:26:44.723855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.716 [2024-04-24 05:26:44.724001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.716 [2024-04-24 05:26:44.724029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.716 qpair failed and we were unable to recover it.
00:31:07.716 [2024-04-24 05:26:44.724233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.716 [2024-04-24 05:26:44.724362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.716 [2024-04-24 05:26:44.724387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.716 qpair failed and we were unable to recover it.
00:31:07.716 [2024-04-24 05:26:44.724533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.716 [2024-04-24 05:26:44.724708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.716 [2024-04-24 05:26:44.724737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.716 qpair failed and we were unable to recover it.
00:31:07.716 [2024-04-24 05:26:44.724943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.716 [2024-04-24 05:26:44.725067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.716 [2024-04-24 05:26:44.725093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.716 qpair failed and we were unable to recover it.
00:31:07.716 [2024-04-24 05:26:44.725224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.716 [2024-04-24 05:26:44.725375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.716 [2024-04-24 05:26:44.725401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.716 qpair failed and we were unable to recover it.
00:31:07.716 [2024-04-24 05:26:44.725550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.716 [2024-04-24 05:26:44.725754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.716 [2024-04-24 05:26:44.725784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.716 qpair failed and we were unable to recover it.
00:31:07.716 [2024-04-24 05:26:44.725924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.716 [2024-04-24 05:26:44.726070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.716 [2024-04-24 05:26:44.726096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.716 qpair failed and we were unable to recover it.
00:31:07.716 [2024-04-24 05:26:44.726243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.716 [2024-04-24 05:26:44.726392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.716 [2024-04-24 05:26:44.726420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.716 qpair failed and we were unable to recover it.
00:31:07.716 [2024-04-24 05:26:44.726575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.716 [2024-04-24 05:26:44.726732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.716 [2024-04-24 05:26:44.726757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.716 qpair failed and we were unable to recover it.
00:31:07.716 [2024-04-24 05:26:44.726873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.716 [2024-04-24 05:26:44.727026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.716 [2024-04-24 05:26:44.727051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.716 qpair failed and we were unable to recover it.
00:31:07.716 [2024-04-24 05:26:44.727202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.716 [2024-04-24 05:26:44.727323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.716 [2024-04-24 05:26:44.727349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.717 qpair failed and we were unable to recover it.
00:31:07.717 [2024-04-24 05:26:44.727469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.727586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.727612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.717 qpair failed and we were unable to recover it.
00:31:07.717 [2024-04-24 05:26:44.727810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.727967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.727995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.717 qpair failed and we were unable to recover it.
00:31:07.717 [2024-04-24 05:26:44.728152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.728310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.728338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.717 qpair failed and we were unable to recover it.
00:31:07.717 [2024-04-24 05:26:44.728514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.728672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.728699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.717 qpair failed and we were unable to recover it.
00:31:07.717 [2024-04-24 05:26:44.728842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.728991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.729017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.717 qpair failed and we were unable to recover it.
00:31:07.717 [2024-04-24 05:26:44.729143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.729306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.729334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.717 qpair failed and we were unable to recover it.
00:31:07.717 [2024-04-24 05:26:44.729461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.729620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.729661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.717 qpair failed and we were unable to recover it.
00:31:07.717 [2024-04-24 05:26:44.729816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.729935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.729963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.717 qpair failed and we were unable to recover it.
00:31:07.717 [2024-04-24 05:26:44.730147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.730328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.730355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.717 qpair failed and we were unable to recover it.
00:31:07.717 [2024-04-24 05:26:44.730528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.730698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.730724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.717 qpair failed and we were unable to recover it.
00:31:07.717 [2024-04-24 05:26:44.730903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.731021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.731050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.717 qpair failed and we were unable to recover it.
00:31:07.717 [2024-04-24 05:26:44.731206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.731326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.731353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.717 qpair failed and we were unable to recover it.
00:31:07.717 [2024-04-24 05:26:44.731503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.731673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.731703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.717 qpair failed and we were unable to recover it.
00:31:07.717 [2024-04-24 05:26:44.731868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.732048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.732077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.717 qpair failed and we were unable to recover it.
00:31:07.717 [2024-04-24 05:26:44.732242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.732402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.732431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.717 qpair failed and we were unable to recover it.
00:31:07.717 [2024-04-24 05:26:44.732610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.732740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.732768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.717 qpair failed and we were unable to recover it.
00:31:07.717 [2024-04-24 05:26:44.732922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.733073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.733099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.717 qpair failed and we were unable to recover it.
00:31:07.717 [2024-04-24 05:26:44.733276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.733466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.733495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.717 qpair failed and we were unable to recover it.
00:31:07.717 [2024-04-24 05:26:44.733663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.733788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.733816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.717 qpair failed and we were unable to recover it.
00:31:07.717 [2024-04-24 05:26:44.733968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.734115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.734141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.717 qpair failed and we were unable to recover it.
00:31:07.717 [2024-04-24 05:26:44.734291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.734434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.734459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.717 qpair failed and we were unable to recover it.
00:31:07.717 [2024-04-24 05:26:44.734658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.734828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.734857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.717 qpair failed and we were unable to recover it.
00:31:07.717 [2024-04-24 05:26:44.735013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.735173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.735201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.717 qpair failed and we were unable to recover it.
00:31:07.717 [2024-04-24 05:26:44.735362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.735488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.735513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.717 qpair failed and we were unable to recover it.
00:31:07.717 [2024-04-24 05:26:44.735667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.735850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.735876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.717 qpair failed and we were unable to recover it.
00:31:07.717 [2024-04-24 05:26:44.736003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.736148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.736174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.717 qpair failed and we were unable to recover it.
00:31:07.717 [2024-04-24 05:26:44.736344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.736482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.736510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.717 qpair failed and we were unable to recover it.
00:31:07.717 [2024-04-24 05:26:44.736679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.736810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.736835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.717 qpair failed and we were unable to recover it.
00:31:07.717 [2024-04-24 05:26:44.736974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.737165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.717 [2024-04-24 05:26:44.737194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.717 qpair failed and we were unable to recover it.
00:31:07.718 [2024-04-24 05:26:44.737394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.737550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.737577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.718 qpair failed and we were unable to recover it.
00:31:07.718 [2024-04-24 05:26:44.737701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.737878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.737904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.718 qpair failed and we were unable to recover it.
00:31:07.718 [2024-04-24 05:26:44.738060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.738203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.738229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.718 qpair failed and we were unable to recover it.
00:31:07.718 [2024-04-24 05:26:44.738434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.738574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.738603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.718 qpair failed and we were unable to recover it.
00:31:07.718 [2024-04-24 05:26:44.738775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.738940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.738968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.718 qpair failed and we were unable to recover it.
00:31:07.718 [2024-04-24 05:26:44.739134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.739315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.739340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.718 qpair failed and we were unable to recover it.
00:31:07.718 [2024-04-24 05:26:44.739501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.739647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.739673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.718 qpair failed and we were unable to recover it.
00:31:07.718 [2024-04-24 05:26:44.739798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.739919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.739945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.718 qpair failed and we were unable to recover it.
00:31:07.718 [2024-04-24 05:26:44.740063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.740234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.740264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.718 qpair failed and we were unable to recover it.
00:31:07.718 [2024-04-24 05:26:44.740429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.740570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.740598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.718 qpair failed and we were unable to recover it.
00:31:07.718 [2024-04-24 05:26:44.740748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.740925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.740951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.718 qpair failed and we were unable to recover it.
00:31:07.718 [2024-04-24 05:26:44.741160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.741287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.741313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.718 qpair failed and we were unable to recover it.
00:31:07.718 [2024-04-24 05:26:44.741524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.741651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.741696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.718 qpair failed and we were unable to recover it.
00:31:07.718 [2024-04-24 05:26:44.741812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.741951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.741979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.718 qpair failed and we were unable to recover it.
00:31:07.718 [2024-04-24 05:26:44.742145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.742292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.742317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.718 qpair failed and we were unable to recover it.
00:31:07.718 [2024-04-24 05:26:44.742491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.742665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.742694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.718 qpair failed and we were unable to recover it.
00:31:07.718 [2024-04-24 05:26:44.742864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.742995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.743023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.718 qpair failed and we were unable to recover it.
00:31:07.718 [2024-04-24 05:26:44.743190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.743343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.743371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.718 qpair failed and we were unable to recover it.
00:31:07.718 [2024-04-24 05:26:44.743528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.743701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.743727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.718 qpair failed and we were unable to recover it.
00:31:07.718 [2024-04-24 05:26:44.743872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.744057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.744083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.718 qpair failed and we were unable to recover it.
00:31:07.718 [2024-04-24 05:26:44.744236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.744360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.744385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.718 qpair failed and we were unable to recover it.
00:31:07.718 [2024-04-24 05:26:44.744532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.744678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.744706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.718 qpair failed and we were unable to recover it.
00:31:07.718 [2024-04-24 05:26:44.744850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.744971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.744998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.718 qpair failed and we were unable to recover it.
00:31:07.718 [2024-04-24 05:26:44.745193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.745398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.718 [2024-04-24 05:26:44.745424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.718 qpair failed and we were unable to recover it.
00:31:07.718 [2024-04-24 05:26:44.745620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.718 [2024-04-24 05:26:44.745767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.718 [2024-04-24 05:26:44.745796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.718 qpair failed and we were unable to recover it. 00:31:07.718 [2024-04-24 05:26:44.745991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.718 [2024-04-24 05:26:44.746106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.718 [2024-04-24 05:26:44.746132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.718 qpair failed and we were unable to recover it. 00:31:07.718 [2024-04-24 05:26:44.746257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.718 [2024-04-24 05:26:44.746381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.718 [2024-04-24 05:26:44.746406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.718 qpair failed and we were unable to recover it. 00:31:07.718 [2024-04-24 05:26:44.746589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.718 [2024-04-24 05:26:44.746788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.718 [2024-04-24 05:26:44.746816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.718 qpair failed and we were unable to recover it. 
00:31:07.718 [2024-04-24 05:26:44.746959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.718 [2024-04-24 05:26:44.747119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.718 [2024-04-24 05:26:44.747147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.719 qpair failed and we were unable to recover it. 00:31:07.719 [2024-04-24 05:26:44.747353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.747545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.747573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.719 qpair failed and we were unable to recover it. 00:31:07.719 [2024-04-24 05:26:44.747764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.747921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.747948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.719 qpair failed and we were unable to recover it. 00:31:07.719 [2024-04-24 05:26:44.748093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.748253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.748281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.719 qpair failed and we were unable to recover it. 
00:31:07.719 [2024-04-24 05:26:44.748408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.748574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.748609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.719 qpair failed and we were unable to recover it. 00:31:07.719 [2024-04-24 05:26:44.748794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.748926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.748955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.719 qpair failed and we were unable to recover it. 00:31:07.719 [2024-04-24 05:26:44.749147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.749291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.749317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.719 qpair failed and we were unable to recover it. 00:31:07.719 [2024-04-24 05:26:44.749476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.749666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.749695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.719 qpair failed and we were unable to recover it. 
00:31:07.719 [2024-04-24 05:26:44.749848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.750001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.750027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.719 qpair failed and we were unable to recover it. 00:31:07.719 [2024-04-24 05:26:44.750157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.750353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.750382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.719 qpair failed and we were unable to recover it. 00:31:07.719 [2024-04-24 05:26:44.750575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.750725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.750752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.719 qpair failed and we were unable to recover it. 00:31:07.719 [2024-04-24 05:26:44.750886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.751022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.751052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.719 qpair failed and we were unable to recover it. 
00:31:07.719 [2024-04-24 05:26:44.751191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.751380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.751408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.719 qpair failed and we were unable to recover it. 00:31:07.719 [2024-04-24 05:26:44.751582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.751709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.751736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.719 qpair failed and we were unable to recover it. 00:31:07.719 [2024-04-24 05:26:44.751888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.752012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.752038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.719 qpair failed and we were unable to recover it. 00:31:07.719 [2024-04-24 05:26:44.752213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.752380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.752409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.719 qpair failed and we were unable to recover it. 
00:31:07.719 [2024-04-24 05:26:44.752607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.752728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.752755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.719 qpair failed and we were unable to recover it. 00:31:07.719 [2024-04-24 05:26:44.752903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.753055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.753081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.719 qpair failed and we were unable to recover it. 00:31:07.719 [2024-04-24 05:26:44.753228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.753350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.753375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.719 qpair failed and we were unable to recover it. 00:31:07.719 [2024-04-24 05:26:44.753530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.753678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.753705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.719 qpair failed and we were unable to recover it. 
00:31:07.719 [2024-04-24 05:26:44.753858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.754010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.754036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.719 qpair failed and we were unable to recover it. 00:31:07.719 [2024-04-24 05:26:44.754213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.754396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.754421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.719 qpair failed and we were unable to recover it. 00:31:07.719 [2024-04-24 05:26:44.754540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.754690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.754717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.719 qpair failed and we were unable to recover it. 00:31:07.719 [2024-04-24 05:26:44.754863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.755038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.755063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.719 qpair failed and we were unable to recover it. 
00:31:07.719 [2024-04-24 05:26:44.755214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.755331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.755357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.719 qpair failed and we were unable to recover it. 00:31:07.719 [2024-04-24 05:26:44.755531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.755708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.755736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.719 qpair failed and we were unable to recover it. 00:31:07.719 [2024-04-24 05:26:44.755902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.756053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.756079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.719 qpair failed and we were unable to recover it. 00:31:07.719 [2024-04-24 05:26:44.756194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.756319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.756345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.719 qpair failed and we were unable to recover it. 
00:31:07.719 [2024-04-24 05:26:44.756472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.756626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.756678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.719 qpair failed and we were unable to recover it. 00:31:07.719 [2024-04-24 05:26:44.756830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.757007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.719 [2024-04-24 05:26:44.757036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.720 qpair failed and we were unable to recover it. 00:31:07.720 [2024-04-24 05:26:44.757226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.757422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.757450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.720 qpair failed and we were unable to recover it. 00:31:07.720 [2024-04-24 05:26:44.757620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.757787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.757814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.720 qpair failed and we were unable to recover it. 
00:31:07.720 [2024-04-24 05:26:44.757933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.758053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.758079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.720 qpair failed and we were unable to recover it. 00:31:07.720 [2024-04-24 05:26:44.758226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.758348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.758376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.720 qpair failed and we were unable to recover it. 00:31:07.720 [2024-04-24 05:26:44.758526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.758656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.758683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.720 qpair failed and we were unable to recover it. 00:31:07.720 [2024-04-24 05:26:44.758875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.759035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.759063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.720 qpair failed and we were unable to recover it. 
00:31:07.720 [2024-04-24 05:26:44.759252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.759378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.759406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.720 qpair failed and we were unable to recover it. 00:31:07.720 [2024-04-24 05:26:44.759591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.759790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.759816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.720 qpair failed and we were unable to recover it. 00:31:07.720 [2024-04-24 05:26:44.759937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.760094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.760120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.720 qpair failed and we were unable to recover it. 00:31:07.720 [2024-04-24 05:26:44.760255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.760432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.760460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.720 qpair failed and we were unable to recover it. 
00:31:07.720 [2024-04-24 05:26:44.760620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.760787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.760816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.720 qpair failed and we were unable to recover it. 00:31:07.720 [2024-04-24 05:26:44.760972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.761101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.761129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.720 qpair failed and we were unable to recover it. 00:31:07.720 [2024-04-24 05:26:44.761305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.761473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.761501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.720 qpair failed and we were unable to recover it. 00:31:07.720 [2024-04-24 05:26:44.761645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.761765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.761790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.720 qpair failed and we were unable to recover it. 
00:31:07.720 [2024-04-24 05:26:44.761936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.762083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.762110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.720 qpair failed and we were unable to recover it. 00:31:07.720 [2024-04-24 05:26:44.762237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.762424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.762452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.720 qpair failed and we were unable to recover it. 00:31:07.720 [2024-04-24 05:26:44.762612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.762782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.762807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.720 qpair failed and we were unable to recover it. 00:31:07.720 [2024-04-24 05:26:44.762974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.763103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.763131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.720 qpair failed and we were unable to recover it. 
00:31:07.720 [2024-04-24 05:26:44.763288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.763478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.763519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.720 qpair failed and we were unable to recover it. 00:31:07.720 [2024-04-24 05:26:44.763644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.763788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.763814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.720 qpair failed and we were unable to recover it. 00:31:07.720 [2024-04-24 05:26:44.763952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.764141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.764170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.720 qpair failed and we were unable to recover it. 00:31:07.720 [2024-04-24 05:26:44.764345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.764463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.764489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.720 qpair failed and we were unable to recover it. 
00:31:07.720 [2024-04-24 05:26:44.764682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.764822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.764853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.720 qpair failed and we were unable to recover it. 00:31:07.720 [2024-04-24 05:26:44.764990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.765183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.765211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.720 qpair failed and we were unable to recover it. 00:31:07.720 [2024-04-24 05:26:44.765374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.765549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.765576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.720 qpair failed and we were unable to recover it. 00:31:07.720 [2024-04-24 05:26:44.765709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.765833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.765862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.720 qpair failed and we were unable to recover it. 
00:31:07.720 [2024-04-24 05:26:44.766014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.766148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.766176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.720 qpair failed and we were unable to recover it. 00:31:07.720 [2024-04-24 05:26:44.766338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.766529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.766557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.720 qpair failed and we were unable to recover it. 00:31:07.720 [2024-04-24 05:26:44.766723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.766879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.720 [2024-04-24 05:26:44.766905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.720 qpair failed and we were unable to recover it. 00:31:07.721 [2024-04-24 05:26:44.767032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.721 [2024-04-24 05:26:44.767232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.721 [2024-04-24 05:26:44.767261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.721 qpair failed and we were unable to recover it. 
00:31:07.721 [2024-04-24 05:26:44.767454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.721 [2024-04-24 05:26:44.767621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.721 [2024-04-24 05:26:44.767658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.721 qpair failed and we were unable to recover it. 00:31:07.721 [2024-04-24 05:26:44.767794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.721 [2024-04-24 05:26:44.767923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.721 [2024-04-24 05:26:44.767950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.721 qpair failed and we were unable to recover it. 00:31:07.721 [2024-04-24 05:26:44.768113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.721 [2024-04-24 05:26:44.768308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.721 [2024-04-24 05:26:44.768336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.721 qpair failed and we were unable to recover it. 00:31:07.721 [2024-04-24 05:26:44.768481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.721 [2024-04-24 05:26:44.768637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.721 [2024-04-24 05:26:44.768663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.721 qpair failed and we were unable to recover it. 
00:31:07.724 [2024-04-24 05:26:44.797160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.724 [2024-04-24 05:26:44.797310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.724 [2024-04-24 05:26:44.797335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.724 qpair failed and we were unable to recover it. 00:31:07.724 [2024-04-24 05:26:44.797463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.724 [2024-04-24 05:26:44.797618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.724 [2024-04-24 05:26:44.797657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.724 qpair failed and we were unable to recover it. 00:31:07.724 [2024-04-24 05:26:44.797839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.724 [2024-04-24 05:26:44.798008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.724 [2024-04-24 05:26:44.798036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.724 qpair failed and we were unable to recover it. 00:31:07.724 [2024-04-24 05:26:44.798200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.724 [2024-04-24 05:26:44.798363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.724 [2024-04-24 05:26:44.798391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.724 qpair failed and we were unable to recover it. 
00:31:07.724 [2024-04-24 05:26:44.798588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.724 [2024-04-24 05:26:44.798793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.724 [2024-04-24 05:26:44.798819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.724 qpair failed and we were unable to recover it. 00:31:07.724 [2024-04-24 05:26:44.798952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.724 [2024-04-24 05:26:44.799132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.724 [2024-04-24 05:26:44.799160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.724 qpair failed and we were unable to recover it. 00:31:07.724 [2024-04-24 05:26:44.799285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.724 [2024-04-24 05:26:44.799435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.724 [2024-04-24 05:26:44.799461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.724 qpair failed and we were unable to recover it. 00:31:07.724 [2024-04-24 05:26:44.799589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.724 [2024-04-24 05:26:44.799718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.724 [2024-04-24 05:26:44.799744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.724 qpair failed and we were unable to recover it. 
00:31:07.724 [2024-04-24 05:26:44.799948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.724 [2024-04-24 05:26:44.800123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.724 [2024-04-24 05:26:44.800148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.724 qpair failed and we were unable to recover it. 00:31:07.724 [2024-04-24 05:26:44.800297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.724 [2024-04-24 05:26:44.800425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.724 [2024-04-24 05:26:44.800451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.724 qpair failed and we were unable to recover it. 00:31:07.724 [2024-04-24 05:26:44.800586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.724 [2024-04-24 05:26:44.800750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.724 [2024-04-24 05:26:44.800794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.724 qpair failed and we were unable to recover it. 00:31:07.724 [2024-04-24 05:26:44.800956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.724 [2024-04-24 05:26:44.801075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.724 [2024-04-24 05:26:44.801101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.724 qpair failed and we were unable to recover it. 
00:31:07.724 [2024-04-24 05:26:44.801255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.724 [2024-04-24 05:26:44.801434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.724 [2024-04-24 05:26:44.801477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.724 qpair failed and we were unable to recover it. 00:31:07.724 [2024-04-24 05:26:44.801655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.724 [2024-04-24 05:26:44.801858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.724 [2024-04-24 05:26:44.801884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.724 qpair failed and we were unable to recover it. 00:31:07.724 [2024-04-24 05:26:44.802067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.724 [2024-04-24 05:26:44.802192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.724 [2024-04-24 05:26:44.802218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.724 qpair failed and we were unable to recover it. 00:31:07.724 [2024-04-24 05:26:44.802340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.724 [2024-04-24 05:26:44.802490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.724 [2024-04-24 05:26:44.802519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.724 qpair failed and we were unable to recover it. 
00:31:07.724 [2024-04-24 05:26:44.802720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.724 [2024-04-24 05:26:44.802846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.724 [2024-04-24 05:26:44.802872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.724 qpair failed and we were unable to recover it. 00:31:07.724 [2024-04-24 05:26:44.803021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.724 [2024-04-24 05:26:44.803155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.724 [2024-04-24 05:26:44.803183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.724 qpair failed and we were unable to recover it. 00:31:07.724 [2024-04-24 05:26:44.803380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.724 [2024-04-24 05:26:44.803545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.724 [2024-04-24 05:26:44.803573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.724 qpair failed and we were unable to recover it. 00:31:07.724 [2024-04-24 05:26:44.803718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.724 [2024-04-24 05:26:44.803838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.724 [2024-04-24 05:26:44.803869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.724 qpair failed and we were unable to recover it. 
00:31:07.724 [2024-04-24 05:26:44.804017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.724 [2024-04-24 05:26:44.804137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.724 [2024-04-24 05:26:44.804179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.724 qpair failed and we were unable to recover it. 00:31:07.724 [2024-04-24 05:26:44.804353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.724 [2024-04-24 05:26:44.804510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.724 [2024-04-24 05:26:44.804538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.724 qpair failed and we were unable to recover it. 00:31:07.724 [2024-04-24 05:26:44.804715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.724 [2024-04-24 05:26:44.804841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.804884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.725 qpair failed and we were unable to recover it. 00:31:07.725 [2024-04-24 05:26:44.805056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.805663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.805693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.725 qpair failed and we were unable to recover it. 
00:31:07.725 [2024-04-24 05:26:44.805825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.806008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.806039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.725 qpair failed and we were unable to recover it. 00:31:07.725 [2024-04-24 05:26:44.806221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.806400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.806425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.725 qpair failed and we were unable to recover it. 00:31:07.725 [2024-04-24 05:26:44.806575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.806697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.806723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.725 qpair failed and we were unable to recover it. 00:31:07.725 [2024-04-24 05:26:44.806898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.807058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.807083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.725 qpair failed and we were unable to recover it. 
00:31:07.725 [2024-04-24 05:26:44.807229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.807380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.807404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.725 qpair failed and we were unable to recover it. 00:31:07.725 [2024-04-24 05:26:44.807581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.807734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.807761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.725 qpair failed and we were unable to recover it. 00:31:07.725 [2024-04-24 05:26:44.807948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.808065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.808090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.725 qpair failed and we were unable to recover it. 00:31:07.725 [2024-04-24 05:26:44.808216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.808335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.808360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.725 qpair failed and we were unable to recover it. 
00:31:07.725 [2024-04-24 05:26:44.808509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.808659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.808686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.725 qpair failed and we were unable to recover it. 00:31:07.725 [2024-04-24 05:26:44.808840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.808989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.809015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.725 qpair failed and we were unable to recover it. 00:31:07.725 [2024-04-24 05:26:44.809171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.809335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.809360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.725 qpair failed and we were unable to recover it. 00:31:07.725 [2024-04-24 05:26:44.809492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.809655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.809681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.725 qpair failed and we were unable to recover it. 
00:31:07.725 [2024-04-24 05:26:44.809826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.809965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.809990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.725 qpair failed and we were unable to recover it. 00:31:07.725 [2024-04-24 05:26:44.810115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.810255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.810280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.725 qpair failed and we were unable to recover it. 00:31:07.725 [2024-04-24 05:26:44.810460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.810581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.810605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.725 qpair failed and we were unable to recover it. 00:31:07.725 [2024-04-24 05:26:44.810749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.810878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.810903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.725 qpair failed and we were unable to recover it. 
00:31:07.725 [2024-04-24 05:26:44.811058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.811187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.811212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.725 qpair failed and we were unable to recover it. 00:31:07.725 [2024-04-24 05:26:44.811360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.811502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.811529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.725 qpair failed and we were unable to recover it. 00:31:07.725 [2024-04-24 05:26:44.811708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.811857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.811883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.725 qpair failed and we were unable to recover it. 00:31:07.725 [2024-04-24 05:26:44.812008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.812159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.812183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.725 qpair failed and we were unable to recover it. 
00:31:07.725 [2024-04-24 05:26:44.812301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.812452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.812477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.725 qpair failed and we were unable to recover it. 00:31:07.725 [2024-04-24 05:26:44.812626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.812794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.812819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.725 qpair failed and we were unable to recover it. 00:31:07.725 [2024-04-24 05:26:44.812976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.813095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.813120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.725 qpair failed and we were unable to recover it. 00:31:07.725 [2024-04-24 05:26:44.813267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.813418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.725 [2024-04-24 05:26:44.813442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.726 qpair failed and we were unable to recover it. 
00:31:07.726 [2024-04-24 05:26:44.813553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.813706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.813730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.726 qpair failed and we were unable to recover it. 00:31:07.726 [2024-04-24 05:26:44.813871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.813984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.814008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.726 qpair failed and we were unable to recover it. 00:31:07.726 [2024-04-24 05:26:44.814164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.814339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.814363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.726 qpair failed and we were unable to recover it. 00:31:07.726 [2024-04-24 05:26:44.814477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.814598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.814624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.726 qpair failed and we were unable to recover it. 
00:31:07.726 [2024-04-24 05:26:44.814790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.814921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.814948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.726 qpair failed and we were unable to recover it. 00:31:07.726 [2024-04-24 05:26:44.815100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.815235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.815259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.726 qpair failed and we were unable to recover it. 00:31:07.726 [2024-04-24 05:26:44.815411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.815557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.815581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.726 qpair failed and we were unable to recover it. 00:31:07.726 [2024-04-24 05:26:44.815738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.815867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.815892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.726 qpair failed and we were unable to recover it. 
00:31:07.726 [2024-04-24 05:26:44.816017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.816143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.816169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.726 qpair failed and we were unable to recover it. 00:31:07.726 [2024-04-24 05:26:44.816305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.816471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.816495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.726 qpair failed and we were unable to recover it. 00:31:07.726 [2024-04-24 05:26:44.816635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.816766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.816791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.726 qpair failed and we were unable to recover it. 00:31:07.726 [2024-04-24 05:26:44.816921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.817051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.817075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.726 qpair failed and we were unable to recover it. 
00:31:07.726 [2024-04-24 05:26:44.817194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.817349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.817373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.726 qpair failed and we were unable to recover it. 00:31:07.726 [2024-04-24 05:26:44.817500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.817651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.817677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.726 qpair failed and we were unable to recover it. 00:31:07.726 [2024-04-24 05:26:44.817803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.817935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.817960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.726 qpair failed and we were unable to recover it. 00:31:07.726 [2024-04-24 05:26:44.818090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.818231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.818262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.726 qpair failed and we were unable to recover it. 
00:31:07.726 [2024-04-24 05:26:44.818436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.818593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.818617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.726 qpair failed and we were unable to recover it. 00:31:07.726 [2024-04-24 05:26:44.818793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.818912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.818936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.726 qpair failed and we were unable to recover it. 00:31:07.726 [2024-04-24 05:26:44.819065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.819184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.819210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.726 qpair failed and we were unable to recover it. 00:31:07.726 [2024-04-24 05:26:44.819356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.819474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.819498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.726 qpair failed and we were unable to recover it. 
00:31:07.726 [2024-04-24 05:26:44.819621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.819774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.819798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.726 qpair failed and we were unable to recover it. 00:31:07.726 [2024-04-24 05:26:44.819922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.820050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.820074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.726 qpair failed and we were unable to recover it. 00:31:07.726 [2024-04-24 05:26:44.820265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.820410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.820439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.726 qpair failed and we were unable to recover it. 00:31:07.726 [2024-04-24 05:26:44.820567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.820697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.820723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.726 qpair failed and we were unable to recover it. 
00:31:07.726 [2024-04-24 05:26:44.820850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.820984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.821009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.726 qpair failed and we were unable to recover it. 00:31:07.726 [2024-04-24 05:26:44.821157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.726 [2024-04-24 05:26:44.821290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.821314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.727 qpair failed and we were unable to recover it. 00:31:07.727 [2024-04-24 05:26:44.821448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.821636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.821661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.727 qpair failed and we were unable to recover it. 00:31:07.727 [2024-04-24 05:26:44.821808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.821942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.821968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.727 qpair failed and we were unable to recover it. 
00:31:07.727 [2024-04-24 05:26:44.822115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.822238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.822262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.727 qpair failed and we were unable to recover it. 00:31:07.727 [2024-04-24 05:26:44.822380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.822539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.822563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.727 qpair failed and we were unable to recover it. 00:31:07.727 [2024-04-24 05:26:44.822746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.822862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.822886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.727 qpair failed and we were unable to recover it. 00:31:07.727 [2024-04-24 05:26:44.823028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.823204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.823229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.727 qpair failed and we were unable to recover it. 
00:31:07.727 [2024-04-24 05:26:44.823358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.823534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.823559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.727 qpair failed and we were unable to recover it. 00:31:07.727 [2024-04-24 05:26:44.823710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.823838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.823862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.727 qpair failed and we were unable to recover it. 00:31:07.727 [2024-04-24 05:26:44.824018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.824166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.824190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.727 qpair failed and we were unable to recover it. 00:31:07.727 [2024-04-24 05:26:44.824325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.824466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.824495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.727 qpair failed and we were unable to recover it. 
00:31:07.727 [2024-04-24 05:26:44.824664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.824815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.824840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.727 qpair failed and we were unable to recover it. 00:31:07.727 [2024-04-24 05:26:44.825027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.825197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.825222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.727 qpair failed and we were unable to recover it. 00:31:07.727 [2024-04-24 05:26:44.825349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.825505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.825530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.727 qpair failed and we were unable to recover it. 00:31:07.727 [2024-04-24 05:26:44.825651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.825766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.825791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.727 qpair failed and we were unable to recover it. 
00:31:07.727 [2024-04-24 05:26:44.825928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.826073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.826098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.727 qpair failed and we were unable to recover it. 00:31:07.727 [2024-04-24 05:26:44.826269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.826403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.826428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.727 qpair failed and we were unable to recover it. 00:31:07.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 2021311 Killed "${NVMF_APP[@]}" "$@" 00:31:07.727 [2024-04-24 05:26:44.826574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.826709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.826737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.727 qpair failed and we were unable to recover it. 
00:31:07.727 [2024-04-24 05:26:44.826868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 05:26:44 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2 00:31:07.727 [2024-04-24 05:26:44.826995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.827019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.727 qpair failed and we were unable to recover it. 00:31:07.727 05:26:44 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:31:07.727 05:26:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:31:07.727 [2024-04-24 05:26:44.827169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 05:26:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:31:07.727 [2024-04-24 05:26:44.827298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.827324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.727 qpair failed and we were unable to recover it. 00:31:07.727 05:26:44 -- common/autotest_common.sh@10 -- # set +x 00:31:07.727 [2024-04-24 05:26:44.827476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.827622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.827654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.727 qpair failed and we were unable to recover it. 
00:31:07.727 [2024-04-24 05:26:44.827801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.827918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.827952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.727 qpair failed and we were unable to recover it. 00:31:07.727 [2024-04-24 05:26:44.828137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.828299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.828327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.727 qpair failed and we were unable to recover it. 00:31:07.727 [2024-04-24 05:26:44.828485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.828664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.828692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.727 qpair failed and we were unable to recover it. 00:31:07.727 [2024-04-24 05:26:44.828843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.828965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.828990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.727 qpair failed and we were unable to recover it. 
00:31:07.727 [2024-04-24 05:26:44.829138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.829284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.727 [2024-04-24 05:26:44.829310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.727 qpair failed and we were unable to recover it. 00:31:07.728 [2024-04-24 05:26:44.829468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.728 [2024-04-24 05:26:44.829579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.728 [2024-04-24 05:26:44.829604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.728 qpair failed and we were unable to recover it. 00:31:07.728 [2024-04-24 05:26:44.829658] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f9970 (9): Bad file descriptor 00:31:07.728 [2024-04-24 05:26:44.829852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.728 [2024-04-24 05:26:44.830019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.728 [2024-04-24 05:26:44.830049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.728 qpair failed and we were unable to recover it. 
00:31:07.728 [2024-04-24 05:26:44.830210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.728 [2024-04-24 05:26:44.830342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.728 05:26:44 -- nvmf/common.sh@470 -- # nvmfpid=2021767 00:31:07.728 [2024-04-24 05:26:44.830369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.728 qpair failed and we were unable to recover it. 00:31:07.728 05:26:44 -- nvmf/common.sh@471 -- # waitforlisten 2021767 00:31:07.728 [2024-04-24 05:26:44.830555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.728 05:26:44 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:31:07.728 05:26:44 -- common/autotest_common.sh@817 -- # '[' -z 2021767 ']' 00:31:07.728 [2024-04-24 05:26:44.830736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.728 [2024-04-24 05:26:44.830766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:07.728 qpair failed and we were unable to recover it. 00:31:07.728 05:26:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:07.728 05:26:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:07.728 [2024-04-24 05:26:44.830909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.728 05:26:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:07.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:07.728 [2024-04-24 05:26:44.831048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.728 [2024-04-24 05:26:44.831074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.728 qpair failed and we were unable to recover it. 00:31:07.728 05:26:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:07.728 05:26:44 -- common/autotest_common.sh@10 -- # set +x 00:31:07.728 [2024-04-24 05:26:44.831233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.728 [2024-04-24 05:26:44.831362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.728 [2024-04-24 05:26:44.831386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.728 qpair failed and we were unable to recover it. 00:31:07.728 [2024-04-24 05:26:44.831522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.728 [2024-04-24 05:26:44.831691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.728 [2024-04-24 05:26:44.831717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.728 qpair failed and we were unable to recover it. 00:31:07.728 [2024-04-24 05:26:44.831838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.728 [2024-04-24 05:26:44.831999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.728 [2024-04-24 05:26:44.832024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.728 qpair failed and we were unable to recover it. 
00:31:07.728 [2024-04-24 05:26:44.832149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.728 [2024-04-24 05:26:44.832269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.728 [2024-04-24 05:26:44.832297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.728 qpair failed and we were unable to recover it. 00:31:07.728 [2024-04-24 05:26:44.832428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.728 [2024-04-24 05:26:44.832550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.728 [2024-04-24 05:26:44.832576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.728 qpair failed and we were unable to recover it. 00:31:07.728 [2024-04-24 05:26:44.832701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.728 [2024-04-24 05:26:44.832858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.728 [2024-04-24 05:26:44.832884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.728 qpair failed and we were unable to recover it. 00:31:07.728 [2024-04-24 05:26:44.833022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.728 [2024-04-24 05:26:44.833185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.728 [2024-04-24 05:26:44.833212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.728 qpair failed and we were unable to recover it. 
00:31:07.728 [2024-04-24 05:26:44.833342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.728 [2024-04-24 05:26:44.833477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.728 [2024-04-24 05:26:44.833502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.728 qpair failed and we were unable to recover it. 00:31:07.728 [2024-04-24 05:26:44.833642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.728 [2024-04-24 05:26:44.833765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.728 [2024-04-24 05:26:44.833790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.728 qpair failed and we were unable to recover it. 00:31:07.728 [2024-04-24 05:26:44.833917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.728 [2024-04-24 05:26:44.834107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.728 [2024-04-24 05:26:44.834130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.728 qpair failed and we were unable to recover it. 00:31:07.728 [2024-04-24 05:26:44.834280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.728 [2024-04-24 05:26:44.834419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.728 [2024-04-24 05:26:44.834443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.728 qpair failed and we were unable to recover it. 
00:31:07.728 [2024-04-24 05:26:44.834568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.728 [2024-04-24 05:26:44.834708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.728 [2024-04-24 05:26:44.834736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.728 qpair failed and we were unable to recover it. 00:31:07.728 [2024-04-24 05:26:44.834894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.728 [2024-04-24 05:26:44.835050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.728 [2024-04-24 05:26:44.835083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.728 qpair failed and we were unable to recover it. 00:31:07.729 [2024-04-24 05:26:44.835221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.729 [2024-04-24 05:26:44.835374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.729 [2024-04-24 05:26:44.835400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.729 qpair failed and we were unable to recover it. 00:31:07.729 [2024-04-24 05:26:44.835556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.729 [2024-04-24 05:26:44.835695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.729 [2024-04-24 05:26:44.835722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.729 qpair failed and we were unable to recover it. 
00:31:07.729 [2024-04-24 05:26:44.835842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.729 [2024-04-24 05:26:44.836025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.729 [2024-04-24 05:26:44.836050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.729 qpair failed and we were unable to recover it. 00:31:07.729 [2024-04-24 05:26:44.836218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.729 [2024-04-24 05:26:44.836385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.729 [2024-04-24 05:26:44.836412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.729 qpair failed and we were unable to recover it. 00:31:07.729 [2024-04-24 05:26:44.836572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.729 [2024-04-24 05:26:44.836726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.729 [2024-04-24 05:26:44.836753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.729 qpair failed and we were unable to recover it. 00:31:07.729 [2024-04-24 05:26:44.836880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.729 [2024-04-24 05:26:44.837077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.729 [2024-04-24 05:26:44.837118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.729 qpair failed and we were unable to recover it. 
00:31:07.729 [2024-04-24 05:26:44.837328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.729 [2024-04-24 05:26:44.837467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.729 [2024-04-24 05:26:44.837509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.729 qpair failed and we were unable to recover it. 00:31:07.729 [2024-04-24 05:26:44.837690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.729 [2024-04-24 05:26:44.837817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.729 [2024-04-24 05:26:44.837842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.729 qpair failed and we were unable to recover it. 00:31:07.729 [2024-04-24 05:26:44.838060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.729 [2024-04-24 05:26:44.838235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.729 [2024-04-24 05:26:44.838262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.729 qpair failed and we were unable to recover it. 00:31:07.729 [2024-04-24 05:26:44.838427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.729 [2024-04-24 05:26:44.838574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.729 [2024-04-24 05:26:44.838599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.729 qpair failed and we were unable to recover it. 
[... the same retry cycle — two posix_sock_create connect() failures (errno = 111), an nvme_tcp_qpair_connect_sock error for tqpair=0x18ebe40 (addr=10.0.0.2, port=4420), and "qpair failed and we were unable to recover it." — repeats verbatim through timestamp 2024-04-24 05:26:44.866834 ...]
00:31:07.733 [2024-04-24 05:26:44.867019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.733 [2024-04-24 05:26:44.867185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.733 [2024-04-24 05:26:44.867221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.733 qpair failed and we were unable to recover it. 00:31:07.733 [2024-04-24 05:26:44.867375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.733 [2024-04-24 05:26:44.867572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.733 [2024-04-24 05:26:44.867597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.733 qpair failed and we were unable to recover it. 00:31:07.733 [2024-04-24 05:26:44.867727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.733 [2024-04-24 05:26:44.867852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.733 [2024-04-24 05:26:44.867877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.733 qpair failed and we were unable to recover it. 00:31:07.733 [2024-04-24 05:26:44.868032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.733 [2024-04-24 05:26:44.868205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.733 [2024-04-24 05:26:44.868233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.733 qpair failed and we were unable to recover it. 
00:31:07.733 [2024-04-24 05:26:44.868405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.733 [2024-04-24 05:26:44.868553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.733 [2024-04-24 05:26:44.868579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.733 qpair failed and we were unable to recover it. 00:31:07.733 [2024-04-24 05:26:44.868723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.733 [2024-04-24 05:26:44.868848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.733 [2024-04-24 05:26:44.868873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.733 qpair failed and we were unable to recover it. 00:31:07.733 [2024-04-24 05:26:44.869016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.733 [2024-04-24 05:26:44.869286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.733 [2024-04-24 05:26:44.869310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.733 qpair failed and we were unable to recover it. 00:31:07.733 [2024-04-24 05:26:44.869454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.733 [2024-04-24 05:26:44.869588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.733 [2024-04-24 05:26:44.869639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.733 qpair failed and we were unable to recover it. 
00:31:07.733 [2024-04-24 05:26:44.869768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.733 [2024-04-24 05:26:44.869897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.733 [2024-04-24 05:26:44.869922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.733 qpair failed and we were unable to recover it. 00:31:07.733 [2024-04-24 05:26:44.870116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.733 [2024-04-24 05:26:44.870280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.733 [2024-04-24 05:26:44.870308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.733 qpair failed and we were unable to recover it. 00:31:07.733 [2024-04-24 05:26:44.870462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.734 [2024-04-24 05:26:44.870610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.734 [2024-04-24 05:26:44.870660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.734 qpair failed and we were unable to recover it. 00:31:07.734 [2024-04-24 05:26:44.870791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.734 [2024-04-24 05:26:44.870920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.734 [2024-04-24 05:26:44.870945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.734 qpair failed and we were unable to recover it. 
00:31:07.734 [2024-04-24 05:26:44.871071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.734 [2024-04-24 05:26:44.871244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.734 [2024-04-24 05:26:44.871272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.734 qpair failed and we were unable to recover it. 00:31:07.734 [2024-04-24 05:26:44.871444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.734 [2024-04-24 05:26:44.871591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.734 [2024-04-24 05:26:44.871615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.734 qpair failed and we were unable to recover it. 00:31:07.734 [2024-04-24 05:26:44.871749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.734 [2024-04-24 05:26:44.871873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.734 [2024-04-24 05:26:44.871897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.734 qpair failed and we were unable to recover it. 00:31:07.734 [2024-04-24 05:26:44.872041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.734 [2024-04-24 05:26:44.872211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.734 [2024-04-24 05:26:44.872239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.734 qpair failed and we were unable to recover it. 
00:31:07.734 [2024-04-24 05:26:44.872403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.734 [2024-04-24 05:26:44.872544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.734 [2024-04-24 05:26:44.872573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.734 qpair failed and we were unable to recover it. 00:31:07.734 [2024-04-24 05:26:44.872739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.734 [2024-04-24 05:26:44.872889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.734 [2024-04-24 05:26:44.872916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.734 qpair failed and we were unable to recover it. 00:31:07.734 [2024-04-24 05:26:44.873044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.734 [2024-04-24 05:26:44.873217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.734 [2024-04-24 05:26:44.873243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.734 qpair failed and we were unable to recover it. 00:31:07.734 [2024-04-24 05:26:44.873440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.734 [2024-04-24 05:26:44.873641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.734 [2024-04-24 05:26:44.873686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.734 qpair failed and we were unable to recover it. 
00:31:07.734 [2024-04-24 05:26:44.873822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.734 [2024-04-24 05:26:44.873948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.734 [2024-04-24 05:26:44.873978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.734 qpair failed and we were unable to recover it. 00:31:07.734 [2024-04-24 05:26:44.874139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.734 [2024-04-24 05:26:44.874276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.734 [2024-04-24 05:26:44.874305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.734 qpair failed and we were unable to recover it. 00:31:07.734 [2024-04-24 05:26:44.874481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.734 [2024-04-24 05:26:44.874662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.734 [2024-04-24 05:26:44.874708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.734 qpair failed and we were unable to recover it. 00:31:07.734 [2024-04-24 05:26:44.874840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.734 [2024-04-24 05:26:44.875000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.734 [2024-04-24 05:26:44.875025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.734 qpair failed and we were unable to recover it. 
00:31:07.734 [2024-04-24 05:26:44.875227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.734 [2024-04-24 05:26:44.875413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.734 [2024-04-24 05:26:44.875457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.734 qpair failed and we were unable to recover it. 00:31:07.734 [2024-04-24 05:26:44.875611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.734 [2024-04-24 05:26:44.875805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.734 [2024-04-24 05:26:44.875833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.734 qpair failed and we were unable to recover it. 00:31:07.734 [2024-04-24 05:26:44.875889] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:31:07.734 [2024-04-24 05:26:44.875959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.734 [2024-04-24 05:26:44.875972] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:07.734 [2024-04-24 05:26:44.876136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.734 [2024-04-24 05:26:44.876161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.734 qpair failed and we were unable to recover it. 
00:31:07.734 [2024-04-24 05:26:44.876337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.734 [2024-04-24 05:26:44.876519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.734 [2024-04-24 05:26:44.876544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.734 qpair failed and we were unable to recover it. 00:31:07.734 [2024-04-24 05:26:44.876711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.734 [2024-04-24 05:26:44.876871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.734 [2024-04-24 05:26:44.876897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.734 qpair failed and we were unable to recover it. 00:31:07.734 [2024-04-24 05:26:44.877049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.734 [2024-04-24 05:26:44.877241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.734 [2024-04-24 05:26:44.877270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.734 qpair failed and we were unable to recover it. 00:31:07.734 [2024-04-24 05:26:44.877414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.734 [2024-04-24 05:26:44.877553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.734 [2024-04-24 05:26:44.877586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.734 qpair failed and we were unable to recover it. 
00:31:07.734 [2024-04-24 05:26:44.877761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.734 [2024-04-24 05:26:44.877891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.734 [2024-04-24 05:26:44.877919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.734 qpair failed and we were unable to recover it. 00:31:07.734 [2024-04-24 05:26:44.878148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.734 [2024-04-24 05:26:44.878328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.734 [2024-04-24 05:26:44.878353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.734 qpair failed and we were unable to recover it. 00:31:07.734 [2024-04-24 05:26:44.878556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.734 [2024-04-24 05:26:44.878739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.735 [2024-04-24 05:26:44.878768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.735 qpair failed and we were unable to recover it. 00:31:07.735 [2024-04-24 05:26:44.878897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.735 [2024-04-24 05:26:44.879027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.735 [2024-04-24 05:26:44.879053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.735 qpair failed and we were unable to recover it. 
00:31:07.735 [2024-04-24 05:26:44.879211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.735 [2024-04-24 05:26:44.879391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.735 [2024-04-24 05:26:44.879419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.735 qpair failed and we were unable to recover it. 00:31:07.735 [2024-04-24 05:26:44.879577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.735 [2024-04-24 05:26:44.879715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.735 [2024-04-24 05:26:44.879745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.735 qpair failed and we were unable to recover it. 00:31:07.735 [2024-04-24 05:26:44.879869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.735 [2024-04-24 05:26:44.880033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.735 [2024-04-24 05:26:44.880059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.735 qpair failed and we were unable to recover it. 00:31:07.735 [2024-04-24 05:26:44.880186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.735 [2024-04-24 05:26:44.880353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.735 [2024-04-24 05:26:44.880383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.735 qpair failed and we were unable to recover it. 
00:31:07.735 [2024-04-24 05:26:44.880519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.735 [2024-04-24 05:26:44.880683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.735 [2024-04-24 05:26:44.880710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.735 qpair failed and we were unable to recover it. 00:31:07.735 [2024-04-24 05:26:44.880836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.735 [2024-04-24 05:26:44.880998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.735 [2024-04-24 05:26:44.881024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.735 qpair failed and we were unable to recover it. 00:31:07.735 [2024-04-24 05:26:44.881171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.735 [2024-04-24 05:26:44.881326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.735 [2024-04-24 05:26:44.881356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.735 qpair failed and we were unable to recover it. 00:31:07.735 [2024-04-24 05:26:44.881502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.735 [2024-04-24 05:26:44.881639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.735 [2024-04-24 05:26:44.881667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.735 qpair failed and we were unable to recover it. 
00:31:07.735 [2024-04-24 05:26:44.881799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.735 [2024-04-24 05:26:44.881937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.735 [2024-04-24 05:26:44.881963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.735 qpair failed and we were unable to recover it. 00:31:07.735 [2024-04-24 05:26:44.882142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.735 [2024-04-24 05:26:44.882297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.735 [2024-04-24 05:26:44.882328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.735 qpair failed and we were unable to recover it. 00:31:07.735 [2024-04-24 05:26:44.882462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.735 [2024-04-24 05:26:44.882644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.735 [2024-04-24 05:26:44.882675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.735 qpair failed and we were unable to recover it. 00:31:07.735 [2024-04-24 05:26:44.882815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.735 [2024-04-24 05:26:44.882947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.735 [2024-04-24 05:26:44.882974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.735 qpair failed and we were unable to recover it. 
00:31:07.735 [2024-04-24 05:26:44.883100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.735 [2024-04-24 05:26:44.883283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.735 [2024-04-24 05:26:44.883309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.735 qpair failed and we were unable to recover it. 00:31:07.735 [2024-04-24 05:26:44.883445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.735 [2024-04-24 05:26:44.883599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.735 [2024-04-24 05:26:44.883658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.735 qpair failed and we were unable to recover it. 00:31:07.735 [2024-04-24 05:26:44.883817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.735 [2024-04-24 05:26:44.883964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.735 [2024-04-24 05:26:44.883989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.735 qpair failed and we were unable to recover it. 00:31:07.735 [2024-04-24 05:26:44.884170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.735 [2024-04-24 05:26:44.884325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.735 [2024-04-24 05:26:44.884351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.735 qpair failed and we were unable to recover it. 
00:31:07.735 [2024-04-24 05:26:44.884510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.735 [2024-04-24 05:26:44.884674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.735 [2024-04-24 05:26:44.884703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.735 qpair failed and we were unable to recover it. 00:31:07.735 [2024-04-24 05:26:44.884841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.735 [2024-04-24 05:26:44.885002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.735 [2024-04-24 05:26:44.885032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.735 qpair failed and we were unable to recover it. 00:31:07.735 [2024-04-24 05:26:44.885190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.735 [2024-04-24 05:26:44.885340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.735 [2024-04-24 05:26:44.885377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.735 qpair failed and we were unable to recover it. 00:31:07.736 [2024-04-24 05:26:44.885548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.736 [2024-04-24 05:26:44.885678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.736 [2024-04-24 05:26:44.885706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.736 qpair failed and we were unable to recover it. 
00:31:07.736 [2024-04-24 05:26:44.885848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.736 [2024-04-24 05:26:44.885969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.736 [2024-04-24 05:26:44.885995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.736 qpair failed and we were unable to recover it. 00:31:07.736 [2024-04-24 05:26:44.886161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.736 [2024-04-24 05:26:44.886310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.736 [2024-04-24 05:26:44.886336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.736 qpair failed and we were unable to recover it. 00:31:07.736 [2024-04-24 05:26:44.886489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.736 [2024-04-24 05:26:44.886657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.736 [2024-04-24 05:26:44.886684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.736 qpair failed and we were unable to recover it. 00:31:07.736 [2024-04-24 05:26:44.886822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.736 [2024-04-24 05:26:44.886979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.736 [2024-04-24 05:26:44.887007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.736 qpair failed and we were unable to recover it. 
00:31:07.736 [2024-04-24 05:26:44.887201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.736 [2024-04-24 05:26:44.887362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.736 [2024-04-24 05:26:44.887387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.736 qpair failed and we were unable to recover it. 00:31:07.736 [2024-04-24 05:26:44.887562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.736 [2024-04-24 05:26:44.887689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.736 [2024-04-24 05:26:44.887715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.736 qpair failed and we were unable to recover it. 00:31:07.736 [2024-04-24 05:26:44.887850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.736 [2024-04-24 05:26:44.888010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.736 [2024-04-24 05:26:44.888045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.736 qpair failed and we were unable to recover it. 00:31:07.736 [2024-04-24 05:26:44.888217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.736 [2024-04-24 05:26:44.888334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.736 [2024-04-24 05:26:44.888359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.736 qpair failed and we were unable to recover it. 
00:31:07.736 [2024-04-24 05:26:44.888500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.736 [2024-04-24 05:26:44.888657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.736 [2024-04-24 05:26:44.888684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.736 qpair failed and we were unable to recover it. 00:31:07.736 [2024-04-24 05:26:44.888827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.736 [2024-04-24 05:26:44.888959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.736 [2024-04-24 05:26:44.888984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.736 qpair failed and we were unable to recover it. 00:31:07.736 [2024-04-24 05:26:44.889165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.736 [2024-04-24 05:26:44.889319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.736 [2024-04-24 05:26:44.889345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.736 qpair failed and we were unable to recover it. 00:31:07.736 [2024-04-24 05:26:44.889505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.736 [2024-04-24 05:26:44.889642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.736 [2024-04-24 05:26:44.889670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.736 qpair failed and we were unable to recover it. 
00:31:07.736 [2024-04-24 05:26:44.889795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.736 [2024-04-24 05:26:44.889926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.736 [2024-04-24 05:26:44.889952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.736 qpair failed and we were unable to recover it. 00:31:07.736 [2024-04-24 05:26:44.890107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.736 [2024-04-24 05:26:44.890233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.736 [2024-04-24 05:26:44.890260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.736 qpair failed and we were unable to recover it. 00:31:07.736 [2024-04-24 05:26:44.890418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.736 [2024-04-24 05:26:44.890594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.736 [2024-04-24 05:26:44.890620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.736 qpair failed and we were unable to recover it. 00:31:07.736 [2024-04-24 05:26:44.890767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.736 [2024-04-24 05:26:44.890894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.736 [2024-04-24 05:26:44.890922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.736 qpair failed and we were unable to recover it. 
00:31:07.736 [2024-04-24 05:26:44.891086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.736 [2024-04-24 05:26:44.891217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.736 [2024-04-24 05:26:44.891242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.736 qpair failed and we were unable to recover it. 00:31:07.736 [2024-04-24 05:26:44.891402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.736 [2024-04-24 05:26:44.891582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.736 [2024-04-24 05:26:44.891610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.736 qpair failed and we were unable to recover it. 00:31:07.736 [2024-04-24 05:26:44.891771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.736 [2024-04-24 05:26:44.891899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.736 [2024-04-24 05:26:44.891925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.736 qpair failed and we were unable to recover it. 00:31:07.736 [2024-04-24 05:26:44.892055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.736 [2024-04-24 05:26:44.892192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.736 [2024-04-24 05:26:44.892226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.736 qpair failed and we were unable to recover it. 
00:31:07.736 [2024-04-24 05:26:44.892353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.736 [2024-04-24 05:26:44.892533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.736 [2024-04-24 05:26:44.892559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.736 qpair failed and we were unable to recover it. 00:31:07.736 [2024-04-24 05:26:44.892704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.736 [2024-04-24 05:26:44.892830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.736 [2024-04-24 05:26:44.892861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.736 qpair failed and we were unable to recover it. 00:31:07.736 [2024-04-24 05:26:44.893045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.736 [2024-04-24 05:26:44.893232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.737 [2024-04-24 05:26:44.893277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.737 qpair failed and we were unable to recover it. 00:31:07.737 [2024-04-24 05:26:44.893444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.737 [2024-04-24 05:26:44.893614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.737 [2024-04-24 05:26:44.893679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.737 qpair failed and we were unable to recover it. 
00:31:07.737 [2024-04-24 05:26:44.893820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.737 [2024-04-24 05:26:44.893953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.737 [2024-04-24 05:26:44.893980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.737 qpair failed and we were unable to recover it. 00:31:07.737 [2024-04-24 05:26:44.894109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.737 [2024-04-24 05:26:44.894238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.737 [2024-04-24 05:26:44.894263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.737 qpair failed and we were unable to recover it. 00:31:07.737 [2024-04-24 05:26:44.894401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.737 [2024-04-24 05:26:44.894525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.737 [2024-04-24 05:26:44.894552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.737 qpair failed and we were unable to recover it. 00:31:07.737 [2024-04-24 05:26:44.894716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.737 [2024-04-24 05:26:44.894883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.737 [2024-04-24 05:26:44.894929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.737 qpair failed and we were unable to recover it. 
00:31:07.737 [2024-04-24 05:26:44.895081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.737 [2024-04-24 05:26:44.895206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.737 [2024-04-24 05:26:44.895231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.737 qpair failed and we were unable to recover it. 00:31:07.737 [2024-04-24 05:26:44.895380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.737 [2024-04-24 05:26:44.895530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.737 [2024-04-24 05:26:44.895555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.737 qpair failed and we were unable to recover it. 00:31:07.737 [2024-04-24 05:26:44.895689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.737 [2024-04-24 05:26:44.895813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.737 [2024-04-24 05:26:44.895837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.737 qpair failed and we were unable to recover it. 00:31:07.737 [2024-04-24 05:26:44.895996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.737 [2024-04-24 05:26:44.896166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.737 [2024-04-24 05:26:44.896192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.737 qpair failed and we were unable to recover it. 
00:31:07.737 [2024-04-24 05:26:44.896324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.737 [2024-04-24 05:26:44.896452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.737 [2024-04-24 05:26:44.896480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.737 qpair failed and we were unable to recover it. 00:31:07.737 [2024-04-24 05:26:44.896625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.737 [2024-04-24 05:26:44.896755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.737 [2024-04-24 05:26:44.896780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.737 qpair failed and we were unable to recover it. 00:31:07.737 [2024-04-24 05:26:44.896929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.737 [2024-04-24 05:26:44.897134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.737 [2024-04-24 05:26:44.897163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.737 qpair failed and we were unable to recover it. 00:31:07.737 [2024-04-24 05:26:44.897314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.737 [2024-04-24 05:26:44.897465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.737 [2024-04-24 05:26:44.897491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.737 qpair failed and we were unable to recover it. 
00:31:07.737 [2024-04-24 05:26:44.897649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.737 [2024-04-24 05:26:44.897796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.737 [2024-04-24 05:26:44.897829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.737 qpair failed and we were unable to recover it. 00:31:07.737 [2024-04-24 05:26:44.898000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.737 [2024-04-24 05:26:44.898144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.737 [2024-04-24 05:26:44.898169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.737 qpair failed and we were unable to recover it. 00:31:07.737 [2024-04-24 05:26:44.898292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.737 [2024-04-24 05:26:44.898430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.737 [2024-04-24 05:26:44.898461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.738 qpair failed and we were unable to recover it. 00:31:07.738 [2024-04-24 05:26:44.898615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.738 [2024-04-24 05:26:44.898769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.738 [2024-04-24 05:26:44.898805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.738 qpair failed and we were unable to recover it. 
00:31:07.738 [2024-04-24 05:26:44.898995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.738 [2024-04-24 05:26:44.899126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.738 [2024-04-24 05:26:44.899152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.738 qpair failed and we were unable to recover it. 00:31:07.738 [2024-04-24 05:26:44.899308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.738 [2024-04-24 05:26:44.899486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.738 [2024-04-24 05:26:44.899514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.738 qpair failed and we were unable to recover it. 00:31:07.738 [2024-04-24 05:26:44.899694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.738 [2024-04-24 05:26:44.899820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.738 [2024-04-24 05:26:44.899844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.738 qpair failed and we were unable to recover it. 00:31:07.738 [2024-04-24 05:26:44.900041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.738 [2024-04-24 05:26:44.900157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.738 [2024-04-24 05:26:44.900187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.738 qpair failed and we were unable to recover it. 
00:31:07.738 [2024-04-24 05:26:44.900379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.738 [2024-04-24 05:26:44.900531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.738 [2024-04-24 05:26:44.900563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.738 qpair failed and we were unable to recover it. 00:31:07.738 [2024-04-24 05:26:44.900708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.738 [2024-04-24 05:26:44.900835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.738 [2024-04-24 05:26:44.900871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.738 qpair failed and we were unable to recover it. 00:31:07.738 [2024-04-24 05:26:44.901050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.738 [2024-04-24 05:26:44.901242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.738 [2024-04-24 05:26:44.901268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.738 qpair failed and we were unable to recover it. 00:31:07.738 [2024-04-24 05:26:44.901423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.738 [2024-04-24 05:26:44.901574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.738 [2024-04-24 05:26:44.901608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.738 qpair failed and we were unable to recover it. 
00:31:07.738 [2024-04-24 05:26:44.901768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.738 [2024-04-24 05:26:44.901902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.738 [2024-04-24 05:26:44.901947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.738 qpair failed and we were unable to recover it. 00:31:07.738 [2024-04-24 05:26:44.902141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.738 [2024-04-24 05:26:44.902300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.738 [2024-04-24 05:26:44.902328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.738 qpair failed and we were unable to recover it. 00:31:07.738 [2024-04-24 05:26:44.902469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.738 [2024-04-24 05:26:44.902626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.738 [2024-04-24 05:26:44.902670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.738 qpair failed and we were unable to recover it. 00:31:07.738 [2024-04-24 05:26:44.902828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.738 [2024-04-24 05:26:44.902982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.738 [2024-04-24 05:26:44.903008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.738 qpair failed and we were unable to recover it. 
00:31:07.738 [2024-04-24 05:26:44.903181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.738 [2024-04-24 05:26:44.903371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.738 [2024-04-24 05:26:44.903414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.738 qpair failed and we were unable to recover it. 00:31:07.738 [2024-04-24 05:26:44.903556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.738 [2024-04-24 05:26:44.903761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.738 [2024-04-24 05:26:44.903798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.738 qpair failed and we were unable to recover it. 00:31:07.738 [2024-04-24 05:26:44.903957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.738 [2024-04-24 05:26:44.904094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.738 [2024-04-24 05:26:44.904120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.738 qpair failed and we were unable to recover it. 00:31:07.738 [2024-04-24 05:26:44.904279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.738 [2024-04-24 05:26:44.904411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.738 [2024-04-24 05:26:44.904441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.738 qpair failed and we were unable to recover it. 
00:31:07.738 [2024-04-24 05:26:44.904615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.738 [2024-04-24 05:26:44.904793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.738 [2024-04-24 05:26:44.904820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.738 qpair failed and we were unable to recover it. 00:31:07.738 [2024-04-24 05:26:44.904993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.738 [2024-04-24 05:26:44.905120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.738 [2024-04-24 05:26:44.905146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.738 qpair failed and we were unable to recover it. 00:31:07.738 [2024-04-24 05:26:44.905278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.738 [2024-04-24 05:26:44.905449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.738 [2024-04-24 05:26:44.905486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.738 qpair failed and we were unable to recover it. 00:31:07.738 [2024-04-24 05:26:44.905646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.738 [2024-04-24 05:26:44.905798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.738 [2024-04-24 05:26:44.905829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.738 qpair failed and we were unable to recover it. 
00:31:07.738 [2024-04-24 05:26:44.905991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.738 [2024-04-24 05:26:44.906133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.738 [2024-04-24 05:26:44.906169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.738 qpair failed and we were unable to recover it. 00:31:07.738 [2024-04-24 05:26:44.906314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.739 [2024-04-24 05:26:44.906490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.739 [2024-04-24 05:26:44.906515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.739 qpair failed and we were unable to recover it. 00:31:07.739 [2024-04-24 05:26:44.906637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.739 [2024-04-24 05:26:44.906763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.739 [2024-04-24 05:26:44.906787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.739 qpair failed and we were unable to recover it. 00:31:07.739 [2024-04-24 05:26:44.906960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.739 [2024-04-24 05:26:44.907127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.739 [2024-04-24 05:26:44.907164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.739 qpair failed and we were unable to recover it. 
00:31:07.739 [2024-04-24 05:26:44.907305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.739 [2024-04-24 05:26:44.907498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.739 [2024-04-24 05:26:44.907523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.739 qpair failed and we were unable to recover it. 00:31:07.739 [2024-04-24 05:26:44.907715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.739 [2024-04-24 05:26:44.907846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.739 [2024-04-24 05:26:44.907871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.739 qpair failed and we were unable to recover it. 00:31:07.739 [2024-04-24 05:26:44.908028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.739 [2024-04-24 05:26:44.908211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.739 [2024-04-24 05:26:44.908249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.739 qpair failed and we were unable to recover it. 00:31:07.739 [2024-04-24 05:26:44.908411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.739 [2024-04-24 05:26:44.908534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.739 [2024-04-24 05:26:44.908561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.739 qpair failed and we were unable to recover it. 
00:31:07.739 [2024-04-24 05:26:44.908698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.739 [2024-04-24 05:26:44.908844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.739 [2024-04-24 05:26:44.908878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.739 qpair failed and we were unable to recover it. 00:31:07.739 [2024-04-24 05:26:44.909073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.739 [2024-04-24 05:26:44.909205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.739 [2024-04-24 05:26:44.909230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.739 qpair failed and we were unable to recover it. 00:31:07.739 [2024-04-24 05:26:44.909401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.739 [2024-04-24 05:26:44.909547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.739 [2024-04-24 05:26:44.909581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.739 qpair failed and we were unable to recover it. 00:31:07.739 [2024-04-24 05:26:44.909734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.739 [2024-04-24 05:26:44.909883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.739 [2024-04-24 05:26:44.909918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.739 qpair failed and we were unable to recover it. 
00:31:07.739 [2024-04-24 05:26:44.910068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.739 [2024-04-24 05:26:44.910195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.739 [2024-04-24 05:26:44.910219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.739 qpair failed and we were unable to recover it. 00:31:07.739 [2024-04-24 05:26:44.910344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.739 [2024-04-24 05:26:44.910503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.739 [2024-04-24 05:26:44.910529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.739 qpair failed and we were unable to recover it. 00:31:07.739 [2024-04-24 05:26:44.910689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.739 [2024-04-24 05:26:44.910838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.739 [2024-04-24 05:26:44.910873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.739 qpair failed and we were unable to recover it. 00:31:07.739 [2024-04-24 05:26:44.911026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.739 [2024-04-24 05:26:44.911187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.739 [2024-04-24 05:26:44.911212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.739 qpair failed and we were unable to recover it. 
00:31:07.739 [2024-04-24 05:26:44.911368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.739 [2024-04-24 05:26:44.911497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.739 [2024-04-24 05:26:44.911533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.739 qpair failed and we were unable to recover it. 00:31:07.739 [2024-04-24 05:26:44.911696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.739 [2024-04-24 05:26:44.911836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.739 [2024-04-24 05:26:44.911863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.739 qpair failed and we were unable to recover it. 00:31:07.739 [2024-04-24 05:26:44.912031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.739 [2024-04-24 05:26:44.912229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.739 [2024-04-24 05:26:44.912262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.739 qpair failed and we were unable to recover it. 00:31:07.739 [2024-04-24 05:26:44.912438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.739 [2024-04-24 05:26:44.912575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.739 [2024-04-24 05:26:44.912602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.739 qpair failed and we were unable to recover it. 
00:31:07.739 [2024-04-24 05:26:44.912745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.739 [2024-04-24 05:26:44.912874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.739 [2024-04-24 05:26:44.912901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.739 qpair failed and we were unable to recover it. 00:31:07.739 [2024-04-24 05:26:44.913073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.739 [2024-04-24 05:26:44.913243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.739 [2024-04-24 05:26:44.913280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.739 qpair failed and we were unable to recover it. 00:31:07.739 [2024-04-24 05:26:44.913437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.739 [2024-04-24 05:26:44.913551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.739 [2024-04-24 05:26:44.913575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.739 qpair failed and we were unable to recover it. 00:31:07.739 [2024-04-24 05:26:44.913713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.739 [2024-04-24 05:26:44.913843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.739 [2024-04-24 05:26:44.913867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.739 qpair failed and we were unable to recover it. 
00:31:07.739 [2024-04-24 05:26:44.914023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.739 [2024-04-24 05:26:44.914171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.740 [2024-04-24 05:26:44.914195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.740 qpair failed and we were unable to recover it. 00:31:07.740 [2024-04-24 05:26:44.914321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.740 [2024-04-24 05:26:44.914508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.740 [2024-04-24 05:26:44.914543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.740 qpair failed and we were unable to recover it. 00:31:07.740 EAL: No free 2048 kB hugepages reported on node 1 00:31:07.740 [2024-04-24 05:26:44.914693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.740 [2024-04-24 05:26:44.914839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.740 [2024-04-24 05:26:44.914874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.740 qpair failed and we were unable to recover it. 00:31:07.740 [2024-04-24 05:26:44.915066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.740 [2024-04-24 05:26:44.915234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.740 [2024-04-24 05:26:44.915267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.740 qpair failed and we were unable to recover it. 
00:31:07.740 [2024-04-24 05:26:44.915464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.740 [2024-04-24 05:26:44.915595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.740 [2024-04-24 05:26:44.915623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.740 qpair failed and we were unable to recover it. 00:31:07.740 [2024-04-24 05:26:44.915779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.740 [2024-04-24 05:26:44.915910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.740 [2024-04-24 05:26:44.915936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.740 qpair failed and we were unable to recover it. 00:31:07.740 [2024-04-24 05:26:44.916077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.740 [2024-04-24 05:26:44.916239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.740 [2024-04-24 05:26:44.916270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.740 qpair failed and we were unable to recover it. 00:31:07.740 [2024-04-24 05:26:44.916429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.740 [2024-04-24 05:26:44.916574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.740 [2024-04-24 05:26:44.916603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.740 qpair failed and we were unable to recover it. 
00:31:07.740 [2024-04-24 05:26:44.916766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.740 [2024-04-24 05:26:44.916896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.740 [2024-04-24 05:26:44.916921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.740 qpair failed and we were unable to recover it. 00:31:07.740 [2024-04-24 05:26:44.917099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.740 [2024-04-24 05:26:44.917278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.740 [2024-04-24 05:26:44.917304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.740 qpair failed and we were unable to recover it. 00:31:07.740 [2024-04-24 05:26:44.917468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.740 [2024-04-24 05:26:44.917594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.740 [2024-04-24 05:26:44.917623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.740 qpair failed and we were unable to recover it. 00:31:07.740 [2024-04-24 05:26:44.917774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.740 [2024-04-24 05:26:44.917940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.740 [2024-04-24 05:26:44.917967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.740 qpair failed and we were unable to recover it. 
00:31:07.740 [2024-04-24 05:26:44.918138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.740 [2024-04-24 05:26:44.918268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.740 [2024-04-24 05:26:44.918295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.740 qpair failed and we were unable to recover it. 00:31:07.740 [2024-04-24 05:26:44.918454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.740 [2024-04-24 05:26:44.918585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.740 [2024-04-24 05:26:44.918611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.740 qpair failed and we were unable to recover it. 00:31:07.740 [2024-04-24 05:26:44.918750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.740 [2024-04-24 05:26:44.918876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.740 [2024-04-24 05:26:44.918902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.740 [2024-04-24 05:26:44.918899] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:07.740 qpair failed and we were unable to recover it. 
00:31:07.740 [2024-04-24 05:26:44.919082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.740 [2024-04-24 05:26:44.919201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.740 [2024-04-24 05:26:44.919229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.740 qpair failed and we were unable to recover it. 00:31:07.740 [2024-04-24 05:26:44.919359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.740 [2024-04-24 05:26:44.919505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.740 [2024-04-24 05:26:44.919533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.740 qpair failed and we were unable to recover it. 00:31:07.740 [2024-04-24 05:26:44.919696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.740 [2024-04-24 05:26:44.919825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.740 [2024-04-24 05:26:44.919854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.740 qpair failed and we were unable to recover it. 00:31:07.740 [2024-04-24 05:26:44.920006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.740 [2024-04-24 05:26:44.920188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.740 [2024-04-24 05:26:44.920214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.740 qpair failed and we were unable to recover it. 
00:31:07.740 [2024-04-24 05:26:44.920353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.740 [2024-04-24 05:26:44.920496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.740 [2024-04-24 05:26:44.920524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.740 qpair failed and we were unable to recover it. 00:31:07.740 [2024-04-24 05:26:44.920675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.740 [2024-04-24 05:26:44.920800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.741 [2024-04-24 05:26:44.920829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.741 qpair failed and we were unable to recover it. 00:31:07.741 [2024-04-24 05:26:44.921026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.741 [2024-04-24 05:26:44.921208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.741 [2024-04-24 05:26:44.921236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.741 qpair failed and we were unable to recover it. 00:31:07.741 [2024-04-24 05:26:44.921365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.741 [2024-04-24 05:26:44.921497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.741 [2024-04-24 05:26:44.921523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.741 qpair failed and we were unable to recover it. 
00:31:07.741 [2024-04-24 05:26:44.921670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.741 [2024-04-24 05:26:44.921805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.741 [2024-04-24 05:26:44.921831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.741 qpair failed and we were unable to recover it. 00:31:07.741 [2024-04-24 05:26:44.921992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.741 [2024-04-24 05:26:44.922142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.741 [2024-04-24 05:26:44.922170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.741 qpair failed and we were unable to recover it. 00:31:07.741 [2024-04-24 05:26:44.922361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.741 [2024-04-24 05:26:44.922483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.741 [2024-04-24 05:26:44.922512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.741 qpair failed and we were unable to recover it. 00:31:07.741 [2024-04-24 05:26:44.922669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.741 [2024-04-24 05:26:44.922807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.741 [2024-04-24 05:26:44.922833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.741 qpair failed and we were unable to recover it. 
00:31:07.741 [2024-04-24 05:26:44.923002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.741 [2024-04-24 05:26:44.923160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.741 [2024-04-24 05:26:44.923186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.741 qpair failed and we were unable to recover it. 00:31:07.741 [2024-04-24 05:26:44.923348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.741 [2024-04-24 05:26:44.923471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.741 [2024-04-24 05:26:44.923499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.741 qpair failed and we were unable to recover it. 00:31:07.741 [2024-04-24 05:26:44.923639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.741 [2024-04-24 05:26:44.923760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.741 [2024-04-24 05:26:44.923788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.741 qpair failed and we were unable to recover it. 00:31:07.741 [2024-04-24 05:26:44.923940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.741 [2024-04-24 05:26:44.924102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.741 [2024-04-24 05:26:44.924130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.741 qpair failed and we were unable to recover it. 
00:31:07.741 [2024-04-24 05:26:44.924286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.741 [2024-04-24 05:26:44.924423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.741 [2024-04-24 05:26:44.924449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.741 qpair failed and we were unable to recover it. 00:31:07.741 [2024-04-24 05:26:44.924578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.741 [2024-04-24 05:26:44.924735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.741 [2024-04-24 05:26:44.924761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.741 qpair failed and we were unable to recover it. 00:31:07.741 [2024-04-24 05:26:44.924879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.741 [2024-04-24 05:26:44.925059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.741 [2024-04-24 05:26:44.925084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.741 qpair failed and we were unable to recover it. 00:31:07.741 [2024-04-24 05:26:44.925209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.741 [2024-04-24 05:26:44.925389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.741 [2024-04-24 05:26:44.925419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.741 qpair failed and we were unable to recover it. 
00:31:07.741 [2024-04-24 05:26:44.925575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.741 [2024-04-24 05:26:44.925720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.741 [2024-04-24 05:26:44.925746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.741 qpair failed and we were unable to recover it. 00:31:07.741 [2024-04-24 05:26:44.925872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.741 [2024-04-24 05:26:44.926042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.741 [2024-04-24 05:26:44.926071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.741 qpair failed and we were unable to recover it. 00:31:07.741 [2024-04-24 05:26:44.926226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.741 [2024-04-24 05:26:44.926383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.741 [2024-04-24 05:26:44.926411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.741 qpair failed and we were unable to recover it. 00:31:07.741 [2024-04-24 05:26:44.926535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.741 [2024-04-24 05:26:44.926675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.741 [2024-04-24 05:26:44.926702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.741 qpair failed and we were unable to recover it. 
00:31:07.741 [2024-04-24 05:26:44.926833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.741 [2024-04-24 05:26:44.926980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.741 [2024-04-24 05:26:44.927005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.741 qpair failed and we were unable to recover it. 00:31:07.741 [2024-04-24 05:26:44.927168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.741 [2024-04-24 05:26:44.927324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.741 [2024-04-24 05:26:44.927348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.741 qpair failed and we were unable to recover it. 00:31:07.741 [2024-04-24 05:26:44.927477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.741 [2024-04-24 05:26:44.927625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.741 [2024-04-24 05:26:44.927663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.741 qpair failed and we were unable to recover it. 00:31:07.741 [2024-04-24 05:26:44.927799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.741 [2024-04-24 05:26:44.927921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.741 [2024-04-24 05:26:44.927946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.741 qpair failed and we were unable to recover it. 
00:31:07.741 [2024-04-24 05:26:44.928075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.741 [2024-04-24 05:26:44.928229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.741 [2024-04-24 05:26:44.928254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.741 qpair failed and we were unable to recover it. 00:31:07.742 [2024-04-24 05:26:44.928408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.742 [2024-04-24 05:26:44.928566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.742 [2024-04-24 05:26:44.928593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.742 qpair failed and we were unable to recover it. 00:31:07.742 [2024-04-24 05:26:44.928733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.742 [2024-04-24 05:26:44.928856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.742 [2024-04-24 05:26:44.928881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.742 qpair failed and we were unable to recover it. 00:31:07.742 [2024-04-24 05:26:44.929042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.742 [2024-04-24 05:26:44.929191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.742 [2024-04-24 05:26:44.929217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.742 qpair failed and we were unable to recover it. 
00:31:07.742 [2024-04-24 05:26:44.929364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.742 [2024-04-24 05:26:44.929518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.742 [2024-04-24 05:26:44.929545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.742 qpair failed and we were unable to recover it. 00:31:07.742 [2024-04-24 05:26:44.929703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.742 [2024-04-24 05:26:44.929852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.742 [2024-04-24 05:26:44.929882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.742 qpair failed and we were unable to recover it. 00:31:07.742 [2024-04-24 05:26:44.930069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.742 [2024-04-24 05:26:44.930224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.742 [2024-04-24 05:26:44.930250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.742 qpair failed and we were unable to recover it. 00:31:07.742 [2024-04-24 05:26:44.930405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.742 [2024-04-24 05:26:44.930549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.742 [2024-04-24 05:26:44.930575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.742 qpair failed and we were unable to recover it. 
00:31:07.742 [2024-04-24 05:26:44.930736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.742 [2024-04-24 05:26:44.930859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.742 [2024-04-24 05:26:44.930893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.742 qpair failed and we were unable to recover it. 00:31:07.742 [2024-04-24 05:26:44.931043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.742 [2024-04-24 05:26:44.931166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.742 [2024-04-24 05:26:44.931193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.742 qpair failed and we were unable to recover it. 00:31:07.742 [2024-04-24 05:26:44.931369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.742 [2024-04-24 05:26:44.931497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.742 [2024-04-24 05:26:44.931528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.742 qpair failed and we were unable to recover it. 00:31:07.742 [2024-04-24 05:26:44.931708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.742 [2024-04-24 05:26:44.931844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.742 [2024-04-24 05:26:44.931870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.742 qpair failed and we were unable to recover it. 
00:31:07.742 [2024-04-24 05:26:44.932008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.742 [2024-04-24 05:26:44.932138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.742 [2024-04-24 05:26:44.932165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.742 qpair failed and we were unable to recover it. 00:31:07.742 [2024-04-24 05:26:44.932312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.742 [2024-04-24 05:26:44.932457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.742 [2024-04-24 05:26:44.932483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.742 qpair failed and we were unable to recover it. 00:31:07.742 [2024-04-24 05:26:44.932641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.742 [2024-04-24 05:26:44.932794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.742 [2024-04-24 05:26:44.932820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.742 qpair failed and we were unable to recover it. 00:31:07.742 [2024-04-24 05:26:44.932984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.742 [2024-04-24 05:26:44.933136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.742 [2024-04-24 05:26:44.933161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.742 qpair failed and we were unable to recover it. 
00:31:07.742 [2024-04-24 05:26:44.933322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.742 [2024-04-24 05:26:44.933472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.742 [2024-04-24 05:26:44.933497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.742 qpair failed and we were unable to recover it. 00:31:07.742 [2024-04-24 05:26:44.933653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.742 [2024-04-24 05:26:44.933783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.742 [2024-04-24 05:26:44.933809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.742 qpair failed and we were unable to recover it. 00:31:07.742 [2024-04-24 05:26:44.933936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.742 [2024-04-24 05:26:44.934089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.742 [2024-04-24 05:26:44.934120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.742 qpair failed and we were unable to recover it. 00:31:07.742 [2024-04-24 05:26:44.934254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.742 [2024-04-24 05:26:44.934440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.742 [2024-04-24 05:26:44.934466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.742 qpair failed and we were unable to recover it. 
00:31:07.742 [2024-04-24 05:26:44.934637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.742 [2024-04-24 05:26:44.934767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.742 [2024-04-24 05:26:44.934793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.742 qpair failed and we were unable to recover it. 00:31:07.742 [2024-04-24 05:26:44.934941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.742 [2024-04-24 05:26:44.935075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.742 [2024-04-24 05:26:44.935108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.743 qpair failed and we were unable to recover it. 00:31:07.743 [2024-04-24 05:26:44.935259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.743 [2024-04-24 05:26:44.935385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.743 [2024-04-24 05:26:44.935414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.743 qpair failed and we were unable to recover it. 00:31:07.743 [2024-04-24 05:26:44.935573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.743 [2024-04-24 05:26:44.935731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.743 [2024-04-24 05:26:44.935759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:07.743 qpair failed and we were unable to recover it. 
00:31:07.743 [2024-04-24 05:26:44.935883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.743 [2024-04-24 05:26:44.936033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.743 [2024-04-24 05:26:44.936063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.743 qpair failed and we were unable to recover it.
00:31:07.743 [2024-04-24 05:26:44.936227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.743 [2024-04-24 05:26:44.936404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.743 [2024-04-24 05:26:44.936432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.743 qpair failed and we were unable to recover it.
00:31:07.743 [2024-04-24 05:26:44.936564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.743 [2024-04-24 05:26:44.936701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.743 [2024-04-24 05:26:44.936727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.743 qpair failed and we were unable to recover it.
00:31:07.743 [2024-04-24 05:26:44.936883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.743 [2024-04-24 05:26:44.937021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.743 [2024-04-24 05:26:44.937051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.743 qpair failed and we were unable to recover it.
00:31:07.743 [2024-04-24 05:26:44.937228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.743 [2024-04-24 05:26:44.937400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.743 [2024-04-24 05:26:44.937430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.743 qpair failed and we were unable to recover it.
00:31:07.743 [2024-04-24 05:26:44.937563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.743 [2024-04-24 05:26:44.937740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.743 [2024-04-24 05:26:44.937766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.743 qpair failed and we were unable to recover it.
00:31:07.743 [2024-04-24 05:26:44.937934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.743 [2024-04-24 05:26:44.938061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.743 [2024-04-24 05:26:44.938087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.743 qpair failed and we were unable to recover it.
00:31:07.743 [2024-04-24 05:26:44.938274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.743 [2024-04-24 05:26:44.938447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.743 [2024-04-24 05:26:44.938472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.743 qpair failed and we were unable to recover it.
00:31:07.743 [2024-04-24 05:26:44.938626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.743 [2024-04-24 05:26:44.938763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.743 [2024-04-24 05:26:44.938790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.743 qpair failed and we were unable to recover it.
00:31:07.743 [2024-04-24 05:26:44.938939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.743 [2024-04-24 05:26:44.939092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.743 [2024-04-24 05:26:44.939122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.743 qpair failed and we were unable to recover it.
00:31:07.743 [2024-04-24 05:26:44.939301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.743 [2024-04-24 05:26:44.939457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.743 [2024-04-24 05:26:44.939489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.743 qpair failed and we were unable to recover it.
00:31:07.743 [2024-04-24 05:26:44.939696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.743 [2024-04-24 05:26:44.939839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.743 [2024-04-24 05:26:44.939874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.743 qpair failed and we were unable to recover it.
00:31:07.743 [2024-04-24 05:26:44.940074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.743 [2024-04-24 05:26:44.940217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.743 [2024-04-24 05:26:44.940251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.743 qpair failed and we were unable to recover it.
00:31:07.743 [2024-04-24 05:26:44.940400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.743 [2024-04-24 05:26:44.940543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.743 [2024-04-24 05:26:44.940568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.743 qpair failed and we were unable to recover it.
00:31:07.743 [2024-04-24 05:26:44.940714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.743 [2024-04-24 05:26:44.940861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.743 [2024-04-24 05:26:44.940886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.743 qpair failed and we were unable to recover it.
00:31:07.743 [2024-04-24 05:26:44.941043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.743 [2024-04-24 05:26:44.941191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.743 [2024-04-24 05:26:44.941216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.743 qpair failed and we were unable to recover it.
00:31:07.743 [2024-04-24 05:26:44.941340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.743 [2024-04-24 05:26:44.941540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.744 [2024-04-24 05:26:44.941564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.744 qpair failed and we were unable to recover it.
00:31:07.744 [2024-04-24 05:26:44.941692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.744 [2024-04-24 05:26:44.941820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.744 [2024-04-24 05:26:44.941845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.744 qpair failed and we were unable to recover it.
00:31:07.744 [2024-04-24 05:26:44.941969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.744 [2024-04-24 05:26:44.942109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.744 [2024-04-24 05:26:44.942134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.744 qpair failed and we were unable to recover it.
00:31:07.744 [2024-04-24 05:26:44.942252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.744 [2024-04-24 05:26:44.942377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.744 [2024-04-24 05:26:44.942402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.744 qpair failed and we were unable to recover it.
00:31:07.744 [2024-04-24 05:26:44.942553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.744 [2024-04-24 05:26:44.942708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.744 [2024-04-24 05:26:44.942733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.744 qpair failed and we were unable to recover it.
00:31:07.744 [2024-04-24 05:26:44.942859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.744 [2024-04-24 05:26:44.943017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.744 [2024-04-24 05:26:44.943042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.744 qpair failed and we were unable to recover it.
00:31:07.744 [2024-04-24 05:26:44.943184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.744 [2024-04-24 05:26:44.943319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.744 [2024-04-24 05:26:44.943344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.744 qpair failed and we were unable to recover it.
00:31:07.744 [2024-04-24 05:26:44.943494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.744 [2024-04-24 05:26:44.943670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.744 [2024-04-24 05:26:44.943696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.744 qpair failed and we were unable to recover it.
00:31:07.744 [2024-04-24 05:26:44.943833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.744 [2024-04-24 05:26:44.944005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.744 [2024-04-24 05:26:44.944030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.744 qpair failed and we were unable to recover it.
00:31:07.744 [2024-04-24 05:26:44.944170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.744 [2024-04-24 05:26:44.944326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.744 [2024-04-24 05:26:44.944352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.744 qpair failed and we were unable to recover it.
00:31:07.744 [2024-04-24 05:26:44.944505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.744 [2024-04-24 05:26:44.944655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.744 [2024-04-24 05:26:44.944681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.744 qpair failed and we were unable to recover it.
00:31:07.744 [2024-04-24 05:26:44.944847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.744 [2024-04-24 05:26:44.945000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.744 [2024-04-24 05:26:44.945024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.744 qpair failed and we were unable to recover it.
00:31:07.744 [2024-04-24 05:26:44.945155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.744 [2024-04-24 05:26:44.945338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.744 [2024-04-24 05:26:44.945362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.744 qpair failed and we were unable to recover it.
00:31:07.744 [2024-04-24 05:26:44.945510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.744 [2024-04-24 05:26:44.945666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.744 [2024-04-24 05:26:44.945693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.744 qpair failed and we were unable to recover it.
00:31:07.744 [2024-04-24 05:26:44.945843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.744 [2024-04-24 05:26:44.945991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.744 [2024-04-24 05:26:44.946016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.744 qpair failed and we were unable to recover it.
00:31:07.744 [2024-04-24 05:26:44.946132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.744 [2024-04-24 05:26:44.946270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.744 [2024-04-24 05:26:44.946295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.744 qpair failed and we were unable to recover it.
00:31:07.744 [2024-04-24 05:26:44.946419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.744 [2024-04-24 05:26:44.946567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.744 [2024-04-24 05:26:44.946592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.744 qpair failed and we were unable to recover it.
00:31:07.744 [2024-04-24 05:26:44.946757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.744 [2024-04-24 05:26:44.946885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.744 [2024-04-24 05:26:44.946911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.744 qpair failed and we were unable to recover it.
00:31:07.744 [2024-04-24 05:26:44.947037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.744 [2024-04-24 05:26:44.947181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.744 [2024-04-24 05:26:44.947216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.744 qpair failed and we were unable to recover it.
00:31:07.744 [2024-04-24 05:26:44.947417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.744 [2024-04-24 05:26:44.947545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.744 [2024-04-24 05:26:44.947576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.744 qpair failed and we were unable to recover it.
00:31:07.744 [2024-04-24 05:26:44.947747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.744 [2024-04-24 05:26:44.947867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.744 [2024-04-24 05:26:44.947892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.744 qpair failed and we were unable to recover it.
00:31:07.744 [2024-04-24 05:26:44.948040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.745 [2024-04-24 05:26:44.948188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.745 [2024-04-24 05:26:44.948212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.745 qpair failed and we were unable to recover it.
00:31:07.745 [2024-04-24 05:26:44.948385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.745 [2024-04-24 05:26:44.948533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.745 [2024-04-24 05:26:44.948558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.745 qpair failed and we were unable to recover it.
00:31:07.745 [2024-04-24 05:26:44.948683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.745 [2024-04-24 05:26:44.948836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.745 [2024-04-24 05:26:44.948861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.745 qpair failed and we were unable to recover it.
00:31:07.745 [2024-04-24 05:26:44.949000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.745 [2024-04-24 05:26:44.949139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.745 [2024-04-24 05:26:44.949175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.745 qpair failed and we were unable to recover it.
00:31:07.745 [2024-04-24 05:26:44.949349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.745 [2024-04-24 05:26:44.949526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.745 [2024-04-24 05:26:44.949551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.745 qpair failed and we were unable to recover it.
00:31:07.745 [2024-04-24 05:26:44.949702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.745 [2024-04-24 05:26:44.949841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.745 [2024-04-24 05:26:44.949866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.745 qpair failed and we were unable to recover it.
00:31:07.745 [2024-04-24 05:26:44.950038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.745 [2024-04-24 05:26:44.950170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.745 [2024-04-24 05:26:44.950195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.745 qpair failed and we were unable to recover it.
00:31:07.745 [2024-04-24 05:26:44.950318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.745 [2024-04-24 05:26:44.950463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.745 [2024-04-24 05:26:44.950489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.745 qpair failed and we were unable to recover it.
00:31:07.745 [2024-04-24 05:26:44.950642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.745 [2024-04-24 05:26:44.950776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.745 [2024-04-24 05:26:44.950801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.745 qpair failed and we were unable to recover it.
00:31:07.745 [2024-04-24 05:26:44.950939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.745 [2024-04-24 05:26:44.951085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.745 [2024-04-24 05:26:44.951118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:07.745 qpair failed and we were unable to recover it.
00:31:08.022 [2024-04-24 05:26:44.951250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.022 [2024-04-24 05:26:44.951403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.022 [2024-04-24 05:26:44.951428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.022 qpair failed and we were unable to recover it.
00:31:08.022 [2024-04-24 05:26:44.951581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.022 [2024-04-24 05:26:44.951732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.022 [2024-04-24 05:26:44.951758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.022 qpair failed and we were unable to recover it.
00:31:08.022 [2024-04-24 05:26:44.951908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.022 [2024-04-24 05:26:44.952023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.022 [2024-04-24 05:26:44.952047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.022 qpair failed and we were unable to recover it.
00:31:08.022 [2024-04-24 05:26:44.952203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.022 [2024-04-24 05:26:44.952326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.022 [2024-04-24 05:26:44.952351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.022 qpair failed and we were unable to recover it.
00:31:08.022 [2024-04-24 05:26:44.952476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.022 [2024-04-24 05:26:44.952601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.022 [2024-04-24 05:26:44.952626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.022 qpair failed and we were unable to recover it.
00:31:08.022 [2024-04-24 05:26:44.952754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.022 [2024-04-24 05:26:44.952879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.022 [2024-04-24 05:26:44.952906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.022 qpair failed and we were unable to recover it.
00:31:08.022 [2024-04-24 05:26:44.953066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.022 [2024-04-24 05:26:44.953199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.022 [2024-04-24 05:26:44.953223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.022 qpair failed and we were unable to recover it.
00:31:08.022 [2024-04-24 05:26:44.953353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.022 [2024-04-24 05:26:44.953487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.022 [2024-04-24 05:26:44.953512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.022 qpair failed and we were unable to recover it.
00:31:08.022 [2024-04-24 05:26:44.953658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.022 [2024-04-24 05:26:44.953805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.022 [2024-04-24 05:26:44.953830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.022 qpair failed and we were unable to recover it.
00:31:08.022 [2024-04-24 05:26:44.953961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.022 [2024-04-24 05:26:44.954103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.022 [2024-04-24 05:26:44.954128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.022 qpair failed and we were unable to recover it.
00:31:08.022 [2024-04-24 05:26:44.954251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.022 [2024-04-24 05:26:44.954368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.022 [2024-04-24 05:26:44.954393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.022 qpair failed and we were unable to recover it.
00:31:08.022 [2024-04-24 05:26:44.954548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.022 [2024-04-24 05:26:44.954672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.022 [2024-04-24 05:26:44.954698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.022 qpair failed and we were unable to recover it.
00:31:08.022 [2024-04-24 05:26:44.954828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.022 [2024-04-24 05:26:44.954948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.022 [2024-04-24 05:26:44.954972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.022 qpair failed and we were unable to recover it.
00:31:08.022 [2024-04-24 05:26:44.955125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.022 [2024-04-24 05:26:44.955277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.022 [2024-04-24 05:26:44.955302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.022 qpair failed and we were unable to recover it.
00:31:08.022 [2024-04-24 05:26:44.955426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.022 [2024-04-24 05:26:44.955546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.022 [2024-04-24 05:26:44.955571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.022 qpair failed and we were unable to recover it.
00:31:08.022 [2024-04-24 05:26:44.955701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.022 [2024-04-24 05:26:44.955821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.022 [2024-04-24 05:26:44.955846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.022 qpair failed and we were unable to recover it.
00:31:08.022 [2024-04-24 05:26:44.955965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.022 [2024-04-24 05:26:44.956089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.022 [2024-04-24 05:26:44.956113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.022 qpair failed and we were unable to recover it.
00:31:08.022 [2024-04-24 05:26:44.956265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.022 [2024-04-24 05:26:44.956384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.022 [2024-04-24 05:26:44.956409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.022 qpair failed and we were unable to recover it.
00:31:08.022 [2024-04-24 05:26:44.956537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.022 [2024-04-24 05:26:44.956649] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:31:08.022 [2024-04-24 05:26:44.956669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.022 [2024-04-24 05:26:44.956693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.022 qpair failed and we were unable to recover it.
00:31:08.023 [2024-04-24 05:26:44.956822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.956968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.956993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.023 qpair failed and we were unable to recover it.
00:31:08.023 [2024-04-24 05:26:44.957168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.957289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.957314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.023 qpair failed and we were unable to recover it.
00:31:08.023 [2024-04-24 05:26:44.957459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.957582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.957607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.023 qpair failed and we were unable to recover it.
00:31:08.023 [2024-04-24 05:26:44.957785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.957962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.957997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:08.023 qpair failed and we were unable to recover it.
00:31:08.023 [2024-04-24 05:26:44.958137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.958272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.958303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:08.023 qpair failed and we were unable to recover it.
00:31:08.023 [2024-04-24 05:26:44.958441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.958600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.958634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:08.023 qpair failed and we were unable to recover it.
00:31:08.023 [2024-04-24 05:26:44.958797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.958981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.959008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:08.023 qpair failed and we were unable to recover it.
00:31:08.023 [2024-04-24 05:26:44.959130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.959264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.959292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:08.023 qpair failed and we were unable to recover it.
00:31:08.023 [2024-04-24 05:26:44.959426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.959549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.959582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d54000b90 with addr=10.0.0.2, port=4420
00:31:08.023 qpair failed and we were unable to recover it.
00:31:08.023 [2024-04-24 05:26:44.959721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.959862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.959887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.023 qpair failed and we were unable to recover it.
00:31:08.023 [2024-04-24 05:26:44.960020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.960140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.960166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.023 qpair failed and we were unable to recover it.
00:31:08.023 [2024-04-24 05:26:44.960301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.960453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.960479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.023 qpair failed and we were unable to recover it.
00:31:08.023 [2024-04-24 05:26:44.960644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.960770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.960795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.023 qpair failed and we were unable to recover it.
00:31:08.023 [2024-04-24 05:26:44.960948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.961070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.961096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.023 qpair failed and we were unable to recover it.
00:31:08.023 [2024-04-24 05:26:44.961226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.961375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.961400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.023 qpair failed and we were unable to recover it.
00:31:08.023 [2024-04-24 05:26:44.961521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.961660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.961686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.023 qpair failed and we were unable to recover it.
00:31:08.023 [2024-04-24 05:26:44.961841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.961991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.962015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.023 qpair failed and we were unable to recover it.
00:31:08.023 [2024-04-24 05:26:44.962149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.962278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.962303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.023 qpair failed and we were unable to recover it.
00:31:08.023 [2024-04-24 05:26:44.962450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.962584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.962609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.023 qpair failed and we were unable to recover it.
00:31:08.023 [2024-04-24 05:26:44.962742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.962866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.962891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.023 qpair failed and we were unable to recover it.
00:31:08.023 [2024-04-24 05:26:44.963073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.963204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.963229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.023 qpair failed and we were unable to recover it.
00:31:08.023 [2024-04-24 05:26:44.963377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.963518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.963543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.023 qpair failed and we were unable to recover it.
00:31:08.023 [2024-04-24 05:26:44.963735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.963888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.963913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.023 qpair failed and we were unable to recover it.
00:31:08.023 [2024-04-24 05:26:44.964032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.964154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.964179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.023 qpair failed and we were unable to recover it.
00:31:08.023 [2024-04-24 05:26:44.964371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.964518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.964550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.023 qpair failed and we were unable to recover it.
00:31:08.023 [2024-04-24 05:26:44.964731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.964853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.964878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.023 qpair failed and we were unable to recover it.
00:31:08.023 [2024-04-24 05:26:44.965012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.965162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.965188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.023 qpair failed and we were unable to recover it.
00:31:08.023 [2024-04-24 05:26:44.965341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.965466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.023 [2024-04-24 05:26:44.965491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.023 qpair failed and we were unable to recover it.
00:31:08.023 [2024-04-24 05:26:44.965711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.965852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.965876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.024 qpair failed and we were unable to recover it.
00:31:08.024 [2024-04-24 05:26:44.966005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.966130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.966155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.024 qpair failed and we were unable to recover it.
00:31:08.024 [2024-04-24 05:26:44.966277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.966395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.966424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.024 qpair failed and we were unable to recover it.
00:31:08.024 [2024-04-24 05:26:44.966661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.966811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.966836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.024 qpair failed and we were unable to recover it.
00:31:08.024 [2024-04-24 05:26:44.966965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.967092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.967117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.024 qpair failed and we were unable to recover it.
00:31:08.024 [2024-04-24 05:26:44.967266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.967422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.967447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.024 qpair failed and we were unable to recover it.
00:31:08.024 [2024-04-24 05:26:44.967565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.967714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.967740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.024 qpair failed and we were unable to recover it.
00:31:08.024 [2024-04-24 05:26:44.967866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.968011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.968036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.024 qpair failed and we were unable to recover it.
00:31:08.024 [2024-04-24 05:26:44.968159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.968282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.968307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.024 qpair failed and we were unable to recover it.
00:31:08.024 [2024-04-24 05:26:44.968456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.968581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.968606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.024 qpair failed and we were unable to recover it.
00:31:08.024 [2024-04-24 05:26:44.968811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.968956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.968981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.024 qpair failed and we were unable to recover it.
00:31:08.024 [2024-04-24 05:26:44.969108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.969227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.969252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.024 qpair failed and we were unable to recover it.
00:31:08.024 [2024-04-24 05:26:44.969409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.969535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.969564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.024 qpair failed and we were unable to recover it.
00:31:08.024 [2024-04-24 05:26:44.969701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.969824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.969849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.024 qpair failed and we were unable to recover it.
00:31:08.024 [2024-04-24 05:26:44.969995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.970129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.970154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.024 qpair failed and we were unable to recover it.
00:31:08.024 [2024-04-24 05:26:44.970302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.970428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.970453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.024 qpair failed and we were unable to recover it.
00:31:08.024 [2024-04-24 05:26:44.970580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.970706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.970732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.024 qpair failed and we were unable to recover it.
00:31:08.024 [2024-04-24 05:26:44.970857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.971004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.971029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.024 qpair failed and we were unable to recover it.
00:31:08.024 [2024-04-24 05:26:44.971184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.971309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.971333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.024 qpair failed and we were unable to recover it.
00:31:08.024 [2024-04-24 05:26:44.971484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.971642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.971668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.024 qpair failed and we were unable to recover it.
00:31:08.024 [2024-04-24 05:26:44.971823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.971967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.971992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.024 qpair failed and we were unable to recover it.
00:31:08.024 [2024-04-24 05:26:44.972120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.972241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.972266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.024 qpair failed and we were unable to recover it.
00:31:08.024 [2024-04-24 05:26:44.972449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.972571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.972596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.024 qpair failed and we were unable to recover it.
00:31:08.024 [2024-04-24 05:26:44.972737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.972890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.972915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.024 qpair failed and we were unable to recover it.
00:31:08.024 [2024-04-24 05:26:44.973063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.973216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.973241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.024 qpair failed and we were unable to recover it.
00:31:08.024 [2024-04-24 05:26:44.973390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.973512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.973537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.024 qpair failed and we were unable to recover it.
00:31:08.024 [2024-04-24 05:26:44.973687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.973829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.973855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.024 qpair failed and we were unable to recover it.
00:31:08.024 [2024-04-24 05:26:44.974019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.974166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.974191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.024 qpair failed and we were unable to recover it.
00:31:08.024 [2024-04-24 05:26:44.974325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.974494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.024 [2024-04-24 05:26:44.974519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.024 qpair failed and we were unable to recover it.
00:31:08.024 [2024-04-24 05:26:44.974674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.974793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.974819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.025 qpair failed and we were unable to recover it.
00:31:08.025 [2024-04-24 05:26:44.974967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.975090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.975114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.025 qpair failed and we were unable to recover it.
00:31:08.025 [2024-04-24 05:26:44.975300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.975423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.975448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.025 qpair failed and we were unable to recover it.
00:31:08.025 [2024-04-24 05:26:44.975598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.975742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.975767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.025 qpair failed and we were unable to recover it.
00:31:08.025 [2024-04-24 05:26:44.975915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.976058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.976082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.025 qpair failed and we were unable to recover it.
00:31:08.025 [2024-04-24 05:26:44.976213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.976387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.976412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.025 qpair failed and we were unable to recover it.
00:31:08.025 [2024-04-24 05:26:44.976554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.976691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.976718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.025 qpair failed and we were unable to recover it.
00:31:08.025 [2024-04-24 05:26:44.976841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.976968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.976992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.025 qpair failed and we were unable to recover it.
00:31:08.025 [2024-04-24 05:26:44.977143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.977294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.977319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.025 qpair failed and we were unable to recover it.
00:31:08.025 [2024-04-24 05:26:44.977491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.977645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.977671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.025 qpair failed and we were unable to recover it.
00:31:08.025 [2024-04-24 05:26:44.977833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.977955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.977980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.025 qpair failed and we were unable to recover it.
00:31:08.025 [2024-04-24 05:26:44.978139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.978263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.978288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.025 qpair failed and we were unable to recover it.
00:31:08.025 [2024-04-24 05:26:44.978412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.978562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.978588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.025 qpair failed and we were unable to recover it.
00:31:08.025 [2024-04-24 05:26:44.978731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.978855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.978881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.025 qpair failed and we were unable to recover it.
00:31:08.025 [2024-04-24 05:26:44.979002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.979157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.979181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.025 qpair failed and we were unable to recover it.
00:31:08.025 [2024-04-24 05:26:44.979311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.979437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.979462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.025 qpair failed and we were unable to recover it.
00:31:08.025 [2024-04-24 05:26:44.979614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.979763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.979788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.025 qpair failed and we were unable to recover it.
00:31:08.025 [2024-04-24 05:26:44.979938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.980086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.980117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.025 qpair failed and we were unable to recover it.
00:31:08.025 [2024-04-24 05:26:44.980264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.980439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.980463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.025 qpair failed and we were unable to recover it.
00:31:08.025 [2024-04-24 05:26:44.980580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.980741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.980767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.025 qpair failed and we were unable to recover it.
00:31:08.025 [2024-04-24 05:26:44.980921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.981043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.981068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.025 qpair failed and we were unable to recover it.
00:31:08.025 [2024-04-24 05:26:44.981182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.981334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.981358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.025 qpair failed and we were unable to recover it.
00:31:08.025 [2024-04-24 05:26:44.981479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.981592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.981616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.025 qpair failed and we were unable to recover it.
00:31:08.025 [2024-04-24 05:26:44.981780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.981898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.981923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.025 qpair failed and we were unable to recover it.
00:31:08.025 [2024-04-24 05:26:44.982097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.982249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.982274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.025 qpair failed and we were unable to recover it.
00:31:08.025 [2024-04-24 05:26:44.982399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.982547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.982571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.025 qpair failed and we were unable to recover it.
00:31:08.025 [2024-04-24 05:26:44.982751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.982901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.982925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.025 qpair failed and we were unable to recover it.
00:31:08.025 [2024-04-24 05:26:44.983049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.983171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.983195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.025 qpair failed and we were unable to recover it.
00:31:08.025 [2024-04-24 05:26:44.983339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.983489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.025 [2024-04-24 05:26:44.983514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.025 qpair failed and we were unable to recover it.
00:31:08.025 [2024-04-24 05:26:44.983645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.983770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.983796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.026 qpair failed and we were unable to recover it.
00:31:08.026 [2024-04-24 05:26:44.983945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.984093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.984119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.026 qpair failed and we were unable to recover it.
00:31:08.026 [2024-04-24 05:26:44.984239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.984368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.984393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.026 qpair failed and we were unable to recover it.
00:31:08.026 [2024-04-24 05:26:44.984544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.984659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.984685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.026 qpair failed and we were unable to recover it.
00:31:08.026 [2024-04-24 05:26:44.984813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.984931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.984955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.026 qpair failed and we were unable to recover it.
00:31:08.026 [2024-04-24 05:26:44.985086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.985208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.985236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.026 qpair failed and we were unable to recover it.
00:31:08.026 [2024-04-24 05:26:44.985393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.985566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.985591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.026 qpair failed and we were unable to recover it.
00:31:08.026 [2024-04-24 05:26:44.985731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.985851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.985876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.026 qpair failed and we were unable to recover it.
00:31:08.026 [2024-04-24 05:26:44.986000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.986126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.986151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.026 qpair failed and we were unable to recover it.
00:31:08.026 [2024-04-24 05:26:44.986302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.986426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.986451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.026 qpair failed and we were unable to recover it.
00:31:08.026 [2024-04-24 05:26:44.986600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.986765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.986790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.026 qpair failed and we were unable to recover it.
00:31:08.026 [2024-04-24 05:26:44.986936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.987076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.987101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.026 qpair failed and we were unable to recover it.
00:31:08.026 [2024-04-24 05:26:44.987292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.987416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.987440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.026 qpair failed and we were unable to recover it.
00:31:08.026 [2024-04-24 05:26:44.987585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.987713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.987738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.026 qpair failed and we were unable to recover it.
00:31:08.026 [2024-04-24 05:26:44.987864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.988019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.988043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.026 qpair failed and we were unable to recover it.
00:31:08.026 [2024-04-24 05:26:44.988172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.988295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.988319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.026 qpair failed and we were unable to recover it.
00:31:08.026 [2024-04-24 05:26:44.988446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.988594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.988620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.026 qpair failed and we were unable to recover it.
00:31:08.026 [2024-04-24 05:26:44.988776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.988903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.988929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.026 qpair failed and we were unable to recover it.
00:31:08.026 [2024-04-24 05:26:44.989111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.989240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.989264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.026 qpair failed and we were unable to recover it.
00:31:08.026 [2024-04-24 05:26:44.989386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.989536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.989560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.026 qpair failed and we were unable to recover it.
00:31:08.026 [2024-04-24 05:26:44.989688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.989809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.989834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.026 qpair failed and we were unable to recover it.
00:31:08.026 [2024-04-24 05:26:44.989989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.990137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.990162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.026 qpair failed and we were unable to recover it.
00:31:08.026 [2024-04-24 05:26:44.990304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.990539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.990564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.026 qpair failed and we were unable to recover it.
00:31:08.026 [2024-04-24 05:26:44.990698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.990851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.026 [2024-04-24 05:26:44.990876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.027 qpair failed and we were unable to recover it.
00:31:08.027 [2024-04-24 05:26:44.991030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.991156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.991181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.027 qpair failed and we were unable to recover it.
00:31:08.027 [2024-04-24 05:26:44.991329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.991446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.991471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.027 qpair failed and we were unable to recover it.
00:31:08.027 [2024-04-24 05:26:44.991623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.991754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.991779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.027 qpair failed and we were unable to recover it.
00:31:08.027 [2024-04-24 05:26:44.991954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.992072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.992097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.027 qpair failed and we were unable to recover it.
00:31:08.027 [2024-04-24 05:26:44.992245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.992364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.992390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.027 qpair failed and we were unable to recover it.
00:31:08.027 [2024-04-24 05:26:44.992518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.992682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.992708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.027 qpair failed and we were unable to recover it.
00:31:08.027 [2024-04-24 05:26:44.992860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.992988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.993013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.027 qpair failed and we were unable to recover it.
00:31:08.027 [2024-04-24 05:26:44.993133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.993283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.993308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.027 qpair failed and we were unable to recover it.
00:31:08.027 [2024-04-24 05:26:44.993435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.993576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.993601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.027 qpair failed and we were unable to recover it.
00:31:08.027 [2024-04-24 05:26:44.993772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.993894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.993919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.027 qpair failed and we were unable to recover it.
00:31:08.027 [2024-04-24 05:26:44.994038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.994188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.994213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.027 qpair failed and we were unable to recover it.
00:31:08.027 [2024-04-24 05:26:44.994379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.994512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.994536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.027 qpair failed and we were unable to recover it.
00:31:08.027 [2024-04-24 05:26:44.994697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.994852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.994877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.027 qpair failed and we were unable to recover it.
00:31:08.027 [2024-04-24 05:26:44.995002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.995176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.995201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.027 qpair failed and we were unable to recover it.
00:31:08.027 [2024-04-24 05:26:44.995321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.995436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.995461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.027 qpair failed and we were unable to recover it.
00:31:08.027 [2024-04-24 05:26:44.995609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.995750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.995775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.027 qpair failed and we were unable to recover it.
00:31:08.027 [2024-04-24 05:26:44.995923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.996070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.996094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.027 qpair failed and we were unable to recover it.
00:31:08.027 [2024-04-24 05:26:44.996220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.996393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.996418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.027 qpair failed and we were unable to recover it.
00:31:08.027 [2024-04-24 05:26:44.996571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.996720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.996746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.027 qpair failed and we were unable to recover it.
00:31:08.027 [2024-04-24 05:26:44.996895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.997012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.997037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.027 qpair failed and we were unable to recover it.
00:31:08.027 [2024-04-24 05:26:44.997180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.997301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.997327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.027 qpair failed and we were unable to recover it.
00:31:08.027 [2024-04-24 05:26:44.997499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.997645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.997672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.027 qpair failed and we were unable to recover it.
00:31:08.027 [2024-04-24 05:26:44.997821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.997969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.997994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.027 qpair failed and we were unable to recover it.
00:31:08.027 [2024-04-24 05:26:44.998141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.998267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.998291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.027 qpair failed and we were unable to recover it.
00:31:08.027 [2024-04-24 05:26:44.998411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.998558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.998583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.027 qpair failed and we were unable to recover it.
00:31:08.027 [2024-04-24 05:26:44.998745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.998870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.998895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.027 qpair failed and we were unable to recover it.
00:31:08.027 [2024-04-24 05:26:44.999046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.999159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.999184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.027 qpair failed and we were unable to recover it.
00:31:08.027 [2024-04-24 05:26:44.999307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.999473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.999498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.027 qpair failed and we were unable to recover it.
00:31:08.027 [2024-04-24 05:26:44.999674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.999796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.027 [2024-04-24 05:26:44.999821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.028 qpair failed and we were unable to recover it.
00:31:08.028 [2024-04-24 05:26:44.999938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.028 [2024-04-24 05:26:45.000126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.028 [2024-04-24 05:26:45.000151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.028 qpair failed and we were unable to recover it.
00:31:08.028 [2024-04-24 05:26:45.000302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.028 [2024-04-24 05:26:45.000437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.028 [2024-04-24 05:26:45.000462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.028 qpair failed and we were unable to recover it.
00:31:08.028 [2024-04-24 05:26:45.000611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.028 [2024-04-24 05:26:45.000768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.028 [2024-04-24 05:26:45.000795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.028 qpair failed and we were unable to recover it.
00:31:08.028 [2024-04-24 05:26:45.000947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.028 [2024-04-24 05:26:45.001093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.028 [2024-04-24 05:26:45.001132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.028 qpair failed and we were unable to recover it.
00:31:08.028 [2024-04-24 05:26:45.001263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.028 [2024-04-24 05:26:45.001385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.028 [2024-04-24 05:26:45.001410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.028 qpair failed and we were unable to recover it.
00:31:08.028 [2024-04-24 05:26:45.001555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.028 [2024-04-24 05:26:45.001712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.028 [2024-04-24 05:26:45.001738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.028 qpair failed and we were unable to recover it.
00:31:08.028 [2024-04-24 05:26:45.001920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.028 [2024-04-24 05:26:45.002076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.028 [2024-04-24 05:26:45.002116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.028 qpair failed and we were unable to recover it.
00:31:08.028 [2024-04-24 05:26:45.002249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.028 [2024-04-24 05:26:45.002379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.028 [2024-04-24 05:26:45.002409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.028 qpair failed and we were unable to recover it.
00:31:08.028 [2024-04-24 05:26:45.002557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.028 [2024-04-24 05:26:45.002724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.028 [2024-04-24 05:26:45.002752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.028 qpair failed and we were unable to recover it.
00:31:08.028 [2024-04-24 05:26:45.002881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.028 [2024-04-24 05:26:45.003049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.028 [2024-04-24 05:26:45.003076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.028 qpair failed and we were unable to recover it.
00:31:08.028 [2024-04-24 05:26:45.003230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.028 [2024-04-24 05:26:45.003358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.028 [2024-04-24 05:26:45.003383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.028 qpair failed and we were unable to recover it.
00:31:08.028 [2024-04-24 05:26:45.003547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.028 [2024-04-24 05:26:45.003683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.028 [2024-04-24 05:26:45.003710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.028 qpair failed and we were unable to recover it. 00:31:08.028 [2024-04-24 05:26:45.003884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.028 [2024-04-24 05:26:45.004045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.028 [2024-04-24 05:26:45.004072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.028 qpair failed and we were unable to recover it. 00:31:08.028 [2024-04-24 05:26:45.004196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.028 [2024-04-24 05:26:45.004325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.028 [2024-04-24 05:26:45.004357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.028 qpair failed and we were unable to recover it. 00:31:08.028 [2024-04-24 05:26:45.004523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.028 [2024-04-24 05:26:45.004666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.028 [2024-04-24 05:26:45.004696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.028 qpair failed and we were unable to recover it. 
00:31:08.028 [2024-04-24 05:26:45.004860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.028 [2024-04-24 05:26:45.004996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.028 [2024-04-24 05:26:45.005023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.028 qpair failed and we were unable to recover it. 00:31:08.028 [2024-04-24 05:26:45.005150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.028 [2024-04-24 05:26:45.005312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.028 [2024-04-24 05:26:45.005339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.028 qpair failed and we were unable to recover it. 00:31:08.028 [2024-04-24 05:26:45.005521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.028 [2024-04-24 05:26:45.005696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.028 [2024-04-24 05:26:45.005733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.028 qpair failed and we were unable to recover it. 00:31:08.028 [2024-04-24 05:26:45.005941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.028 [2024-04-24 05:26:45.006146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.028 [2024-04-24 05:26:45.006184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.028 qpair failed and we were unable to recover it. 
00:31:08.028 [2024-04-24 05:26:45.006333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.028 [2024-04-24 05:26:45.006489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.028 [2024-04-24 05:26:45.006516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.028 qpair failed and we were unable to recover it. 00:31:08.028 [2024-04-24 05:26:45.006656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.028 [2024-04-24 05:26:45.006785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.028 [2024-04-24 05:26:45.006810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.028 qpair failed and we were unable to recover it. 00:31:08.028 [2024-04-24 05:26:45.006927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.028 [2024-04-24 05:26:45.007101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.028 [2024-04-24 05:26:45.007126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.028 qpair failed and we were unable to recover it. 00:31:08.029 [2024-04-24 05:26:45.007249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.007376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.007402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.029 qpair failed and we were unable to recover it. 
00:31:08.029 [2024-04-24 05:26:45.007563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.007704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.007730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.029 qpair failed and we were unable to recover it. 00:31:08.029 [2024-04-24 05:26:45.007886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.008015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.008040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.029 qpair failed and we were unable to recover it. 00:31:08.029 [2024-04-24 05:26:45.008168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.008302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.008329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.029 qpair failed and we were unable to recover it. 00:31:08.029 [2024-04-24 05:26:45.008486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.008607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.008663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.029 qpair failed and we were unable to recover it. 
00:31:08.029 [2024-04-24 05:26:45.008790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.008941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.008974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.029 qpair failed and we were unable to recover it. 00:31:08.029 [2024-04-24 05:26:45.009156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.009308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.009335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.029 qpair failed and we were unable to recover it. 00:31:08.029 [2024-04-24 05:26:45.009499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.009640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.009667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.029 qpair failed and we were unable to recover it. 00:31:08.029 [2024-04-24 05:26:45.009832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.009956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.009981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.029 qpair failed and we were unable to recover it. 
00:31:08.029 [2024-04-24 05:26:45.010139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.010291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.010317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.029 qpair failed and we were unable to recover it. 00:31:08.029 [2024-04-24 05:26:45.010497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.010636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.010662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.029 qpair failed and we were unable to recover it. 00:31:08.029 [2024-04-24 05:26:45.010817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.010958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.010983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.029 qpair failed and we were unable to recover it. 00:31:08.029 [2024-04-24 05:26:45.011120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.011250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.011275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.029 qpair failed and we were unable to recover it. 
00:31:08.029 [2024-04-24 05:26:45.011400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.011523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.011549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.029 qpair failed and we were unable to recover it. 00:31:08.029 [2024-04-24 05:26:45.011723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.011872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.011897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.029 qpair failed and we were unable to recover it. 00:31:08.029 [2024-04-24 05:26:45.012021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.012135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.012160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.029 qpair failed and we were unable to recover it. 00:31:08.029 [2024-04-24 05:26:45.012289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.012433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.012458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.029 qpair failed and we were unable to recover it. 
00:31:08.029 [2024-04-24 05:26:45.012614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.012752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.012777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.029 qpair failed and we were unable to recover it. 00:31:08.029 [2024-04-24 05:26:45.012939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.013087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.013112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.029 qpair failed and we were unable to recover it. 00:31:08.029 [2024-04-24 05:26:45.013274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.013425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.013450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.029 qpair failed and we were unable to recover it. 00:31:08.029 [2024-04-24 05:26:45.013596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.013762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.013787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.029 qpair failed and we were unable to recover it. 
00:31:08.029 [2024-04-24 05:26:45.013915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.014084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.014109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.029 qpair failed and we were unable to recover it. 00:31:08.029 [2024-04-24 05:26:45.014234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.014363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.014388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.029 qpair failed and we were unable to recover it. 00:31:08.029 [2024-04-24 05:26:45.014536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.014648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.014674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.029 qpair failed and we were unable to recover it. 00:31:08.029 [2024-04-24 05:26:45.014790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.014945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.014970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.029 qpair failed and we were unable to recover it. 
00:31:08.029 [2024-04-24 05:26:45.015148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.015277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.015303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.029 qpair failed and we were unable to recover it. 00:31:08.029 [2024-04-24 05:26:45.015466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.015623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.015659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.029 qpair failed and we were unable to recover it. 00:31:08.029 [2024-04-24 05:26:45.015779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.015931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.015957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.029 qpair failed and we were unable to recover it. 00:31:08.029 [2024-04-24 05:26:45.016099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.016218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.029 [2024-04-24 05:26:45.016243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.029 qpair failed and we were unable to recover it. 
00:31:08.030 [2024-04-24 05:26:45.016399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.016547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.016572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.030 qpair failed and we were unable to recover it. 00:31:08.030 [2024-04-24 05:26:45.016715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.016869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.016893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.030 qpair failed and we were unable to recover it. 00:31:08.030 [2024-04-24 05:26:45.017023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.017179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.017214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.030 qpair failed and we were unable to recover it. 00:31:08.030 [2024-04-24 05:26:45.017399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.017520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.017549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.030 qpair failed and we were unable to recover it. 
00:31:08.030 [2024-04-24 05:26:45.017690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.017817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.017843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.030 qpair failed and we were unable to recover it. 00:31:08.030 [2024-04-24 05:26:45.017995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.018132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.018169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.030 qpair failed and we were unable to recover it. 00:31:08.030 [2024-04-24 05:26:45.018325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.018477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.018504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.030 qpair failed and we were unable to recover it. 00:31:08.030 [2024-04-24 05:26:45.018639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.018790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.018815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.030 qpair failed and we were unable to recover it. 
00:31:08.030 [2024-04-24 05:26:45.018942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.019092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.019116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.030 qpair failed and we were unable to recover it. 00:31:08.030 [2024-04-24 05:26:45.019249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.019374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.019401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.030 qpair failed and we were unable to recover it. 00:31:08.030 [2024-04-24 05:26:45.019547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.019662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.019688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.030 qpair failed and we were unable to recover it. 00:31:08.030 [2024-04-24 05:26:45.019842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.020007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.020032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.030 qpair failed and we were unable to recover it. 
00:31:08.030 [2024-04-24 05:26:45.020183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.020324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.020349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.030 qpair failed and we were unable to recover it. 00:31:08.030 [2024-04-24 05:26:45.020483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.020599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.020649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.030 qpair failed and we were unable to recover it. 00:31:08.030 [2024-04-24 05:26:45.020796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.020945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.020970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.030 qpair failed and we were unable to recover it. 00:31:08.030 [2024-04-24 05:26:45.021098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.021250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.021275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.030 qpair failed and we were unable to recover it. 
00:31:08.030 [2024-04-24 05:26:45.021423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.021552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.021578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.030 qpair failed and we were unable to recover it. 00:31:08.030 [2024-04-24 05:26:45.021730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.021846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.021870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.030 qpair failed and we were unable to recover it. 00:31:08.030 [2024-04-24 05:26:45.022003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.022172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.022197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.030 qpair failed and we were unable to recover it. 00:31:08.030 [2024-04-24 05:26:45.022360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.022484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.022509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.030 qpair failed and we were unable to recover it. 
00:31:08.030 [2024-04-24 05:26:45.022665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.022812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.022837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.030 qpair failed and we were unable to recover it. 00:31:08.030 [2024-04-24 05:26:45.022991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.023155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.023180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.030 qpair failed and we were unable to recover it. 00:31:08.030 [2024-04-24 05:26:45.023307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.023432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.023457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.030 qpair failed and we were unable to recover it. 00:31:08.030 [2024-04-24 05:26:45.023579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.023707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.023733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.030 qpair failed and we were unable to recover it. 
00:31:08.030 [2024-04-24 05:26:45.023861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.024021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.030 [2024-04-24 05:26:45.024046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.030 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously, with only timestamps changing, from 05:26:45.024191 through 05:26:45.047533 ...]
00:31:08.033 [2024-04-24 05:26:45.047546] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:08.033 [2024-04-24 05:26:45.047586] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:08.033 [2024-04-24 05:26:45.047602] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:08.033 [2024-04-24 05:26:45.047622] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:08.033 [2024-04-24 05:26:45.047645] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:08.033 [2024-04-24 05:26:45.047701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.033 [2024-04-24 05:26:45.047706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:31:08.033 [2024-04-24 05:26:45.047759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:31:08.033 [2024-04-24 05:26:45.047795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:31:08.033 [2024-04-24 05:26:45.047798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:31:08.033 [2024-04-24 05:26:45.047852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.033 [2024-04-24 05:26:45.047876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.033 qpair failed and we were unable to recover it. 00:31:08.033 [2024-04-24 05:26:45.048007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.033 [2024-04-24 05:26:45.048150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.033 [2024-04-24 05:26:45.048173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.033 qpair failed and we were unable to recover it.
00:31:08.033 [2024-04-24 05:26:45.048303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.033 [2024-04-24 05:26:45.048431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.033 [2024-04-24 05:26:45.048457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.033 qpair failed and we were unable to recover it. 00:31:08.033 [2024-04-24 05:26:45.048603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.033 [2024-04-24 05:26:45.048771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.033 [2024-04-24 05:26:45.048796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.033 qpair failed and we were unable to recover it. 00:31:08.033 [2024-04-24 05:26:45.048928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.033 [2024-04-24 05:26:45.049054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.033 [2024-04-24 05:26:45.049079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.033 qpair failed and we were unable to recover it. 00:31:08.033 [2024-04-24 05:26:45.049204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.033 [2024-04-24 05:26:45.049379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.033 [2024-04-24 05:26:45.049404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.033 qpair failed and we were unable to recover it. 
00:31:08.033 [2024-04-24 05:26:45.049518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.033 [2024-04-24 05:26:45.049644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.033 [2024-04-24 05:26:45.049670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.033 qpair failed and we were unable to recover it. 00:31:08.033 [2024-04-24 05:26:45.049801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.033 [2024-04-24 05:26:45.049927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.033 [2024-04-24 05:26:45.049952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.033 qpair failed and we were unable to recover it. 00:31:08.033 [2024-04-24 05:26:45.050103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.033 [2024-04-24 05:26:45.050251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.033 [2024-04-24 05:26:45.050275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.033 qpair failed and we were unable to recover it. 00:31:08.033 [2024-04-24 05:26:45.050416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.050564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.050588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.034 qpair failed and we were unable to recover it. 
00:31:08.034 [2024-04-24 05:26:45.050754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.050905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.050938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.034 qpair failed and we were unable to recover it. 00:31:08.034 [2024-04-24 05:26:45.051070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.051194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.051219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.034 qpair failed and we were unable to recover it. 00:31:08.034 [2024-04-24 05:26:45.051350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.051474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.051498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.034 qpair failed and we were unable to recover it. 00:31:08.034 [2024-04-24 05:26:45.051712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.051833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.051858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.034 qpair failed and we were unable to recover it. 
00:31:08.034 [2024-04-24 05:26:45.052034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.052182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.052206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.034 qpair failed and we were unable to recover it. 00:31:08.034 [2024-04-24 05:26:45.052333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.052455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.052479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.034 qpair failed and we were unable to recover it. 00:31:08.034 [2024-04-24 05:26:45.052604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.052759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.052786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.034 qpair failed and we were unable to recover it. 00:31:08.034 [2024-04-24 05:26:45.052938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.053054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.053080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.034 qpair failed and we were unable to recover it. 
00:31:08.034 [2024-04-24 05:26:45.053229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.053357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.053383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.034 qpair failed and we were unable to recover it. 00:31:08.034 [2024-04-24 05:26:45.053536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.053694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.053720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.034 qpair failed and we were unable to recover it. 00:31:08.034 [2024-04-24 05:26:45.053847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.054024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.054049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.034 qpair failed and we were unable to recover it. 00:31:08.034 [2024-04-24 05:26:45.054173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.054299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.054326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.034 qpair failed and we were unable to recover it. 
00:31:08.034 [2024-04-24 05:26:45.054445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.054568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.054592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.034 qpair failed and we were unable to recover it. 00:31:08.034 [2024-04-24 05:26:45.054727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.054880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.054906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.034 qpair failed and we were unable to recover it. 00:31:08.034 [2024-04-24 05:26:45.055067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.055191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.055216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.034 qpair failed and we were unable to recover it. 00:31:08.034 [2024-04-24 05:26:45.055374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.055495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.055520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.034 qpair failed and we were unable to recover it. 
00:31:08.034 [2024-04-24 05:26:45.055663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.055818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.055843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.034 qpair failed and we were unable to recover it. 00:31:08.034 [2024-04-24 05:26:45.055978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.056121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.056146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.034 qpair failed and we were unable to recover it. 00:31:08.034 [2024-04-24 05:26:45.056287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.056409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.056433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.034 qpair failed and we were unable to recover it. 00:31:08.034 [2024-04-24 05:26:45.056580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.056739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.056765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.034 qpair failed and we were unable to recover it. 
00:31:08.034 [2024-04-24 05:26:45.056904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.057031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.057056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.034 qpair failed and we were unable to recover it. 00:31:08.034 [2024-04-24 05:26:45.057262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.057391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.057415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.034 qpair failed and we were unable to recover it. 00:31:08.034 [2024-04-24 05:26:45.057550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.057677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.057703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.034 qpair failed and we were unable to recover it. 00:31:08.034 [2024-04-24 05:26:45.057858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.058021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.058051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.034 qpair failed and we were unable to recover it. 
00:31:08.034 [2024-04-24 05:26:45.058177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.058302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.058327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.034 qpair failed and we were unable to recover it. 00:31:08.034 [2024-04-24 05:26:45.058461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.058601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.058625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.034 qpair failed and we were unable to recover it. 00:31:08.034 [2024-04-24 05:26:45.058786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.058912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.058936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.034 qpair failed and we were unable to recover it. 00:31:08.034 [2024-04-24 05:26:45.059063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.059190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.034 [2024-04-24 05:26:45.059215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.034 qpair failed and we were unable to recover it. 
00:31:08.034 [2024-04-24 05:26:45.059371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.059522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.059546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.035 qpair failed and we were unable to recover it. 00:31:08.035 [2024-04-24 05:26:45.059682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.059896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.059929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.035 qpair failed and we were unable to recover it. 00:31:08.035 [2024-04-24 05:26:45.060044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.060196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.060221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.035 qpair failed and we were unable to recover it. 00:31:08.035 [2024-04-24 05:26:45.060348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.060469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.060494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.035 qpair failed and we were unable to recover it. 
00:31:08.035 [2024-04-24 05:26:45.060650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.060837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.060862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.035 qpair failed and we were unable to recover it. 00:31:08.035 [2024-04-24 05:26:45.061000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.061175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.061204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.035 qpair failed and we were unable to recover it. 00:31:08.035 [2024-04-24 05:26:45.061362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.061513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.061537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.035 qpair failed and we were unable to recover it. 00:31:08.035 [2024-04-24 05:26:45.061689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.061810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.061834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.035 qpair failed and we were unable to recover it. 
00:31:08.035 [2024-04-24 05:26:45.061978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.062103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.062128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.035 qpair failed and we were unable to recover it. 00:31:08.035 [2024-04-24 05:26:45.062283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.062439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.062465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.035 qpair failed and we were unable to recover it. 00:31:08.035 [2024-04-24 05:26:45.062590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.062741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.062767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.035 qpair failed and we were unable to recover it. 00:31:08.035 [2024-04-24 05:26:45.062900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.063053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.063078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.035 qpair failed and we were unable to recover it. 
00:31:08.035 [2024-04-24 05:26:45.063214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.063334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.063359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.035 qpair failed and we were unable to recover it. 00:31:08.035 [2024-04-24 05:26:45.063473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.063590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.063623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.035 qpair failed and we were unable to recover it. 00:31:08.035 [2024-04-24 05:26:45.063752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.063890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.063922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.035 qpair failed and we were unable to recover it. 00:31:08.035 [2024-04-24 05:26:45.064078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.064226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.064251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.035 qpair failed and we were unable to recover it. 
00:31:08.035 [2024-04-24 05:26:45.064392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.064517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.064542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.035 qpair failed and we were unable to recover it. 00:31:08.035 [2024-04-24 05:26:45.064674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.064819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.064843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.035 qpair failed and we were unable to recover it. 00:31:08.035 [2024-04-24 05:26:45.064986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.065137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.065161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.035 qpair failed and we were unable to recover it. 00:31:08.035 [2024-04-24 05:26:45.065287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.065410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.065435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.035 qpair failed and we were unable to recover it. 
00:31:08.035 [2024-04-24 05:26:45.065584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.065712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.065737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.035 qpair failed and we were unable to recover it. 00:31:08.035 [2024-04-24 05:26:45.065886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.066006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.066031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.035 qpair failed and we were unable to recover it. 00:31:08.035 [2024-04-24 05:26:45.066236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.066363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.066387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.035 qpair failed and we were unable to recover it. 00:31:08.035 [2024-04-24 05:26:45.066515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.066678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.066703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.035 qpair failed and we were unable to recover it. 
00:31:08.035 [2024-04-24 05:26:45.066852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.066981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.067006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.035 qpair failed and we were unable to recover it. 00:31:08.035 [2024-04-24 05:26:45.067154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.067281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.067306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.035 qpair failed and we were unable to recover it. 00:31:08.035 [2024-04-24 05:26:45.067457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.067580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.067604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.035 qpair failed and we were unable to recover it. 00:31:08.035 [2024-04-24 05:26:45.067777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.067895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.067920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.035 qpair failed and we were unable to recover it. 
00:31:08.035 [2024-04-24 05:26:45.068048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.068191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.068215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.035 qpair failed and we were unable to recover it. 00:31:08.035 [2024-04-24 05:26:45.068348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.035 [2024-04-24 05:26:45.068492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.036 [2024-04-24 05:26:45.068516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.036 qpair failed and we were unable to recover it. 00:31:08.036 [2024-04-24 05:26:45.068644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.036 [2024-04-24 05:26:45.068778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.036 [2024-04-24 05:26:45.068803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.036 qpair failed and we were unable to recover it. 00:31:08.036 [2024-04-24 05:26:45.068930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.036 [2024-04-24 05:26:45.069043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.036 [2024-04-24 05:26:45.069067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.036 qpair failed and we were unable to recover it. 
00:31:08.039 [2024-04-24 05:26:45.094790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.094917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.094942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.039 qpair failed and we were unable to recover it. 00:31:08.039 [2024-04-24 05:26:45.095068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.095184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.095209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.039 qpair failed and we were unable to recover it. 00:31:08.039 [2024-04-24 05:26:45.095353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.095475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.095500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.039 qpair failed and we were unable to recover it. 00:31:08.039 [2024-04-24 05:26:45.095638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.095756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.095780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.039 qpair failed and we were unable to recover it. 
00:31:08.039 [2024-04-24 05:26:45.095925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.096059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.096084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.039 qpair failed and we were unable to recover it. 00:31:08.039 [2024-04-24 05:26:45.096235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.096353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.096378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.039 qpair failed and we were unable to recover it. 00:31:08.039 [2024-04-24 05:26:45.096499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.096666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.096691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.039 qpair failed and we were unable to recover it. 00:31:08.039 [2024-04-24 05:26:45.096817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.096939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.096964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.039 qpair failed and we were unable to recover it. 
00:31:08.039 [2024-04-24 05:26:45.097096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.097237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.097261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.039 qpair failed and we were unable to recover it. 00:31:08.039 [2024-04-24 05:26:45.097392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.097512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.097537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.039 qpair failed and we were unable to recover it. 00:31:08.039 [2024-04-24 05:26:45.097681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.097805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.097830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.039 qpair failed and we were unable to recover it. 00:31:08.039 [2024-04-24 05:26:45.097960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.098076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.098100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.039 qpair failed and we were unable to recover it. 
00:31:08.039 [2024-04-24 05:26:45.098227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.098350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.098375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.039 qpair failed and we were unable to recover it. 00:31:08.039 [2024-04-24 05:26:45.098513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.098637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.098662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.039 qpair failed and we were unable to recover it. 00:31:08.039 [2024-04-24 05:26:45.098804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.098965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.098989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.039 qpair failed and we were unable to recover it. 00:31:08.039 [2024-04-24 05:26:45.099130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.099244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.099269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.039 qpair failed and we were unable to recover it. 
00:31:08.039 [2024-04-24 05:26:45.099397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.099521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.099547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.039 qpair failed and we were unable to recover it. 00:31:08.039 [2024-04-24 05:26:45.099699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.099825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.099849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.039 qpair failed and we were unable to recover it. 00:31:08.039 [2024-04-24 05:26:45.099996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.100111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.100135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.039 qpair failed and we were unable to recover it. 00:31:08.039 [2024-04-24 05:26:45.100279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.100427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.100451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.039 qpair failed and we were unable to recover it. 
00:31:08.039 [2024-04-24 05:26:45.100597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.100731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.100757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.039 qpair failed and we were unable to recover it. 00:31:08.039 [2024-04-24 05:26:45.100892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.101009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.101034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.039 qpair failed and we were unable to recover it. 00:31:08.039 [2024-04-24 05:26:45.101197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.101339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.101364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.039 qpair failed and we were unable to recover it. 00:31:08.039 [2024-04-24 05:26:45.101498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.101637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.101662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.039 qpair failed and we were unable to recover it. 
00:31:08.039 [2024-04-24 05:26:45.101788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.101930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.101954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.039 qpair failed and we were unable to recover it. 00:31:08.039 [2024-04-24 05:26:45.102104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.039 [2024-04-24 05:26:45.102218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.102243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.040 qpair failed and we were unable to recover it. 00:31:08.040 [2024-04-24 05:26:45.102399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.102524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.102550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.040 qpair failed and we were unable to recover it. 00:31:08.040 [2024-04-24 05:26:45.102696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.102818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.102843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.040 qpair failed and we were unable to recover it. 
00:31:08.040 [2024-04-24 05:26:45.103020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.103181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.103205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.040 qpair failed and we were unable to recover it. 00:31:08.040 [2024-04-24 05:26:45.103350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.103498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.103522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.040 qpair failed and we were unable to recover it. 00:31:08.040 [2024-04-24 05:26:45.103648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.103787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.103811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.040 qpair failed and we were unable to recover it. 00:31:08.040 [2024-04-24 05:26:45.103942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.104068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.104092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.040 qpair failed and we were unable to recover it. 
00:31:08.040 [2024-04-24 05:26:45.104211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.104332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.104356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.040 qpair failed and we were unable to recover it. 00:31:08.040 [2024-04-24 05:26:45.104477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.104602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.104633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.040 qpair failed and we were unable to recover it. 00:31:08.040 [2024-04-24 05:26:45.104761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.104885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.104909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.040 qpair failed and we were unable to recover it. 00:31:08.040 [2024-04-24 05:26:45.105056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.105227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.105251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.040 qpair failed and we were unable to recover it. 
00:31:08.040 [2024-04-24 05:26:45.105402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.105526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.105551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.040 qpair failed and we were unable to recover it. 00:31:08.040 [2024-04-24 05:26:45.105676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.105797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.105823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.040 qpair failed and we were unable to recover it. 00:31:08.040 [2024-04-24 05:26:45.105938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.106057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.106083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.040 qpair failed and we were unable to recover it. 00:31:08.040 [2024-04-24 05:26:45.106223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.106346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.106370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.040 qpair failed and we were unable to recover it. 
00:31:08.040 [2024-04-24 05:26:45.106494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.106609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.106640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.040 qpair failed and we were unable to recover it. 00:31:08.040 [2024-04-24 05:26:45.106793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.106928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.106953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.040 qpair failed and we were unable to recover it. 00:31:08.040 [2024-04-24 05:26:45.107100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.107290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.107318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.040 qpair failed and we were unable to recover it. 00:31:08.040 [2024-04-24 05:26:45.107437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.107584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.107608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.040 qpair failed and we were unable to recover it. 
00:31:08.040 [2024-04-24 05:26:45.107760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.107887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.107912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.040 qpair failed and we were unable to recover it. 00:31:08.040 [2024-04-24 05:26:45.108089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.108218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.108242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.040 qpair failed and we were unable to recover it. 00:31:08.040 [2024-04-24 05:26:45.108360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.108488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.108512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.040 qpair failed and we were unable to recover it. 00:31:08.040 [2024-04-24 05:26:45.108656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.108806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.108831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.040 qpair failed and we were unable to recover it. 
00:31:08.040 [2024-04-24 05:26:45.109000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.109146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.109170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.040 qpair failed and we were unable to recover it. 00:31:08.040 [2024-04-24 05:26:45.109317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.109455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.109480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.040 qpair failed and we were unable to recover it. 00:31:08.040 [2024-04-24 05:26:45.109600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.109753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.109778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.040 qpair failed and we were unable to recover it. 00:31:08.040 [2024-04-24 05:26:45.109909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.110030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.110054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.040 qpair failed and we were unable to recover it. 
00:31:08.040 [2024-04-24 05:26:45.110203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.110324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.110348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.040 qpair failed and we were unable to recover it. 00:31:08.040 [2024-04-24 05:26:45.110475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.110604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.110635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.040 qpair failed and we were unable to recover it. 00:31:08.040 [2024-04-24 05:26:45.110760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.110883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.040 [2024-04-24 05:26:45.110908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.041 qpair failed and we were unable to recover it. 00:31:08.041 [2024-04-24 05:26:45.111031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.041 [2024-04-24 05:26:45.111151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.041 [2024-04-24 05:26:45.111177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.041 qpair failed and we were unable to recover it. 
00:31:08.041 [2024-04-24 05:26:45.111305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.041 [2024-04-24 05:26:45.111430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.041 [2024-04-24 05:26:45.111455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.041 qpair failed and we were unable to recover it.
[... the same three-line failure pattern repeats without variation from 05:26:45.111 through 05:26:45.137: posix_sock_create reports connect() failed with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x18ebe40 with addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:31:08.044 [2024-04-24 05:26:45.137731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.044 [2024-04-24 05:26:45.137874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.044 [2024-04-24 05:26:45.137898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.044 qpair failed and we were unable to recover it. 00:31:08.044 [2024-04-24 05:26:45.138029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.044 [2024-04-24 05:26:45.138188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.044 [2024-04-24 05:26:45.138213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.044 qpair failed and we were unable to recover it. 00:31:08.044 [2024-04-24 05:26:45.138344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.044 [2024-04-24 05:26:45.138481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.044 [2024-04-24 05:26:45.138507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.044 qpair failed and we were unable to recover it. 00:31:08.044 [2024-04-24 05:26:45.138638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.044 [2024-04-24 05:26:45.138791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.044 [2024-04-24 05:26:45.138816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.044 qpair failed and we were unable to recover it. 
00:31:08.044 [2024-04-24 05:26:45.138936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.044 [2024-04-24 05:26:45.139081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.044 [2024-04-24 05:26:45.139105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.044 qpair failed and we were unable to recover it. 00:31:08.044 [2024-04-24 05:26:45.139255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.044 [2024-04-24 05:26:45.139415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.044 [2024-04-24 05:26:45.139439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.044 qpair failed and we were unable to recover it. 00:31:08.044 [2024-04-24 05:26:45.139574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.044 [2024-04-24 05:26:45.139698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.044 [2024-04-24 05:26:45.139724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.044 qpair failed and we were unable to recover it. 00:31:08.044 [2024-04-24 05:26:45.139842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.044 [2024-04-24 05:26:45.139993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.044 [2024-04-24 05:26:45.140017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.044 qpair failed and we were unable to recover it. 
00:31:08.044 [2024-04-24 05:26:45.140175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.044 [2024-04-24 05:26:45.140291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.044 [2024-04-24 05:26:45.140315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.044 qpair failed and we were unable to recover it. 00:31:08.044 [2024-04-24 05:26:45.140468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.044 [2024-04-24 05:26:45.140582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.044 [2024-04-24 05:26:45.140607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.044 qpair failed and we were unable to recover it. 00:31:08.044 [2024-04-24 05:26:45.140741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.044 [2024-04-24 05:26:45.140883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.044 [2024-04-24 05:26:45.140912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.044 qpair failed and we were unable to recover it. 00:31:08.044 [2024-04-24 05:26:45.141036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.044 [2024-04-24 05:26:45.141186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.044 [2024-04-24 05:26:45.141211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.044 qpair failed and we were unable to recover it. 
00:31:08.044 [2024-04-24 05:26:45.141364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.044 [2024-04-24 05:26:45.141511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.044 [2024-04-24 05:26:45.141536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.044 qpair failed and we were unable to recover it. 00:31:08.044 [2024-04-24 05:26:45.141681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.044 [2024-04-24 05:26:45.141795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.044 [2024-04-24 05:26:45.141820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.044 qpair failed and we were unable to recover it. 00:31:08.044 [2024-04-24 05:26:45.141969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.044 [2024-04-24 05:26:45.142089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.044 [2024-04-24 05:26:45.142114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.044 qpair failed and we were unable to recover it. 00:31:08.044 [2024-04-24 05:26:45.142246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.044 [2024-04-24 05:26:45.142368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.044 [2024-04-24 05:26:45.142392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.044 qpair failed and we were unable to recover it. 
00:31:08.044 [2024-04-24 05:26:45.142538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.044 [2024-04-24 05:26:45.142694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.044 [2024-04-24 05:26:45.142719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.044 qpair failed and we were unable to recover it. 00:31:08.044 [2024-04-24 05:26:45.142850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.044 [2024-04-24 05:26:45.142997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.044 [2024-04-24 05:26:45.143021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.044 qpair failed and we were unable to recover it. 00:31:08.044 [2024-04-24 05:26:45.143151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.044 [2024-04-24 05:26:45.143325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.044 [2024-04-24 05:26:45.143350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.044 qpair failed and we were unable to recover it. 00:31:08.044 [2024-04-24 05:26:45.143474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.044 [2024-04-24 05:26:45.143598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.044 [2024-04-24 05:26:45.143622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.044 qpair failed and we were unable to recover it. 
00:31:08.044 [2024-04-24 05:26:45.143774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.044 [2024-04-24 05:26:45.143887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.044 [2024-04-24 05:26:45.143912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.044 qpair failed and we were unable to recover it. 00:31:08.044 [2024-04-24 05:26:45.144076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.044 [2024-04-24 05:26:45.144192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.044 [2024-04-24 05:26:45.144217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.044 qpair failed and we were unable to recover it. 00:31:08.045 [2024-04-24 05:26:45.144334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.144478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.144503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.045 qpair failed and we were unable to recover it. 00:31:08.045 [2024-04-24 05:26:45.144616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.144778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.144803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.045 qpair failed and we were unable to recover it. 
00:31:08.045 [2024-04-24 05:26:45.144939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.145051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.145076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.045 qpair failed and we were unable to recover it. 00:31:08.045 [2024-04-24 05:26:45.145192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.145316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.145342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.045 qpair failed and we were unable to recover it. 00:31:08.045 [2024-04-24 05:26:45.145483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.145639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.145664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.045 qpair failed and we were unable to recover it. 00:31:08.045 [2024-04-24 05:26:45.145792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.145932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.145956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.045 qpair failed and we were unable to recover it. 
00:31:08.045 [2024-04-24 05:26:45.146081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.146226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.146250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.045 qpair failed and we were unable to recover it. 00:31:08.045 [2024-04-24 05:26:45.146367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.146492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.146517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.045 qpair failed and we were unable to recover it. 00:31:08.045 [2024-04-24 05:26:45.146663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.146783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.146808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.045 qpair failed and we were unable to recover it. 00:31:08.045 [2024-04-24 05:26:45.146941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.147135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.147160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.045 qpair failed and we were unable to recover it. 
00:31:08.045 [2024-04-24 05:26:45.147303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.147412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.147437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.045 qpair failed and we were unable to recover it. 00:31:08.045 [2024-04-24 05:26:45.147594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.147723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.147749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.045 qpair failed and we were unable to recover it. 00:31:08.045 [2024-04-24 05:26:45.147872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.147995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.148019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.045 qpair failed and we were unable to recover it. 00:31:08.045 [2024-04-24 05:26:45.148166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.148287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.148311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.045 qpair failed and we were unable to recover it. 
00:31:08.045 [2024-04-24 05:26:45.148432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.148643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.148668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.045 qpair failed and we were unable to recover it. 00:31:08.045 [2024-04-24 05:26:45.148795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.148911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.148937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.045 qpair failed and we were unable to recover it. 00:31:08.045 [2024-04-24 05:26:45.149096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.149238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.149262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.045 qpair failed and we were unable to recover it. 00:31:08.045 [2024-04-24 05:26:45.149391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.149509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.149534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.045 qpair failed and we were unable to recover it. 
00:31:08.045 [2024-04-24 05:26:45.149643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.149769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.149793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.045 qpair failed and we were unable to recover it. 00:31:08.045 [2024-04-24 05:26:45.149918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.150032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.150057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.045 qpair failed and we were unable to recover it. 00:31:08.045 [2024-04-24 05:26:45.150177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.150310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.150335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.045 qpair failed and we were unable to recover it. 00:31:08.045 [2024-04-24 05:26:45.150483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.150600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.150625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.045 qpair failed and we were unable to recover it. 
00:31:08.045 [2024-04-24 05:26:45.150775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.150892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.150917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.045 qpair failed and we were unable to recover it. 00:31:08.045 [2024-04-24 05:26:45.151036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.151150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.151175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.045 qpair failed and we were unable to recover it. 00:31:08.045 [2024-04-24 05:26:45.151351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.151467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.151491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.045 qpair failed and we were unable to recover it. 00:31:08.045 [2024-04-24 05:26:45.151613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.151740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.151765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.045 qpair failed and we were unable to recover it. 
00:31:08.045 [2024-04-24 05:26:45.151918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.152065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.152089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.045 qpair failed and we were unable to recover it. 00:31:08.045 [2024-04-24 05:26:45.152208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.152336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.152361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.045 qpair failed and we were unable to recover it. 00:31:08.045 [2024-04-24 05:26:45.152488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.152641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.045 [2024-04-24 05:26:45.152668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.045 qpair failed and we were unable to recover it. 00:31:08.045 [2024-04-24 05:26:45.152799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.046 [2024-04-24 05:26:45.152921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.046 [2024-04-24 05:26:45.152946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.046 qpair failed and we were unable to recover it. 
00:31:08.046 [2024-04-24 05:26:45.153068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.046 [2024-04-24 05:26:45.153214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.046 [2024-04-24 05:26:45.153238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.046 qpair failed and we were unable to recover it. 00:31:08.046 [2024-04-24 05:26:45.153359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.046 [2024-04-24 05:26:45.153505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.046 [2024-04-24 05:26:45.153529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.046 qpair failed and we were unable to recover it. 00:31:08.046 [2024-04-24 05:26:45.153653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.046 [2024-04-24 05:26:45.153776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.046 [2024-04-24 05:26:45.153801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.046 qpair failed and we were unable to recover it. 00:31:08.046 [2024-04-24 05:26:45.153914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.046 [2024-04-24 05:26:45.154030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.046 [2024-04-24 05:26:45.154055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.046 qpair failed and we were unable to recover it. 
00:31:08.046 [2024-04-24 05:26:45.154175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.046 [2024-04-24 05:26:45.154286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.046 [2024-04-24 05:26:45.154310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.046 qpair failed and we were unable to recover it.
[... the same four-record pattern (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x18ebe40 addr=10.0.0.2 port=4420, "qpair failed and we were unable to recover it") repeats continuously with successive timestamps from 05:26:45.154438 through 05:26:45.173542 ...]
00:31:08.048 05:26:45 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:31:08.048 05:26:45 -- common/autotest_common.sh@850 -- # return 0
00:31:08.048 05:26:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:31:08.048 05:26:45 -- common/autotest_common.sh@716 -- # xtrace_disable
00:31:08.048 05:26:45 -- common/autotest_common.sh@10 -- # set +x
[... the posix_sock_create / nvme_tcp_qpair_connect_sock error pattern (errno = 111, tqpair=0x18ebe40, addr=10.0.0.2, port=4420) continues interleaved with the shell trace above, timestamps 05:26:45.173687 through 05:26:45.175309 ...]
[... the same error pattern repeats with timestamps 05:26:45.175458 through 05:26:45.178927: posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it" after each attempt ...]
00:31:08.049 [2024-04-24 05:26:45.179048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.049 [2024-04-24 05:26:45.179164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.049 [2024-04-24 05:26:45.179189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.049 qpair failed and we were unable to recover it. 00:31:08.049 [2024-04-24 05:26:45.179319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.049 [2024-04-24 05:26:45.179437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.049 [2024-04-24 05:26:45.179463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.049 qpair failed and we were unable to recover it. 00:31:08.049 [2024-04-24 05:26:45.179615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.049 [2024-04-24 05:26:45.179758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.049 [2024-04-24 05:26:45.179784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.049 qpair failed and we were unable to recover it. 00:31:08.049 [2024-04-24 05:26:45.179934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.049 [2024-04-24 05:26:45.180047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.049 [2024-04-24 05:26:45.180072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.049 qpair failed and we were unable to recover it. 
00:31:08.049 [2024-04-24 05:26:45.180198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.049 [2024-04-24 05:26:45.180337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.049 [2024-04-24 05:26:45.180362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.049 qpair failed and we were unable to recover it. 00:31:08.049 [2024-04-24 05:26:45.180489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.049 [2024-04-24 05:26:45.180621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.049 [2024-04-24 05:26:45.180653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.049 qpair failed and we were unable to recover it. 00:31:08.049 [2024-04-24 05:26:45.180778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.049 [2024-04-24 05:26:45.180929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.049 [2024-04-24 05:26:45.180954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.049 qpair failed and we were unable to recover it. 00:31:08.049 [2024-04-24 05:26:45.181083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.049 [2024-04-24 05:26:45.181238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.049 [2024-04-24 05:26:45.181263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.049 qpair failed and we were unable to recover it. 
00:31:08.049 [2024-04-24 05:26:45.181408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.049 [2024-04-24 05:26:45.181564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.049 [2024-04-24 05:26:45.181589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.049 qpair failed and we were unable to recover it. 00:31:08.049 [2024-04-24 05:26:45.181736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.049 [2024-04-24 05:26:45.181867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.049 [2024-04-24 05:26:45.181897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.049 qpair failed and we were unable to recover it. 00:31:08.049 [2024-04-24 05:26:45.182053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.049 [2024-04-24 05:26:45.182203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.049 [2024-04-24 05:26:45.182231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.049 qpair failed and we were unable to recover it. 00:31:08.049 [2024-04-24 05:26:45.182406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.049 [2024-04-24 05:26:45.182534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.049 [2024-04-24 05:26:45.182558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.049 qpair failed and we were unable to recover it. 
00:31:08.049 [2024-04-24 05:26:45.182685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.049 [2024-04-24 05:26:45.182819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.049 [2024-04-24 05:26:45.182844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.049 qpair failed and we were unable to recover it. 00:31:08.049 [2024-04-24 05:26:45.182972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.049 [2024-04-24 05:26:45.183088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.049 [2024-04-24 05:26:45.183113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.049 qpair failed and we were unable to recover it. 00:31:08.049 [2024-04-24 05:26:45.183225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.049 [2024-04-24 05:26:45.183400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.049 [2024-04-24 05:26:45.183425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.049 qpair failed and we were unable to recover it. 00:31:08.049 [2024-04-24 05:26:45.183562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.049 [2024-04-24 05:26:45.183697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.049 [2024-04-24 05:26:45.183723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.049 qpair failed and we were unable to recover it. 
00:31:08.049 [2024-04-24 05:26:45.183889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.049 [2024-04-24 05:26:45.184009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.049 [2024-04-24 05:26:45.184034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.049 qpair failed and we were unable to recover it. 00:31:08.049 [2024-04-24 05:26:45.184183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.049 [2024-04-24 05:26:45.184338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.049 [2024-04-24 05:26:45.184362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.049 qpair failed and we were unable to recover it. 00:31:08.049 [2024-04-24 05:26:45.184521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.049 [2024-04-24 05:26:45.184668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.049 [2024-04-24 05:26:45.184693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.049 qpair failed and we were unable to recover it. 00:31:08.049 [2024-04-24 05:26:45.184842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.049 [2024-04-24 05:26:45.184960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.049 [2024-04-24 05:26:45.184984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.049 qpair failed and we were unable to recover it. 
00:31:08.049 [2024-04-24 05:26:45.185107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.049 [2024-04-24 05:26:45.185218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.185247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.050 qpair failed and we were unable to recover it. 00:31:08.050 [2024-04-24 05:26:45.185426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.185553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.185577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.050 qpair failed and we were unable to recover it. 00:31:08.050 [2024-04-24 05:26:45.185727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.185846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.185870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.050 qpair failed and we were unable to recover it. 00:31:08.050 [2024-04-24 05:26:45.185997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.186110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.186135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.050 qpair failed and we were unable to recover it. 
00:31:08.050 [2024-04-24 05:26:45.186262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.186397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.186421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.050 qpair failed and we were unable to recover it. 00:31:08.050 [2024-04-24 05:26:45.186576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.186719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.186745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.050 qpair failed and we were unable to recover it. 00:31:08.050 [2024-04-24 05:26:45.186865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.186990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.187014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.050 qpair failed and we were unable to recover it. 00:31:08.050 [2024-04-24 05:26:45.187157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.187276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.187301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.050 qpair failed and we were unable to recover it. 
00:31:08.050 [2024-04-24 05:26:45.187464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.187645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.187688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.050 qpair failed and we were unable to recover it. 00:31:08.050 [2024-04-24 05:26:45.187814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.187955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.187979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.050 qpair failed and we were unable to recover it. 00:31:08.050 [2024-04-24 05:26:45.188121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.188255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.188279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.050 qpair failed and we were unable to recover it. 00:31:08.050 [2024-04-24 05:26:45.188423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.188556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.188593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.050 qpair failed and we were unable to recover it. 
00:31:08.050 [2024-04-24 05:26:45.188731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.188887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.188912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.050 qpair failed and we were unable to recover it. 00:31:08.050 [2024-04-24 05:26:45.189052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.189178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.189202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.050 qpair failed and we were unable to recover it. 00:31:08.050 [2024-04-24 05:26:45.189318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.189438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.189463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.050 qpair failed and we were unable to recover it. 00:31:08.050 [2024-04-24 05:26:45.189605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.189747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.189772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.050 qpair failed and we were unable to recover it. 
00:31:08.050 [2024-04-24 05:26:45.189907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.190050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.190075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.050 qpair failed and we were unable to recover it. 00:31:08.050 [2024-04-24 05:26:45.190206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.190353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.190377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.050 qpair failed and we were unable to recover it. 00:31:08.050 [2024-04-24 05:26:45.190538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.190669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.190695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.050 qpair failed and we were unable to recover it. 00:31:08.050 [2024-04-24 05:26:45.190831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.190954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.190980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.050 qpair failed and we were unable to recover it. 
00:31:08.050 05:26:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:08.050 05:26:45 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:08.050 [2024-04-24 05:26:45.191135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 05:26:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:08.050 [2024-04-24 05:26:45.191254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.191279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.050 qpair failed and we were unable to recover it. 00:31:08.050 05:26:45 -- common/autotest_common.sh@10 -- # set +x 00:31:08.050 [2024-04-24 05:26:45.191404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.191538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.191563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.050 qpair failed and we were unable to recover it. 00:31:08.050 [2024-04-24 05:26:45.191693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.191869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.191894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.050 qpair failed and we were unable to recover it. 
00:31:08.050 [2024-04-24 05:26:45.192061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.192182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.192206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.050 qpair failed and we were unable to recover it. 00:31:08.050 [2024-04-24 05:26:45.192330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.192456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.192482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.050 qpair failed and we were unable to recover it. 00:31:08.050 [2024-04-24 05:26:45.192638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.192764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.192788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.050 qpair failed and we were unable to recover it. 00:31:08.050 [2024-04-24 05:26:45.192914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.193073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.193097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.050 qpair failed and we were unable to recover it. 
00:31:08.050 [2024-04-24 05:26:45.193224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.193360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.193385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.050 qpair failed and we were unable to recover it. 00:31:08.050 [2024-04-24 05:26:45.193531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.193726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.050 [2024-04-24 05:26:45.193752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.050 qpair failed and we were unable to recover it. 00:31:08.051 [2024-04-24 05:26:45.193897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.051 [2024-04-24 05:26:45.194062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.051 [2024-04-24 05:26:45.194086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.051 qpair failed and we were unable to recover it. 00:31:08.051 [2024-04-24 05:26:45.194217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.051 [2024-04-24 05:26:45.194337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.051 [2024-04-24 05:26:45.194362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.051 qpair failed and we were unable to recover it. 
00:31:08.051 [2024-04-24 05:26:45.194489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.051 [2024-04-24 05:26:45.194620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.051 [2024-04-24 05:26:45.194658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.051 qpair failed and we were unable to recover it. 00:31:08.051 [2024-04-24 05:26:45.194794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.051 [2024-04-24 05:26:45.194942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.051 [2024-04-24 05:26:45.194968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.051 qpair failed and we were unable to recover it. 00:31:08.051 [2024-04-24 05:26:45.195100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.051 [2024-04-24 05:26:45.195215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.051 [2024-04-24 05:26:45.195239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.051 qpair failed and we were unable to recover it. 00:31:08.051 [2024-04-24 05:26:45.195357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.051 [2024-04-24 05:26:45.195480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.051 [2024-04-24 05:26:45.195504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.051 qpair failed and we were unable to recover it. 
00:31:08.051 [2024-04-24 05:26:45.195634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.051 [2024-04-24 05:26:45.195767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.051 [2024-04-24 05:26:45.195792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.051 qpair failed and we were unable to recover it. 00:31:08.051 [2024-04-24 05:26:45.195914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.051 [2024-04-24 05:26:45.196038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.051 [2024-04-24 05:26:45.196062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.051 qpair failed and we were unable to recover it. 00:31:08.051 [2024-04-24 05:26:45.196194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.051 [2024-04-24 05:26:45.196343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.051 [2024-04-24 05:26:45.196367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.051 qpair failed and we were unable to recover it. 00:31:08.051 [2024-04-24 05:26:45.196515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.051 [2024-04-24 05:26:45.196645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.051 [2024-04-24 05:26:45.196672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.051 qpair failed and we were unable to recover it. 
00:31:08.051 [2024-04-24 05:26:45.196789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.051 [2024-04-24 05:26:45.196908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.051 [2024-04-24 05:26:45.196932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.051 qpair failed and we were unable to recover it. 00:31:08.051 [2024-04-24 05:26:45.197081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.051 [2024-04-24 05:26:45.197204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.051 [2024-04-24 05:26:45.197233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.051 qpair failed and we were unable to recover it. 00:31:08.051 [2024-04-24 05:26:45.197382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.051 [2024-04-24 05:26:45.197564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.051 [2024-04-24 05:26:45.197588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.051 qpair failed and we were unable to recover it. 00:31:08.051 [2024-04-24 05:26:45.197749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.051 [2024-04-24 05:26:45.197872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.051 [2024-04-24 05:26:45.197897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.051 qpair failed and we were unable to recover it. 
00:31:08.051 [2024-04-24 05:26:45.198067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.051 [2024-04-24 05:26:45.198197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.051 [2024-04-24 05:26:45.198222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.051 qpair failed and we were unable to recover it.
00:31:08.051 [2024-04-24 05:26:45.198340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.051 [2024-04-24 05:26:45.198463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.051 [2024-04-24 05:26:45.198490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.051 qpair failed and we were unable to recover it.
00:31:08.051 [2024-04-24 05:26:45.198616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.051 [2024-04-24 05:26:45.198740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.051 [2024-04-24 05:26:45.198766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.051 qpair failed and we were unable to recover it.
00:31:08.051 [2024-04-24 05:26:45.198898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.051 [2024-04-24 05:26:45.199052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.051 [2024-04-24 05:26:45.199077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.051 qpair failed and we were unable to recover it.
00:31:08.051 [2024-04-24 05:26:45.199207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.051 [2024-04-24 05:26:45.199378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.051 [2024-04-24 05:26:45.199403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.051 qpair failed and we were unable to recover it.
00:31:08.051 [2024-04-24 05:26:45.199536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.051 [2024-04-24 05:26:45.199692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.051 [2024-04-24 05:26:45.199717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.051 qpair failed and we were unable to recover it.
00:31:08.051 [2024-04-24 05:26:45.199846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.051 [2024-04-24 05:26:45.199977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.051 [2024-04-24 05:26:45.200002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.051 qpair failed and we were unable to recover it.
00:31:08.051 [2024-04-24 05:26:45.200127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.051 [2024-04-24 05:26:45.200251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.051 [2024-04-24 05:26:45.200276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.051 qpair failed and we were unable to recover it.
00:31:08.051 [2024-04-24 05:26:45.200427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.051 [2024-04-24 05:26:45.200574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.051 [2024-04-24 05:26:45.200598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.051 qpair failed and we were unable to recover it.
00:31:08.051 [2024-04-24 05:26:45.200733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.051 [2024-04-24 05:26:45.200899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.051 [2024-04-24 05:26:45.200924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.051 qpair failed and we were unable to recover it.
00:31:08.051 [2024-04-24 05:26:45.201082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.051 [2024-04-24 05:26:45.201214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.051 [2024-04-24 05:26:45.201239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.051 qpair failed and we were unable to recover it.
00:31:08.052 [2024-04-24 05:26:45.201368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.201517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.201541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.052 qpair failed and we were unable to recover it.
00:31:08.052 [2024-04-24 05:26:45.201677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.201797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.201821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.052 qpair failed and we were unable to recover it.
00:31:08.052 [2024-04-24 05:26:45.201951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.202077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.202101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.052 qpair failed and we were unable to recover it.
00:31:08.052 [2024-04-24 05:26:45.202248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.202367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.202392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.052 qpair failed and we were unable to recover it.
00:31:08.052 [2024-04-24 05:26:45.202536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.202684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.202709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.052 qpair failed and we were unable to recover it.
00:31:08.052 [2024-04-24 05:26:45.202857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.203036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.203060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.052 qpair failed and we were unable to recover it.
00:31:08.052 [2024-04-24 05:26:45.203191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.203321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.203346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.052 qpair failed and we were unable to recover it.
00:31:08.052 [2024-04-24 05:26:45.203479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.203701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.203727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.052 qpair failed and we were unable to recover it.
00:31:08.052 [2024-04-24 05:26:45.203851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.203986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.204010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.052 qpair failed and we were unable to recover it.
00:31:08.052 [2024-04-24 05:26:45.204165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.204277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.204301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.052 qpair failed and we were unable to recover it.
00:31:08.052 [2024-04-24 05:26:45.204453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.204634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.204659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.052 qpair failed and we were unable to recover it.
00:31:08.052 [2024-04-24 05:26:45.204782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.204907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.204932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.052 qpair failed and we were unable to recover it.
00:31:08.052 [2024-04-24 05:26:45.205079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.205203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.205227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.052 qpair failed and we were unable to recover it.
00:31:08.052 [2024-04-24 05:26:45.205377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.205493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.205517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.052 qpair failed and we were unable to recover it.
00:31:08.052 [2024-04-24 05:26:45.205642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.205817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.205842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.052 qpair failed and we were unable to recover it.
00:31:08.052 [2024-04-24 05:26:45.205968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.206101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.206127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.052 qpair failed and we were unable to recover it.
00:31:08.052 [2024-04-24 05:26:45.206274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.206404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.206429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.052 qpair failed and we were unable to recover it.
00:31:08.052 [2024-04-24 05:26:45.206574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.206713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.206738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.052 qpair failed and we were unable to recover it.
00:31:08.052 [2024-04-24 05:26:45.206855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.206990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.207014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.052 qpair failed and we were unable to recover it.
00:31:08.052 [2024-04-24 05:26:45.207170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.207290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.207315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.052 qpair failed and we were unable to recover it.
00:31:08.052 [2024-04-24 05:26:45.207438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.207563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.207587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.052 qpair failed and we were unable to recover it.
00:31:08.052 [2024-04-24 05:26:45.207736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.207865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.207890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.052 qpair failed and we were unable to recover it.
00:31:08.052 [2024-04-24 05:26:45.208021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.208146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.208170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.052 qpair failed and we were unable to recover it.
00:31:08.052 [2024-04-24 05:26:45.208286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.208407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.208431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.052 qpair failed and we were unable to recover it.
00:31:08.052 [2024-04-24 05:26:45.208559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.208710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.208736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.052 qpair failed and we were unable to recover it.
00:31:08.052 [2024-04-24 05:26:45.208875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.209053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.209078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.052 qpair failed and we were unable to recover it.
00:31:08.052 [2024-04-24 05:26:45.209198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.209341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.209365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.052 qpair failed and we were unable to recover it.
00:31:08.052 [2024-04-24 05:26:45.209516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.209661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.209686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.052 qpair failed and we were unable to recover it.
00:31:08.052 [2024-04-24 05:26:45.209814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-04-24 05:26:45.209972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.209997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.053 qpair failed and we were unable to recover it.
00:31:08.053 [2024-04-24 05:26:45.210124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.210255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.210280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.053 qpair failed and we were unable to recover it.
00:31:08.053 [2024-04-24 05:26:45.210432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.210556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.210580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.053 qpair failed and we were unable to recover it.
00:31:08.053 [2024-04-24 05:26:45.210729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.210877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.210901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.053 qpair failed and we were unable to recover it.
00:31:08.053 [2024-04-24 05:26:45.211040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.211200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.211225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.053 qpair failed and we were unable to recover it.
00:31:08.053 [2024-04-24 05:26:45.211378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.211502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.211527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.053 qpair failed and we were unable to recover it.
00:31:08.053 [2024-04-24 05:26:45.211646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.211797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.211822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.053 qpair failed and we were unable to recover it.
00:31:08.053 [2024-04-24 05:26:45.211998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.212120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.212145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.053 qpair failed and we were unable to recover it.
00:31:08.053 [2024-04-24 05:26:45.212267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.212416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.212442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.053 qpair failed and we were unable to recover it.
00:31:08.053 [2024-04-24 05:26:45.212564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.212696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.212726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.053 qpair failed and we were unable to recover it.
00:31:08.053 [2024-04-24 05:26:45.212848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.212999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.213023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.053 qpair failed and we were unable to recover it.
00:31:08.053 [2024-04-24 05:26:45.213170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 Malloc0
00:31:08.053 [2024-04-24 05:26:45.213292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.213317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.053 qpair failed and we were unable to recover it.
00:31:08.053 [2024-04-24 05:26:45.213441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.213568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.213592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.053 qpair failed and we were unable to recover it.
00:31:08.053 05:26:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:31:08.053 [2024-04-24 05:26:45.213726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 05:26:45 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:31:08.053 [2024-04-24 05:26:45.213844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.213869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.053 qpair failed and we were unable to recover it.
00:31:08.053 05:26:45 -- common/autotest_common.sh@549 -- # xtrace_disable
00:31:08.053 [2024-04-24 05:26:45.214034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 05:26:45 -- common/autotest_common.sh@10 -- # set +x
00:31:08.053 [2024-04-24 05:26:45.214153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.214178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.053 qpair failed and we were unable to recover it.
00:31:08.053 [2024-04-24 05:26:45.214305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.214428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.214452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.053 qpair failed and we were unable to recover it.
00:31:08.053 [2024-04-24 05:26:45.214576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.214706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.214732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.053 qpair failed and we were unable to recover it.
00:31:08.053 [2024-04-24 05:26:45.214867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.214998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.215022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.053 qpair failed and we were unable to recover it.
00:31:08.053 [2024-04-24 05:26:45.215140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.215291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.215315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.053 qpair failed and we were unable to recover it.
00:31:08.053 [2024-04-24 05:26:45.215452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.215597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.215621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.053 qpair failed and we were unable to recover it.
00:31:08.053 [2024-04-24 05:26:45.215760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.215893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.215918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.053 qpair failed and we were unable to recover it.
00:31:08.053 [2024-04-24 05:26:45.216065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.216210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.216234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.053 qpair failed and we were unable to recover it.
00:31:08.053 [2024-04-24 05:26:45.216409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.216545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.216569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.053 qpair failed and we were unable to recover it.
00:31:08.053 [2024-04-24 05:26:45.216727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.216841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.216865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.053 qpair failed and we were unable to recover it.
00:31:08.053 [2024-04-24 05:26:45.216932] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:08.053 [2024-04-24 05:26:45.217018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.217160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.217184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.053 qpair failed and we were unable to recover it.
00:31:08.053 [2024-04-24 05:26:45.217310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.217439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.217464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.053 qpair failed and we were unable to recover it.
00:31:08.053 [2024-04-24 05:26:45.217582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.217735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.217760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.053 qpair failed and we were unable to recover it.
00:31:08.053 [2024-04-24 05:26:45.217902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.218024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.218049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.053 qpair failed and we were unable to recover it.
00:31:08.053 [2024-04-24 05:26:45.218171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.218295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.053 [2024-04-24 05:26:45.218319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.054 qpair failed and we were unable to recover it.
00:31:08.054 [2024-04-24 05:26:45.218441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.054 [2024-04-24 05:26:45.218558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.054 [2024-04-24 05:26:45.218582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.054 qpair failed and we were unable to recover it.
00:31:08.054 [2024-04-24 05:26:45.218721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.054 [2024-04-24 05:26:45.218842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.054 [2024-04-24 05:26:45.218866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.054 qpair failed and we were unable to recover it.
00:31:08.054 [2024-04-24 05:26:45.219018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.054 [2024-04-24 05:26:45.219136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.054 [2024-04-24 05:26:45.219160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.054 qpair failed and we were unable to recover it.
00:31:08.054 [2024-04-24 05:26:45.219286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.219428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.219452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-04-24 05:26:45.219587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.219773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.219799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-04-24 05:26:45.219923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.220043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.220068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-04-24 05:26:45.220216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.220364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.220388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 
00:31:08.054 [2024-04-24 05:26:45.220510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.220639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.220665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-04-24 05:26:45.220786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.220899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.220923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-04-24 05:26:45.221055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.221191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.221215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-04-24 05:26:45.221357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.221477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.221502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 
00:31:08.054 [2024-04-24 05:26:45.221620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.221746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.221771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-04-24 05:26:45.221890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.222041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.222065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-04-24 05:26:45.222192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.222315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.222340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-04-24 05:26:45.222464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.222590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.222614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 
00:31:08.054 [2024-04-24 05:26:45.222753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.222895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.222919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-04-24 05:26:45.223067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.223184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.223208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-04-24 05:26:45.223356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.223500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.223524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-04-24 05:26:45.223682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.223801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.223827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 
00:31:08.054 [2024-04-24 05:26:45.223958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.224081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.224106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-04-24 05:26:45.224257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.224379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.224404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-04-24 05:26:45.224524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.224675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.224700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-04-24 05:26:45.224824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.224955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.224980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 
00:31:08.054 [2024-04-24 05:26:45.225092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 05:26:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:08.054 [2024-04-24 05:26:45.225218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.225243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 05:26:45 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:08.054 [2024-04-24 05:26:45.225368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 05:26:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:08.054 05:26:45 -- common/autotest_common.sh@10 -- # set +x 00:31:08.054 [2024-04-24 05:26:45.225533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.225558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-04-24 05:26:45.225690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.225843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.225867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 
00:31:08.054 [2024-04-24 05:26:45.226021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.226144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.226169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-04-24 05:26:45.226290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.226409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-04-24 05:26:45.226434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-04-24 05:26:45.226556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-04-24 05:26:45.226713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-04-24 05:26:45.226738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.055 qpair failed and we were unable to recover it. 00:31:08.055 [2024-04-24 05:26:45.226863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-04-24 05:26:45.227032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-04-24 05:26:45.227057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.055 qpair failed and we were unable to recover it. 
00:31:08.055 [2024-04-24 05:26:45.227191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-04-24 05:26:45.227320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-04-24 05:26:45.227345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.055 qpair failed and we were unable to recover it. 00:31:08.055 [2024-04-24 05:26:45.227465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-04-24 05:26:45.227586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-04-24 05:26:45.227610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.055 qpair failed and we were unable to recover it. 00:31:08.055 [2024-04-24 05:26:45.227784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-04-24 05:26:45.227926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-04-24 05:26:45.227954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:08.055 qpair failed and we were unable to recover it. 00:31:08.055 [2024-04-24 05:26:45.228112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-04-24 05:26:45.228347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-04-24 05:26:45.228373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:08.055 qpair failed and we were unable to recover it. 
00:31:08.055 [2024-04-24 05:26:45.228495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-04-24 05:26:45.228650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-04-24 05:26:45.228676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:08.055 qpair failed and we were unable to recover it. 00:31:08.055 [2024-04-24 05:26:45.228845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-04-24 05:26:45.228992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-04-24 05:26:45.229016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:08.055 qpair failed and we were unable to recover it. 00:31:08.055 [2024-04-24 05:26:45.229166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-04-24 05:26:45.229291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-04-24 05:26:45.229317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:08.055 qpair failed and we were unable to recover it. 00:31:08.055 [2024-04-24 05:26:45.229450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-04-24 05:26:45.229569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-04-24 05:26:45.229594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:08.055 qpair failed and we were unable to recover it. 
00:31:08.055 [2024-04-24 05:26:45.229755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-04-24 05:26:45.229899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-04-24 05:26:45.229936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:08.055 qpair failed and we were unable to recover it. 00:31:08.055 [2024-04-24 05:26:45.230094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-04-24 05:26:45.230240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-04-24 05:26:45.230276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:08.055 qpair failed and we were unable to recover it. 00:31:08.055 [2024-04-24 05:26:45.230471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-04-24 05:26:45.230617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-04-24 05:26:45.230664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6d64000b90 with addr=10.0.0.2, port=4420 00:31:08.055 qpair failed and we were unable to recover it. 00:31:08.055 [2024-04-24 05:26:45.230813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-04-24 05:26:45.230936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-04-24 05:26:45.230962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.055 qpair failed and we were unable to recover it. 
00:31:08.055 [2024-04-24 05:26:45.231115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-04-24 05:26:45.231263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-04-24 05:26:45.231287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.055 qpair failed and we were unable to recover it. 00:31:08.055 [2024-04-24 05:26:45.231434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-04-24 05:26:45.231552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-04-24 05:26:45.231577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.055 qpair failed and we were unable to recover it. 00:31:08.055 [2024-04-24 05:26:45.231732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-04-24 05:26:45.231859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-04-24 05:26:45.231883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.055 qpair failed and we were unable to recover it. 00:31:08.055 [2024-04-24 05:26:45.232049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-04-24 05:26:45.232169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-04-24 05:26:45.232195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.055 qpair failed and we were unable to recover it. 
00:31:08.055 [2024-04-24 05:26:45.232313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-04-24 05:26:45.232430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-04-24 05:26:45.232455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.055 qpair failed and we were unable to recover it. 00:31:08.055 [2024-04-24 05:26:45.232570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-04-24 05:26:45.232715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-04-24 05:26:45.232740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.055 qpair failed and we were unable to recover it. 00:31:08.055 [2024-04-24 05:26:45.232865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-04-24 05:26:45.233027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-04-24 05:26:45.233052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.055 qpair failed and we were unable to recover it. 
00:31:08.055 05:26:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:08.055 [2024-04-24 05:26:45.233207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 05:26:45 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:08.055 [2024-04-24 05:26:45.233329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-04-24 05:26:45.233360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.055 qpair failed and we were unable to recover it. 00:31:08.055 05:26:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:08.055 [2024-04-24 05:26:45.233510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 05:26:45 -- common/autotest_common.sh@10 -- # set +x 00:31:08.055 [2024-04-24 05:26:45.233635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-04-24 05:26:45.233660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.055 qpair failed and we were unable to recover it. 00:31:08.055 [2024-04-24 05:26:45.233796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-04-24 05:26:45.233927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-04-24 05:26:45.233953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.055 qpair failed and we were unable to recover it. 
00:31:08.055 [2024-04-24 05:26:45.234101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.056 [2024-04-24 05:26:45.234219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.056 [2024-04-24 05:26:45.234245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.056 qpair failed and we were unable to recover it. 00:31:08.056 [2024-04-24 05:26:45.234394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.056 [2024-04-24 05:26:45.234515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.056 [2024-04-24 05:26:45.234539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.056 qpair failed and we were unable to recover it. 00:31:08.056 [2024-04-24 05:26:45.234679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.056 [2024-04-24 05:26:45.234810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.056 [2024-04-24 05:26:45.234834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.056 qpair failed and we were unable to recover it. 00:31:08.056 [2024-04-24 05:26:45.234958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.056 [2024-04-24 05:26:45.235082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.056 [2024-04-24 05:26:45.235107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.056 qpair failed and we were unable to recover it. 
00:31:08.056 [2024-04-24 05:26:45.235223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.056 [2024-04-24 05:26:45.235345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.056 [2024-04-24 05:26:45.235369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.056 qpair failed and we were unable to recover it. 00:31:08.056 [2024-04-24 05:26:45.235521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.056 [2024-04-24 05:26:45.235642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.056 [2024-04-24 05:26:45.235667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.056 qpair failed and we were unable to recover it. 00:31:08.056 [2024-04-24 05:26:45.235788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.056 [2024-04-24 05:26:45.235911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.056 [2024-04-24 05:26:45.235937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.056 qpair failed and we were unable to recover it. 00:31:08.056 [2024-04-24 05:26:45.236071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.056 [2024-04-24 05:26:45.236193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.056 [2024-04-24 05:26:45.236221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.056 qpair failed and we were unable to recover it. 
00:31:08.056 [2024-04-24 05:26:45.236347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.056 [2024-04-24 05:26:45.236498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.056 [2024-04-24 05:26:45.236522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.056 qpair failed and we were unable to recover it. 00:31:08.056 [2024-04-24 05:26:45.236643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.056 [2024-04-24 05:26:45.236778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.056 [2024-04-24 05:26:45.236803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.056 qpair failed and we were unable to recover it. 00:31:08.056 [2024-04-24 05:26:45.236929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.056 [2024-04-24 05:26:45.237058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.056 [2024-04-24 05:26:45.237083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.056 qpair failed and we were unable to recover it. 00:31:08.056 [2024-04-24 05:26:45.237224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.056 [2024-04-24 05:26:45.237358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.056 [2024-04-24 05:26:45.237383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420 00:31:08.056 qpair failed and we were unable to recover it. 
00:31:08.056 [2024-04-24 05:26:45.237503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.056 [2024-04-24 05:26:45.237621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.056 [2024-04-24 05:26:45.237652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.056 qpair failed and we were unable to recover it.
00:31:08.056 [2024-04-24 05:26:45.237778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.056 [2024-04-24 05:26:45.237898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.056 [2024-04-24 05:26:45.237923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.056 qpair failed and we were unable to recover it.
00:31:08.056 [2024-04-24 05:26:45.238046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.056 [2024-04-24 05:26:45.238162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.056 [2024-04-24 05:26:45.238188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.056 qpair failed and we were unable to recover it.
00:31:08.056 [2024-04-24 05:26:45.238312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.056 [2024-04-24 05:26:45.238438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.056 [2024-04-24 05:26:45.238463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.056 qpair failed and we were unable to recover it.
00:31:08.056 [2024-04-24 05:26:45.238580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.056 [2024-04-24 05:26:45.238704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.056 [2024-04-24 05:26:45.238730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.056 qpair failed and we were unable to recover it.
00:31:08.056 [2024-04-24 05:26:45.238880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.056 [2024-04-24 05:26:45.239004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.056 [2024-04-24 05:26:45.239030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.056 qpair failed and we were unable to recover it.
00:31:08.056 [2024-04-24 05:26:45.239159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.056 [2024-04-24 05:26:45.239295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.056 [2024-04-24 05:26:45.239320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.056 qpair failed and we were unable to recover it.
00:31:08.056 [2024-04-24 05:26:45.239442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.056 [2024-04-24 05:26:45.239560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.056 [2024-04-24 05:26:45.239584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.056 qpair failed and we were unable to recover it.
00:31:08.056 [2024-04-24 05:26:45.239707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.056 [2024-04-24 05:26:45.239827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.056 [2024-04-24 05:26:45.239851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.056 qpair failed and we were unable to recover it.
00:31:08.056 [2024-04-24 05:26:45.239971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.056 [2024-04-24 05:26:45.240088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.056 [2024-04-24 05:26:45.240113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.056 qpair failed and we were unable to recover it.
00:31:08.056 [2024-04-24 05:26:45.240246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.056 [2024-04-24 05:26:45.240391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.056 [2024-04-24 05:26:45.240416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.056 qpair failed and we were unable to recover it.
00:31:08.056 [2024-04-24 05:26:45.240541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.056 [2024-04-24 05:26:45.240664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.056 [2024-04-24 05:26:45.240690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.056 qpair failed and we were unable to recover it.
00:31:08.056 [2024-04-24 05:26:45.240811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.056 [2024-04-24 05:26:45.240929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.056 [2024-04-24 05:26:45.240953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.056 qpair failed and we were unable to recover it.
00:31:08.056 [2024-04-24 05:26:45.241099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.056 [2024-04-24 05:26:45.241219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.056 05:26:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:31:08.056 [2024-04-24 05:26:45.241244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.056 qpair failed and we were unable to recover it.
00:31:08.056 05:26:45 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:08.056 [2024-04-24 05:26:45.241381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.056 05:26:45 -- common/autotest_common.sh@549 -- # xtrace_disable
00:31:08.056 [2024-04-24 05:26:45.241523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.056 [2024-04-24 05:26:45.241548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.056 qpair failed and we were unable to recover it.
00:31:08.056 05:26:45 -- common/autotest_common.sh@10 -- # set +x
00:31:08.056 [2024-04-24 05:26:45.241707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.056 [2024-04-24 05:26:45.241829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.056 [2024-04-24 05:26:45.241853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.056 qpair failed and we were unable to recover it.
00:31:08.056 [2024-04-24 05:26:45.241977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.056 [2024-04-24 05:26:45.242124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.057 [2024-04-24 05:26:45.242148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.057 qpair failed and we were unable to recover it.
00:31:08.057 [2024-04-24 05:26:45.242304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.057 [2024-04-24 05:26:45.242430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.057 [2024-04-24 05:26:45.242455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.057 qpair failed and we were unable to recover it.
00:31:08.057 [2024-04-24 05:26:45.242585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.057 [2024-04-24 05:26:45.242726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.057 [2024-04-24 05:26:45.242751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.057 qpair failed and we were unable to recover it.
00:31:08.057 [2024-04-24 05:26:45.242869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.057 [2024-04-24 05:26:45.242992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.057 [2024-04-24 05:26:45.243016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.057 qpair failed and we were unable to recover it.
00:31:08.057 [2024-04-24 05:26:45.243165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.057 [2024-04-24 05:26:45.243280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.057 [2024-04-24 05:26:45.243305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.057 qpair failed and we were unable to recover it.
00:31:08.057 [2024-04-24 05:26:45.243439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.057 [2024-04-24 05:26:45.243590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.057 [2024-04-24 05:26:45.243615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.057 qpair failed and we were unable to recover it.
00:31:08.057 [2024-04-24 05:26:45.243785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.057 [2024-04-24 05:26:45.243935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.057 [2024-04-24 05:26:45.243959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.057 qpair failed and we were unable to recover it.
00:31:08.057 [2024-04-24 05:26:45.244080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.057 [2024-04-24 05:26:45.244206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.057 [2024-04-24 05:26:45.244231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.057 qpair failed and we were unable to recover it.
00:31:08.057 [2024-04-24 05:26:45.244389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.057 [2024-04-24 05:26:45.244502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.057 [2024-04-24 05:26:45.244526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.057 qpair failed and we were unable to recover it.
00:31:08.057 [2024-04-24 05:26:45.244677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.057 [2024-04-24 05:26:45.244817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.057 [2024-04-24 05:26:45.244843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.057 qpair failed and we were unable to recover it.
00:31:08.057 [2024-04-24 05:26:45.244990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.057 [2024-04-24 05:26:45.245116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.057 [2024-04-24 05:26:45.245142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ebe40 with addr=10.0.0.2, port=4420
00:31:08.057 qpair failed and we were unable to recover it.
00:31:08.057 [2024-04-24 05:26:45.245179] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:08.057 [2024-04-24 05:26:45.247719] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:08.057 [2024-04-24 05:26:45.247883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:08.057 [2024-04-24 05:26:45.247920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:08.057 [2024-04-24 05:26:45.247936] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:08.057 [2024-04-24 05:26:45.247948] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:08.057 [2024-04-24 05:26:45.247980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:08.057 qpair failed and we were unable to recover it.
00:31:08.057 05:26:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:31:08.057 05:26:45 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:31:08.057 05:26:45 -- common/autotest_common.sh@549 -- # xtrace_disable
00:31:08.057 05:26:45 -- common/autotest_common.sh@10 -- # set +x
00:31:08.057 05:26:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:31:08.057 05:26:45 -- host/target_disconnect.sh@58 -- # wait 2021332
00:31:08.057 [2024-04-24 05:26:45.257496] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:08.057 [2024-04-24 05:26:45.257655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:08.057 [2024-04-24 05:26:45.257683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:08.057 [2024-04-24 05:26:45.257697] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:08.057 [2024-04-24 05:26:45.257709] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:08.057 [2024-04-24 05:26:45.257737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:08.057 qpair failed and we were unable to recover it.
00:31:08.057 [2024-04-24 05:26:45.267549] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:08.057 [2024-04-24 05:26:45.267726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:08.057 [2024-04-24 05:26:45.267752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:08.057 [2024-04-24 05:26:45.267767] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:08.057 [2024-04-24 05:26:45.267779] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:08.057 [2024-04-24 05:26:45.267807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:08.057 qpair failed and we were unable to recover it.
00:31:08.317 [2024-04-24 05:26:45.277552] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:08.317 [2024-04-24 05:26:45.277730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:08.317 [2024-04-24 05:26:45.277759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:08.317 [2024-04-24 05:26:45.277774] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:08.317 [2024-04-24 05:26:45.277786] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:08.317 [2024-04-24 05:26:45.277814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:08.317 qpair failed and we were unable to recover it.
00:31:08.317 [2024-04-24 05:26:45.287528] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:08.317 [2024-04-24 05:26:45.287662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:08.317 [2024-04-24 05:26:45.287694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:08.317 [2024-04-24 05:26:45.287708] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:08.317 [2024-04-24 05:26:45.287720] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:08.317 [2024-04-24 05:26:45.287748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:08.317 qpair failed and we were unable to recover it.
00:31:08.317 [2024-04-24 05:26:45.297551] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:08.317 [2024-04-24 05:26:45.297726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:08.317 [2024-04-24 05:26:45.297752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:08.317 [2024-04-24 05:26:45.297767] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:08.317 [2024-04-24 05:26:45.297779] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:08.317 [2024-04-24 05:26:45.297806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:08.317 qpair failed and we were unable to recover it.
00:31:08.317 [2024-04-24 05:26:45.307559] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:08.317 [2024-04-24 05:26:45.307697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:08.317 [2024-04-24 05:26:45.307724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:08.317 [2024-04-24 05:26:45.307738] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:08.317 [2024-04-24 05:26:45.307750] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:08.317 [2024-04-24 05:26:45.307778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:08.317 qpair failed and we were unable to recover it.
00:31:08.317 [2024-04-24 05:26:45.317583] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:08.317 [2024-04-24 05:26:45.317719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:08.317 [2024-04-24 05:26:45.317745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:08.317 [2024-04-24 05:26:45.317759] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:08.317 [2024-04-24 05:26:45.317776] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:08.317 [2024-04-24 05:26:45.317804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:08.317 qpair failed and we were unable to recover it.
00:31:08.317 [2024-04-24 05:26:45.327622] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:08.317 [2024-04-24 05:26:45.327801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:08.317 [2024-04-24 05:26:45.327826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:08.317 [2024-04-24 05:26:45.327841] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:08.317 [2024-04-24 05:26:45.327852] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:08.317 [2024-04-24 05:26:45.327880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:08.317 qpair failed and we were unable to recover it.
00:31:08.317 [2024-04-24 05:26:45.337620] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:08.317 [2024-04-24 05:26:45.337755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:08.317 [2024-04-24 05:26:45.337782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:08.317 [2024-04-24 05:26:45.337796] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:08.317 [2024-04-24 05:26:45.337808] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:08.317 [2024-04-24 05:26:45.337836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:08.317 qpair failed and we were unable to recover it.
00:31:08.317 [2024-04-24 05:26:45.347683] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:08.317 [2024-04-24 05:26:45.347802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:08.317 [2024-04-24 05:26:45.347827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:08.317 [2024-04-24 05:26:45.347842] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:08.317 [2024-04-24 05:26:45.347854] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:08.317 [2024-04-24 05:26:45.347881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:08.317 qpair failed and we were unable to recover it.
00:31:08.317 [2024-04-24 05:26:45.357695] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:08.317 [2024-04-24 05:26:45.357824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:08.317 [2024-04-24 05:26:45.357850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:08.317 [2024-04-24 05:26:45.357864] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:08.317 [2024-04-24 05:26:45.357876] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:08.317 [2024-04-24 05:26:45.357903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:08.317 qpair failed and we were unable to recover it.
00:31:08.318 [2024-04-24 05:26:45.367765] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:08.318 [2024-04-24 05:26:45.367891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:08.318 [2024-04-24 05:26:45.367918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:08.318 [2024-04-24 05:26:45.367932] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:08.318 [2024-04-24 05:26:45.367944] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:08.318 [2024-04-24 05:26:45.367971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:08.318 qpair failed and we were unable to recover it.
00:31:08.318 [2024-04-24 05:26:45.377801] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:08.318 [2024-04-24 05:26:45.377944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:08.318 [2024-04-24 05:26:45.377970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:08.318 [2024-04-24 05:26:45.377984] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:08.318 [2024-04-24 05:26:45.377996] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:08.318 [2024-04-24 05:26:45.378022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:08.318 qpair failed and we were unable to recover it.
00:31:08.318 [2024-04-24 05:26:45.387898] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:08.318 [2024-04-24 05:26:45.388026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:08.318 [2024-04-24 05:26:45.388053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:08.318 [2024-04-24 05:26:45.388067] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:08.318 [2024-04-24 05:26:45.388079] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:08.318 [2024-04-24 05:26:45.388106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:08.318 qpair failed and we were unable to recover it.
00:31:08.318 [2024-04-24 05:26:45.397888] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:08.318 [2024-04-24 05:26:45.398018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:08.318 [2024-04-24 05:26:45.398044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:08.318 [2024-04-24 05:26:45.398058] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:08.318 [2024-04-24 05:26:45.398070] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:08.318 [2024-04-24 05:26:45.398097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:08.318 qpair failed and we were unable to recover it.
00:31:08.318 [2024-04-24 05:26:45.407853] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:08.318 [2024-04-24 05:26:45.408003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:08.318 [2024-04-24 05:26:45.408028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:08.318 [2024-04-24 05:26:45.408049] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:08.318 [2024-04-24 05:26:45.408062] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:08.318 [2024-04-24 05:26:45.408089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:08.318 qpair failed and we were unable to recover it.
00:31:08.318 [2024-04-24 05:26:45.417903] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:08.318 [2024-04-24 05:26:45.418029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:08.318 [2024-04-24 05:26:45.418055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:08.318 [2024-04-24 05:26:45.418069] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:08.318 [2024-04-24 05:26:45.418082] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:08.318 [2024-04-24 05:26:45.418111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:08.318 qpair failed and we were unable to recover it.
00:31:08.318 [2024-04-24 05:26:45.428008] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:08.318 [2024-04-24 05:26:45.428149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:08.318 [2024-04-24 05:26:45.428174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:08.318 [2024-04-24 05:26:45.428189] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:08.318 [2024-04-24 05:26:45.428201] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:08.318 [2024-04-24 05:26:45.428228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:08.318 qpair failed and we were unable to recover it.
00:31:08.318 [2024-04-24 05:26:45.438007] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:08.318 [2024-04-24 05:26:45.438135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:08.318 [2024-04-24 05:26:45.438161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:08.318 [2024-04-24 05:26:45.438176] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:08.318 [2024-04-24 05:26:45.438188] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:08.318 [2024-04-24 05:26:45.438215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:08.318 qpair failed and we were unable to recover it.
00:31:08.318 [2024-04-24 05:26:45.448016] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:08.318 [2024-04-24 05:26:45.448144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:08.318 [2024-04-24 05:26:45.448171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:08.318 [2024-04-24 05:26:45.448185] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:08.318 [2024-04-24 05:26:45.448197] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:08.318 [2024-04-24 05:26:45.448224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:08.318 qpair failed and we were unable to recover it.
00:31:08.318 [2024-04-24 05:26:45.457988] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:08.318 [2024-04-24 05:26:45.458116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:08.318 [2024-04-24 05:26:45.458142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:08.318 [2024-04-24 05:26:45.458157] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:08.318 [2024-04-24 05:26:45.458168] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:08.318 [2024-04-24 05:26:45.458195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:08.318 qpair failed and we were unable to recover it.
00:31:08.318 [2024-04-24 05:26:45.468112] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:08.318 [2024-04-24 05:26:45.468242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:08.318 [2024-04-24 05:26:45.468268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:08.318 [2024-04-24 05:26:45.468282] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:08.318 [2024-04-24 05:26:45.468294] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:08.318 [2024-04-24 05:26:45.468321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:08.318 qpair failed and we were unable to recover it.
00:31:08.318 [2024-04-24 05:26:45.478044] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:08.318 [2024-04-24 05:26:45.478193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:08.318 [2024-04-24 05:26:45.478220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:08.318 [2024-04-24 05:26:45.478234] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:08.318 [2024-04-24 05:26:45.478250] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:08.318 [2024-04-24 05:26:45.478279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:08.318 qpair failed and we were unable to recover it.
00:31:08.318 [2024-04-24 05:26:45.488084] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:08.318 [2024-04-24 05:26:45.488213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:08.318 [2024-04-24 05:26:45.488239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:08.318 [2024-04-24 05:26:45.488254] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:08.318 [2024-04-24 05:26:45.488266] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:08.318 [2024-04-24 05:26:45.488293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:08.318 qpair failed and we were unable to recover it.
00:31:08.318 [2024-04-24 05:26:45.498129] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.318 [2024-04-24 05:26:45.498251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.318 [2024-04-24 05:26:45.498277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.319 [2024-04-24 05:26:45.498297] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.319 [2024-04-24 05:26:45.498310] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.319 [2024-04-24 05:26:45.498337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.319 qpair failed and we were unable to recover it. 
00:31:08.319 [2024-04-24 05:26:45.508121] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.319 [2024-04-24 05:26:45.508242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.319 [2024-04-24 05:26:45.508267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.319 [2024-04-24 05:26:45.508281] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.319 [2024-04-24 05:26:45.508293] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.319 [2024-04-24 05:26:45.508320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.319 qpair failed and we were unable to recover it. 
00:31:08.319 [2024-04-24 05:26:45.518226] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.319 [2024-04-24 05:26:45.518350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.319 [2024-04-24 05:26:45.518375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.319 [2024-04-24 05:26:45.518389] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.319 [2024-04-24 05:26:45.518401] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.319 [2024-04-24 05:26:45.518428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.319 qpair failed and we were unable to recover it. 
00:31:08.319 [2024-04-24 05:26:45.528307] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.319 [2024-04-24 05:26:45.528438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.319 [2024-04-24 05:26:45.528464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.319 [2024-04-24 05:26:45.528478] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.319 [2024-04-24 05:26:45.528490] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.319 [2024-04-24 05:26:45.528517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.319 qpair failed and we were unable to recover it. 
00:31:08.319 [2024-04-24 05:26:45.538218] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.319 [2024-04-24 05:26:45.538339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.319 [2024-04-24 05:26:45.538364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.319 [2024-04-24 05:26:45.538378] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.319 [2024-04-24 05:26:45.538390] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.319 [2024-04-24 05:26:45.538417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.319 qpair failed and we were unable to recover it. 
00:31:08.319 [2024-04-24 05:26:45.548254] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.319 [2024-04-24 05:26:45.548379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.319 [2024-04-24 05:26:45.548405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.319 [2024-04-24 05:26:45.548419] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.319 [2024-04-24 05:26:45.548431] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.319 [2024-04-24 05:26:45.548458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.319 qpair failed and we were unable to recover it. 
00:31:08.319 [2024-04-24 05:26:45.558273] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.319 [2024-04-24 05:26:45.558436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.319 [2024-04-24 05:26:45.558461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.319 [2024-04-24 05:26:45.558475] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.319 [2024-04-24 05:26:45.558487] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.319 [2024-04-24 05:26:45.558514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.319 qpair failed and we were unable to recover it. 
00:31:08.319 [2024-04-24 05:26:45.568337] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.319 [2024-04-24 05:26:45.568506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.319 [2024-04-24 05:26:45.568531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.319 [2024-04-24 05:26:45.568545] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.319 [2024-04-24 05:26:45.568557] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.319 [2024-04-24 05:26:45.568584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.319 qpair failed and we were unable to recover it. 
00:31:08.319 [2024-04-24 05:26:45.578363] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.319 [2024-04-24 05:26:45.578528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.319 [2024-04-24 05:26:45.578553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.319 [2024-04-24 05:26:45.578567] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.319 [2024-04-24 05:26:45.578580] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.319 [2024-04-24 05:26:45.578606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.319 qpair failed and we were unable to recover it. 
00:31:08.579 [2024-04-24 05:26:45.588454] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.579 [2024-04-24 05:26:45.588577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.579 [2024-04-24 05:26:45.588605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.579 [2024-04-24 05:26:45.588625] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.579 [2024-04-24 05:26:45.588652] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.579 [2024-04-24 05:26:45.588681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.579 qpair failed and we were unable to recover it. 
00:31:08.579 [2024-04-24 05:26:45.598407] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.579 [2024-04-24 05:26:45.598539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.579 [2024-04-24 05:26:45.598567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.579 [2024-04-24 05:26:45.598582] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.579 [2024-04-24 05:26:45.598594] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.579 [2024-04-24 05:26:45.598622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.579 qpair failed and we were unable to recover it. 
00:31:08.579 [2024-04-24 05:26:45.608398] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.579 [2024-04-24 05:26:45.608530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.579 [2024-04-24 05:26:45.608557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.579 [2024-04-24 05:26:45.608571] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.579 [2024-04-24 05:26:45.608583] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.579 [2024-04-24 05:26:45.608610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.579 qpair failed and we were unable to recover it. 
00:31:08.579 [2024-04-24 05:26:45.618419] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.579 [2024-04-24 05:26:45.618565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.579 [2024-04-24 05:26:45.618591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.579 [2024-04-24 05:26:45.618605] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.579 [2024-04-24 05:26:45.618616] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.579 [2024-04-24 05:26:45.618651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.579 qpair failed and we were unable to recover it. 
00:31:08.579 [2024-04-24 05:26:45.628535] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.579 [2024-04-24 05:26:45.628661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.579 [2024-04-24 05:26:45.628687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.579 [2024-04-24 05:26:45.628701] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.579 [2024-04-24 05:26:45.628713] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.579 [2024-04-24 05:26:45.628740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.579 qpair failed and we were unable to recover it. 
00:31:08.579 [2024-04-24 05:26:45.638515] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.579 [2024-04-24 05:26:45.638658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.579 [2024-04-24 05:26:45.638685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.579 [2024-04-24 05:26:45.638699] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.579 [2024-04-24 05:26:45.638711] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.579 [2024-04-24 05:26:45.638739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.579 qpair failed and we were unable to recover it. 
00:31:08.579 [2024-04-24 05:26:45.648613] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.579 [2024-04-24 05:26:45.648754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.579 [2024-04-24 05:26:45.648781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.579 [2024-04-24 05:26:45.648795] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.579 [2024-04-24 05:26:45.648807] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.579 [2024-04-24 05:26:45.648834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.579 qpair failed and we were unable to recover it. 
00:31:08.579 [2024-04-24 05:26:45.658658] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.579 [2024-04-24 05:26:45.658783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.579 [2024-04-24 05:26:45.658808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.579 [2024-04-24 05:26:45.658822] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.579 [2024-04-24 05:26:45.658833] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.579 [2024-04-24 05:26:45.658862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.579 qpair failed and we were unable to recover it. 
00:31:08.579 [2024-04-24 05:26:45.668568] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.579 [2024-04-24 05:26:45.668700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.579 [2024-04-24 05:26:45.668726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.579 [2024-04-24 05:26:45.668741] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.579 [2024-04-24 05:26:45.668753] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.579 [2024-04-24 05:26:45.668780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.579 qpair failed and we were unable to recover it. 
00:31:08.580 [2024-04-24 05:26:45.678601] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.580 [2024-04-24 05:26:45.678760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.580 [2024-04-24 05:26:45.678790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.580 [2024-04-24 05:26:45.678805] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.580 [2024-04-24 05:26:45.678817] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.580 [2024-04-24 05:26:45.678844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.580 qpair failed and we were unable to recover it. 
00:31:08.580 [2024-04-24 05:26:45.688645] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.580 [2024-04-24 05:26:45.688773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.580 [2024-04-24 05:26:45.688798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.580 [2024-04-24 05:26:45.688812] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.580 [2024-04-24 05:26:45.688824] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.580 [2024-04-24 05:26:45.688852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.580 qpair failed and we were unable to recover it. 
00:31:08.580 [2024-04-24 05:26:45.698688] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.580 [2024-04-24 05:26:45.698811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.580 [2024-04-24 05:26:45.698836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.580 [2024-04-24 05:26:45.698851] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.580 [2024-04-24 05:26:45.698862] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.580 [2024-04-24 05:26:45.698889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.580 qpair failed and we were unable to recover it. 
00:31:08.580 [2024-04-24 05:26:45.708689] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.580 [2024-04-24 05:26:45.708828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.580 [2024-04-24 05:26:45.708853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.580 [2024-04-24 05:26:45.708867] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.580 [2024-04-24 05:26:45.708879] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.580 [2024-04-24 05:26:45.708907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.580 qpair failed and we were unable to recover it. 
00:31:08.580 [2024-04-24 05:26:45.718738] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.580 [2024-04-24 05:26:45.718873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.580 [2024-04-24 05:26:45.718898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.580 [2024-04-24 05:26:45.718912] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.580 [2024-04-24 05:26:45.718924] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.580 [2024-04-24 05:26:45.718951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.580 qpair failed and we were unable to recover it. 
00:31:08.580 [2024-04-24 05:26:45.728773] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.580 [2024-04-24 05:26:45.728897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.580 [2024-04-24 05:26:45.728923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.580 [2024-04-24 05:26:45.728937] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.580 [2024-04-24 05:26:45.728949] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.580 [2024-04-24 05:26:45.728975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.580 qpair failed and we were unable to recover it. 
00:31:08.580 [2024-04-24 05:26:45.738767] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.580 [2024-04-24 05:26:45.738893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.580 [2024-04-24 05:26:45.738919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.580 [2024-04-24 05:26:45.738933] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.580 [2024-04-24 05:26:45.738944] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.580 [2024-04-24 05:26:45.738971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.580 qpair failed and we were unable to recover it. 
00:31:08.580 [2024-04-24 05:26:45.748796] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.580 [2024-04-24 05:26:45.748922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.580 [2024-04-24 05:26:45.748947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.580 [2024-04-24 05:26:45.748961] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.580 [2024-04-24 05:26:45.748973] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.580 [2024-04-24 05:26:45.749000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.580 qpair failed and we were unable to recover it. 
00:31:08.580 [2024-04-24 05:26:45.758927] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.580 [2024-04-24 05:26:45.759056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.580 [2024-04-24 05:26:45.759081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.580 [2024-04-24 05:26:45.759095] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.580 [2024-04-24 05:26:45.759107] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.580 [2024-04-24 05:26:45.759134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.580 qpair failed and we were unable to recover it. 
00:31:08.580 [2024-04-24 05:26:45.768858] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.580 [2024-04-24 05:26:45.768984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.580 [2024-04-24 05:26:45.769014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.580 [2024-04-24 05:26:45.769029] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.580 [2024-04-24 05:26:45.769041] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.580 [2024-04-24 05:26:45.769068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.580 qpair failed and we were unable to recover it. 
00:31:08.580 [2024-04-24 05:26:45.778886] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.580 [2024-04-24 05:26:45.779008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.580 [2024-04-24 05:26:45.779033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.580 [2024-04-24 05:26:45.779047] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.580 [2024-04-24 05:26:45.779059] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.580 [2024-04-24 05:26:45.779086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.580 qpair failed and we were unable to recover it. 
00:31:08.580 [2024-04-24 05:26:45.788914] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.580 [2024-04-24 05:26:45.789044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.580 [2024-04-24 05:26:45.789069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.580 [2024-04-24 05:26:45.789083] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.580 [2024-04-24 05:26:45.789095] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.580 [2024-04-24 05:26:45.789123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.580 qpair failed and we were unable to recover it. 
00:31:08.580 [2024-04-24 05:26:45.798948] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.580 [2024-04-24 05:26:45.799089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.580 [2024-04-24 05:26:45.799115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.580 [2024-04-24 05:26:45.799129] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.580 [2024-04-24 05:26:45.799141] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.580 [2024-04-24 05:26:45.799170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.580 qpair failed and we were unable to recover it. 
00:31:08.580 [2024-04-24 05:26:45.808969] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.580 [2024-04-24 05:26:45.809111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.580 [2024-04-24 05:26:45.809136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.580 [2024-04-24 05:26:45.809151] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.581 [2024-04-24 05:26:45.809162] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.581 [2024-04-24 05:26:45.809195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.581 qpair failed and we were unable to recover it. 
00:31:08.581 [2024-04-24 05:26:45.818990] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.581 [2024-04-24 05:26:45.819112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.581 [2024-04-24 05:26:45.819137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.581 [2024-04-24 05:26:45.819151] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.581 [2024-04-24 05:26:45.819163] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.581 [2024-04-24 05:26:45.819190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.581 qpair failed and we were unable to recover it. 
00:31:08.581 [2024-04-24 05:26:45.829060] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.581 [2024-04-24 05:26:45.829190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.581 [2024-04-24 05:26:45.829217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.581 [2024-04-24 05:26:45.829235] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.581 [2024-04-24 05:26:45.829247] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.581 [2024-04-24 05:26:45.829276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.581 qpair failed and we were unable to recover it. 
00:31:08.581 [2024-04-24 05:26:45.839058] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.581 [2024-04-24 05:26:45.839191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.581 [2024-04-24 05:26:45.839215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.581 [2024-04-24 05:26:45.839229] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.581 [2024-04-24 05:26:45.839241] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.581 [2024-04-24 05:26:45.839269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.581 qpair failed and we were unable to recover it. 
00:31:08.840 [2024-04-24 05:26:45.849108] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.840 [2024-04-24 05:26:45.849243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.840 [2024-04-24 05:26:45.849271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.840 [2024-04-24 05:26:45.849286] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.840 [2024-04-24 05:26:45.849298] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.840 [2024-04-24 05:26:45.849327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.840 qpair failed and we were unable to recover it. 
00:31:08.840 [2024-04-24 05:26:45.859109] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.840 [2024-04-24 05:26:45.859238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.840 [2024-04-24 05:26:45.859270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.840 [2024-04-24 05:26:45.859285] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.840 [2024-04-24 05:26:45.859297] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.840 [2024-04-24 05:26:45.859326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.840 qpair failed and we were unable to recover it. 
00:31:08.840 [2024-04-24 05:26:45.869190] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.840 [2024-04-24 05:26:45.869314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.840 [2024-04-24 05:26:45.869339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.840 [2024-04-24 05:26:45.869353] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.840 [2024-04-24 05:26:45.869365] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.840 [2024-04-24 05:26:45.869392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.840 qpair failed and we were unable to recover it. 
00:31:08.840 [2024-04-24 05:26:45.879172] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.840 [2024-04-24 05:26:45.879300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.840 [2024-04-24 05:26:45.879325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.840 [2024-04-24 05:26:45.879339] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.840 [2024-04-24 05:26:45.879352] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.840 [2024-04-24 05:26:45.879378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.840 qpair failed and we were unable to recover it. 
00:31:08.840 [2024-04-24 05:26:45.889196] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.840 [2024-04-24 05:26:45.889339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.840 [2024-04-24 05:26:45.889363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.840 [2024-04-24 05:26:45.889377] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.840 [2024-04-24 05:26:45.889389] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.840 [2024-04-24 05:26:45.889417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.840 qpair failed and we were unable to recover it. 
00:31:08.840 [2024-04-24 05:26:45.899254] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.840 [2024-04-24 05:26:45.899377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.840 [2024-04-24 05:26:45.899402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.840 [2024-04-24 05:26:45.899416] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.840 [2024-04-24 05:26:45.899428] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.840 [2024-04-24 05:26:45.899459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.840 qpair failed and we were unable to recover it. 
00:31:08.840 [2024-04-24 05:26:45.909268] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.840 [2024-04-24 05:26:45.909405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.840 [2024-04-24 05:26:45.909429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.840 [2024-04-24 05:26:45.909444] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.840 [2024-04-24 05:26:45.909457] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.840 [2024-04-24 05:26:45.909484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.840 qpair failed and we were unable to recover it. 
00:31:08.840 [2024-04-24 05:26:45.919433] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.840 [2024-04-24 05:26:45.919568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.840 [2024-04-24 05:26:45.919592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.840 [2024-04-24 05:26:45.919605] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.840 [2024-04-24 05:26:45.919618] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.840 [2024-04-24 05:26:45.919653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.840 qpair failed and we were unable to recover it. 
00:31:08.840 [2024-04-24 05:26:45.929348] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.840 [2024-04-24 05:26:45.929485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.840 [2024-04-24 05:26:45.929513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.840 [2024-04-24 05:26:45.929528] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.840 [2024-04-24 05:26:45.929541] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.840 [2024-04-24 05:26:45.929570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.840 qpair failed and we were unable to recover it. 
00:31:08.840 [2024-04-24 05:26:45.939339] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.840 [2024-04-24 05:26:45.939471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.840 [2024-04-24 05:26:45.939496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.840 [2024-04-24 05:26:45.939510] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.840 [2024-04-24 05:26:45.939522] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.840 [2024-04-24 05:26:45.939550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.840 qpair failed and we were unable to recover it. 
00:31:08.840 [2024-04-24 05:26:45.949369] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.840 [2024-04-24 05:26:45.949545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.840 [2024-04-24 05:26:45.949577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.841 [2024-04-24 05:26:45.949593] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.841 [2024-04-24 05:26:45.949605] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.841 [2024-04-24 05:26:45.949642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.841 qpair failed and we were unable to recover it. 
00:31:08.841 [2024-04-24 05:26:45.959524] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.841 [2024-04-24 05:26:45.959674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.841 [2024-04-24 05:26:45.959700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.841 [2024-04-24 05:26:45.959715] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.841 [2024-04-24 05:26:45.959728] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.841 [2024-04-24 05:26:45.959756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.841 qpair failed and we were unable to recover it. 
00:31:08.841 [2024-04-24 05:26:45.969444] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.841 [2024-04-24 05:26:45.969571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.841 [2024-04-24 05:26:45.969596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.841 [2024-04-24 05:26:45.969611] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.841 [2024-04-24 05:26:45.969623] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.841 [2024-04-24 05:26:45.969662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.841 qpair failed and we were unable to recover it. 
00:31:08.841 [2024-04-24 05:26:45.979496] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.841 [2024-04-24 05:26:45.979621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.841 [2024-04-24 05:26:45.979655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.841 [2024-04-24 05:26:45.979669] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.841 [2024-04-24 05:26:45.979682] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.841 [2024-04-24 05:26:45.979710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.841 qpair failed and we were unable to recover it. 
00:31:08.841 [2024-04-24 05:26:45.989488] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.841 [2024-04-24 05:26:45.989610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.841 [2024-04-24 05:26:45.989642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.841 [2024-04-24 05:26:45.989658] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.841 [2024-04-24 05:26:45.989679] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.841 [2024-04-24 05:26:45.989708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.841 qpair failed and we were unable to recover it. 
00:31:08.841 [2024-04-24 05:26:45.999551] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.841 [2024-04-24 05:26:45.999719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.841 [2024-04-24 05:26:45.999745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.841 [2024-04-24 05:26:45.999760] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.841 [2024-04-24 05:26:45.999773] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.841 [2024-04-24 05:26:45.999801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.841 qpair failed and we were unable to recover it. 
00:31:08.841 [2024-04-24 05:26:46.009554] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.841 [2024-04-24 05:26:46.009721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.841 [2024-04-24 05:26:46.009748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.841 [2024-04-24 05:26:46.009763] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.841 [2024-04-24 05:26:46.009776] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.841 [2024-04-24 05:26:46.009804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.841 qpair failed and we were unable to recover it. 
00:31:08.841 [2024-04-24 05:26:46.019598] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.841 [2024-04-24 05:26:46.019754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.841 [2024-04-24 05:26:46.019780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.841 [2024-04-24 05:26:46.019795] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.841 [2024-04-24 05:26:46.019808] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.841 [2024-04-24 05:26:46.019836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.841 qpair failed and we were unable to recover it. 
00:31:08.841 [2024-04-24 05:26:46.029586] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.841 [2024-04-24 05:26:46.029710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.841 [2024-04-24 05:26:46.029735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.841 [2024-04-24 05:26:46.029749] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.841 [2024-04-24 05:26:46.029761] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.841 [2024-04-24 05:26:46.029788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.841 qpair failed and we were unable to recover it. 
00:31:08.841 [2024-04-24 05:26:46.039657] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.841 [2024-04-24 05:26:46.039802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.841 [2024-04-24 05:26:46.039829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.841 [2024-04-24 05:26:46.039844] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.841 [2024-04-24 05:26:46.039856] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.841 [2024-04-24 05:26:46.039884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.841 qpair failed and we were unable to recover it. 
00:31:08.841 [2024-04-24 05:26:46.049708] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.841 [2024-04-24 05:26:46.049863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.841 [2024-04-24 05:26:46.049897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.841 [2024-04-24 05:26:46.049912] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.841 [2024-04-24 05:26:46.049940] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.841 [2024-04-24 05:26:46.049968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.841 qpair failed and we were unable to recover it. 
00:31:08.841 [2024-04-24 05:26:46.059978] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.841 [2024-04-24 05:26:46.060121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.841 [2024-04-24 05:26:46.060147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.841 [2024-04-24 05:26:46.060162] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.841 [2024-04-24 05:26:46.060175] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.841 [2024-04-24 05:26:46.060203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.841 qpair failed and we were unable to recover it. 
00:31:08.841 [2024-04-24 05:26:46.069782] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.841 [2024-04-24 05:26:46.069932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.841 [2024-04-24 05:26:46.069959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.841 [2024-04-24 05:26:46.069974] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.841 [2024-04-24 05:26:46.069987] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.841 [2024-04-24 05:26:46.070016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.841 qpair failed and we were unable to recover it. 
00:31:08.841 [2024-04-24 05:26:46.079803] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.841 [2024-04-24 05:26:46.079936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.841 [2024-04-24 05:26:46.079962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.841 [2024-04-24 05:26:46.079977] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.841 [2024-04-24 05:26:46.079996] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.841 [2024-04-24 05:26:46.080024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.841 qpair failed and we were unable to recover it. 
00:31:08.841 [2024-04-24 05:26:46.089822] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.842 [2024-04-24 05:26:46.089946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.842 [2024-04-24 05:26:46.089972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.842 [2024-04-24 05:26:46.089987] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.842 [2024-04-24 05:26:46.090000] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.842 [2024-04-24 05:26:46.090028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.842 qpair failed and we were unable to recover it. 
00:31:08.842 [2024-04-24 05:26:46.099829] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:08.842 [2024-04-24 05:26:46.099962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:08.842 [2024-04-24 05:26:46.099988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:08.842 [2024-04-24 05:26:46.100003] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:08.842 [2024-04-24 05:26:46.100015] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:08.842 [2024-04-24 05:26:46.100043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:08.842 qpair failed and we were unable to recover it. 
00:31:09.100 [2024-04-24 05:26:46.109977] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.100 [2024-04-24 05:26:46.110129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.100 [2024-04-24 05:26:46.110158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.100 [2024-04-24 05:26:46.110174] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.100 [2024-04-24 05:26:46.110187] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.100 [2024-04-24 05:26:46.110216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.100 qpair failed and we were unable to recover it. 
00:31:09.100 [2024-04-24 05:26:46.119910] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.100 [2024-04-24 05:26:46.120081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.100 [2024-04-24 05:26:46.120108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.100 [2024-04-24 05:26:46.120122] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.100 [2024-04-24 05:26:46.120134] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.100 [2024-04-24 05:26:46.120163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.100 qpair failed and we were unable to recover it. 
00:31:09.100 [2024-04-24 05:26:46.129922] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.100 [2024-04-24 05:26:46.130062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.100 [2024-04-24 05:26:46.130089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.100 [2024-04-24 05:26:46.130107] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.100 [2024-04-24 05:26:46.130120] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.100 [2024-04-24 05:26:46.130149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.100 qpair failed and we were unable to recover it. 
00:31:09.100 [2024-04-24 05:26:46.139928] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.100 [2024-04-24 05:26:46.140093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.100 [2024-04-24 05:26:46.140120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.100 [2024-04-24 05:26:46.140135] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.100 [2024-04-24 05:26:46.140147] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.100 [2024-04-24 05:26:46.140175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.100 qpair failed and we were unable to recover it. 
00:31:09.100 [2024-04-24 05:26:46.150054] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.101 [2024-04-24 05:26:46.150185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.101 [2024-04-24 05:26:46.150211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.101 [2024-04-24 05:26:46.150226] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.101 [2024-04-24 05:26:46.150239] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.101 [2024-04-24 05:26:46.150267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.101 qpair failed and we were unable to recover it. 
00:31:09.101 [2024-04-24 05:26:46.160006] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.101 [2024-04-24 05:26:46.160137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.101 [2024-04-24 05:26:46.160163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.101 [2024-04-24 05:26:46.160179] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.101 [2024-04-24 05:26:46.160191] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.101 [2024-04-24 05:26:46.160219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.101 qpair failed and we were unable to recover it. 
00:31:09.101 [2024-04-24 05:26:46.170065] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.101 [2024-04-24 05:26:46.170213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.101 [2024-04-24 05:26:46.170240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.101 [2024-04-24 05:26:46.170255] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.101 [2024-04-24 05:26:46.170274] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.101 [2024-04-24 05:26:46.170317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.101 qpair failed and we were unable to recover it. 
00:31:09.101 [2024-04-24 05:26:46.180029] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.101 [2024-04-24 05:26:46.180162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.101 [2024-04-24 05:26:46.180188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.101 [2024-04-24 05:26:46.180203] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.101 [2024-04-24 05:26:46.180216] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.101 [2024-04-24 05:26:46.180244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.101 qpair failed and we were unable to recover it. 
00:31:09.101 [2024-04-24 05:26:46.190131] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.101 [2024-04-24 05:26:46.190263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.101 [2024-04-24 05:26:46.190288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.101 [2024-04-24 05:26:46.190302] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.101 [2024-04-24 05:26:46.190315] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.101 [2024-04-24 05:26:46.190344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.101 qpair failed and we were unable to recover it. 
00:31:09.101 [2024-04-24 05:26:46.200129] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.101 [2024-04-24 05:26:46.200286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.101 [2024-04-24 05:26:46.200313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.101 [2024-04-24 05:26:46.200328] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.101 [2024-04-24 05:26:46.200340] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.101 [2024-04-24 05:26:46.200369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.101 qpair failed and we were unable to recover it. 
00:31:09.101 [2024-04-24 05:26:46.210117] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.101 [2024-04-24 05:26:46.210295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.101 [2024-04-24 05:26:46.210319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.101 [2024-04-24 05:26:46.210334] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.101 [2024-04-24 05:26:46.210346] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.101 [2024-04-24 05:26:46.210374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.101 qpair failed and we were unable to recover it. 
00:31:09.101 [2024-04-24 05:26:46.220263] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.101 [2024-04-24 05:26:46.220397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.101 [2024-04-24 05:26:46.220421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.101 [2024-04-24 05:26:46.220437] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.101 [2024-04-24 05:26:46.220450] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.101 [2024-04-24 05:26:46.220477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.101 qpair failed and we were unable to recover it. 
00:31:09.101 [2024-04-24 05:26:46.230197] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.101 [2024-04-24 05:26:46.230320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.101 [2024-04-24 05:26:46.230345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.101 [2024-04-24 05:26:46.230360] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.101 [2024-04-24 05:26:46.230373] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.101 [2024-04-24 05:26:46.230400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.101 qpair failed and we were unable to recover it. 
00:31:09.101 [2024-04-24 05:26:46.240243] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.101 [2024-04-24 05:26:46.240372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.101 [2024-04-24 05:26:46.240397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.101 [2024-04-24 05:26:46.240412] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.101 [2024-04-24 05:26:46.240425] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.101 [2024-04-24 05:26:46.240452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.101 qpair failed and we were unable to recover it. 
00:31:09.101 [2024-04-24 05:26:46.250226] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.101 [2024-04-24 05:26:46.250356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.101 [2024-04-24 05:26:46.250381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.101 [2024-04-24 05:26:46.250396] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.101 [2024-04-24 05:26:46.250409] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.101 [2024-04-24 05:26:46.250437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.101 qpair failed and we were unable to recover it. 
00:31:09.101 [2024-04-24 05:26:46.260272] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.101 [2024-04-24 05:26:46.260418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.101 [2024-04-24 05:26:46.260443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.101 [2024-04-24 05:26:46.260463] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.101 [2024-04-24 05:26:46.260476] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.101 [2024-04-24 05:26:46.260503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.101 qpair failed and we were unable to recover it. 
00:31:09.101 [2024-04-24 05:26:46.270284] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.101 [2024-04-24 05:26:46.270408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.101 [2024-04-24 05:26:46.270433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.101 [2024-04-24 05:26:46.270448] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.101 [2024-04-24 05:26:46.270460] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.101 [2024-04-24 05:26:46.270488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.101 qpair failed and we were unable to recover it. 
00:31:09.101 [2024-04-24 05:26:46.280323] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.101 [2024-04-24 05:26:46.280451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.101 [2024-04-24 05:26:46.280475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.101 [2024-04-24 05:26:46.280489] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.101 [2024-04-24 05:26:46.280502] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.102 [2024-04-24 05:26:46.280530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.102 qpair failed and we were unable to recover it. 
00:31:09.102 [2024-04-24 05:26:46.290356] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.102 [2024-04-24 05:26:46.290485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.102 [2024-04-24 05:26:46.290511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.102 [2024-04-24 05:26:46.290525] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.102 [2024-04-24 05:26:46.290538] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.102 [2024-04-24 05:26:46.290565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.102 qpair failed and we were unable to recover it. 
00:31:09.102 [2024-04-24 05:26:46.300496] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.102 [2024-04-24 05:26:46.300643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.102 [2024-04-24 05:26:46.300669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.102 [2024-04-24 05:26:46.300683] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.102 [2024-04-24 05:26:46.300696] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.102 [2024-04-24 05:26:46.300724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.102 qpair failed and we were unable to recover it. 
00:31:09.102 [2024-04-24 05:26:46.310420] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.102 [2024-04-24 05:26:46.310548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.102 [2024-04-24 05:26:46.310572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.102 [2024-04-24 05:26:46.310587] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.102 [2024-04-24 05:26:46.310600] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.102 [2024-04-24 05:26:46.310638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.102 qpair failed and we were unable to recover it. 
00:31:09.102 [2024-04-24 05:26:46.320438] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.102 [2024-04-24 05:26:46.320573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.102 [2024-04-24 05:26:46.320599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.102 [2024-04-24 05:26:46.320613] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.102 [2024-04-24 05:26:46.320626] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.102 [2024-04-24 05:26:46.320663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.102 qpair failed and we were unable to recover it. 
00:31:09.102 [2024-04-24 05:26:46.330459] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.102 [2024-04-24 05:26:46.330592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.102 [2024-04-24 05:26:46.330618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.102 [2024-04-24 05:26:46.330639] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.102 [2024-04-24 05:26:46.330653] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.102 [2024-04-24 05:26:46.330682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.102 qpair failed and we were unable to recover it. 
00:31:09.102 [2024-04-24 05:26:46.340500] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.102 [2024-04-24 05:26:46.340664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.102 [2024-04-24 05:26:46.340690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.102 [2024-04-24 05:26:46.340709] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.102 [2024-04-24 05:26:46.340722] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.102 [2024-04-24 05:26:46.340751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.102 qpair failed and we were unable to recover it. 
00:31:09.102 [2024-04-24 05:26:46.350510] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.102 [2024-04-24 05:26:46.350637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.102 [2024-04-24 05:26:46.350663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.102 [2024-04-24 05:26:46.350683] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.102 [2024-04-24 05:26:46.350696] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.102 [2024-04-24 05:26:46.350724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.102 qpair failed and we were unable to recover it. 
00:31:09.102 [2024-04-24 05:26:46.360579] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.102 [2024-04-24 05:26:46.360712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.102 [2024-04-24 05:26:46.360739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.102 [2024-04-24 05:26:46.360754] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.102 [2024-04-24 05:26:46.360766] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.102 [2024-04-24 05:26:46.360795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.102 qpair failed and we were unable to recover it. 
00:31:09.361 [2024-04-24 05:26:46.370583] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.361 [2024-04-24 05:26:46.370737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.361 [2024-04-24 05:26:46.370766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.361 [2024-04-24 05:26:46.370782] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.361 [2024-04-24 05:26:46.370795] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.361 [2024-04-24 05:26:46.370824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.361 qpair failed and we were unable to recover it. 
00:31:09.361 [2024-04-24 05:26:46.380641] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.361 [2024-04-24 05:26:46.380801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.361 [2024-04-24 05:26:46.380830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.361 [2024-04-24 05:26:46.380846] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.361 [2024-04-24 05:26:46.380859] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.361 [2024-04-24 05:26:46.380890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.361 qpair failed and we were unable to recover it. 
00:31:09.361 [2024-04-24 05:26:46.390685] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.361 [2024-04-24 05:26:46.390831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.361 [2024-04-24 05:26:46.390858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.361 [2024-04-24 05:26:46.390873] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.361 [2024-04-24 05:26:46.390886] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.361 [2024-04-24 05:26:46.390914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.361 qpair failed and we were unable to recover it. 
00:31:09.361 [2024-04-24 05:26:46.400694] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.361 [2024-04-24 05:26:46.400826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.361 [2024-04-24 05:26:46.400852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.361 [2024-04-24 05:26:46.400867] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.361 [2024-04-24 05:26:46.400880] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.361 [2024-04-24 05:26:46.400908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.361 qpair failed and we were unable to recover it. 
00:31:09.361 [2024-04-24 05:26:46.410713] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.361 [2024-04-24 05:26:46.410842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.361 [2024-04-24 05:26:46.410868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.362 [2024-04-24 05:26:46.410883] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.362 [2024-04-24 05:26:46.410896] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.362 [2024-04-24 05:26:46.410924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.362 qpair failed and we were unable to recover it. 
00:31:09.362 [2024-04-24 05:26:46.420745] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.362 [2024-04-24 05:26:46.420875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.362 [2024-04-24 05:26:46.420901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.362 [2024-04-24 05:26:46.420916] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.362 [2024-04-24 05:26:46.420928] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:09.362 [2024-04-24 05:26:46.420957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.362 qpair failed and we were unable to recover it.
00:31:09.362 [2024-04-24 05:26:46.430759] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.362 [2024-04-24 05:26:46.430887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.362 [2024-04-24 05:26:46.430914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.362 [2024-04-24 05:26:46.430929] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.362 [2024-04-24 05:26:46.430941] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:09.362 [2024-04-24 05:26:46.430969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.362 qpair failed and we were unable to recover it.
00:31:09.362 [2024-04-24 05:26:46.440825] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.362 [2024-04-24 05:26:46.440968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.362 [2024-04-24 05:26:46.440998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.362 [2024-04-24 05:26:46.441014] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.362 [2024-04-24 05:26:46.441026] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:09.362 [2024-04-24 05:26:46.441054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.362 qpair failed and we were unable to recover it.
00:31:09.362 [2024-04-24 05:26:46.450838] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.362 [2024-04-24 05:26:46.450964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.362 [2024-04-24 05:26:46.450991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.362 [2024-04-24 05:26:46.451005] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.362 [2024-04-24 05:26:46.451018] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:09.362 [2024-04-24 05:26:46.451046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.362 qpair failed and we were unable to recover it.
00:31:09.362 [2024-04-24 05:26:46.460940] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.362 [2024-04-24 05:26:46.461064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.362 [2024-04-24 05:26:46.461089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.362 [2024-04-24 05:26:46.461104] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.362 [2024-04-24 05:26:46.461117] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:09.362 [2024-04-24 05:26:46.461144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.362 qpair failed and we were unable to recover it.
00:31:09.362 [2024-04-24 05:26:46.470931] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.362 [2024-04-24 05:26:46.471095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.362 [2024-04-24 05:26:46.471121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.362 [2024-04-24 05:26:46.471136] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.362 [2024-04-24 05:26:46.471148] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:09.362 [2024-04-24 05:26:46.471176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.362 qpair failed and we were unable to recover it.
00:31:09.362 [2024-04-24 05:26:46.481014] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.362 [2024-04-24 05:26:46.481140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.362 [2024-04-24 05:26:46.481166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.362 [2024-04-24 05:26:46.481181] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.362 [2024-04-24 05:26:46.481193] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:09.362 [2024-04-24 05:26:46.481221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.362 qpair failed and we were unable to recover it.
00:31:09.362 [2024-04-24 05:26:46.491011] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.362 [2024-04-24 05:26:46.491138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.362 [2024-04-24 05:26:46.491165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.362 [2024-04-24 05:26:46.491180] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.362 [2024-04-24 05:26:46.491193] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:09.362 [2024-04-24 05:26:46.491221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.362 qpair failed and we were unable to recover it.
00:31:09.362 [2024-04-24 05:26:46.500958] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.362 [2024-04-24 05:26:46.501078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.362 [2024-04-24 05:26:46.501105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.362 [2024-04-24 05:26:46.501119] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.362 [2024-04-24 05:26:46.501131] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:09.362 [2024-04-24 05:26:46.501159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.362 qpair failed and we were unable to recover it.
00:31:09.362 [2024-04-24 05:26:46.510981] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.362 [2024-04-24 05:26:46.511103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.362 [2024-04-24 05:26:46.511129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.362 [2024-04-24 05:26:46.511144] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.362 [2024-04-24 05:26:46.511156] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:09.362 [2024-04-24 05:26:46.511184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.362 qpair failed and we were unable to recover it.
00:31:09.362 [2024-04-24 05:26:46.521060] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.362 [2024-04-24 05:26:46.521195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.362 [2024-04-24 05:26:46.521222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.362 [2024-04-24 05:26:46.521237] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.362 [2024-04-24 05:26:46.521249] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:09.362 [2024-04-24 05:26:46.521279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.362 qpair failed and we were unable to recover it.
00:31:09.362 [2024-04-24 05:26:46.531087] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.362 [2024-04-24 05:26:46.531225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.362 [2024-04-24 05:26:46.531256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.362 [2024-04-24 05:26:46.531272] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.362 [2024-04-24 05:26:46.531284] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:09.362 [2024-04-24 05:26:46.531312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.362 qpair failed and we were unable to recover it.
00:31:09.362 [2024-04-24 05:26:46.541127] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.362 [2024-04-24 05:26:46.541252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.362 [2024-04-24 05:26:46.541277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.362 [2024-04-24 05:26:46.541292] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.362 [2024-04-24 05:26:46.541304] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:09.363 [2024-04-24 05:26:46.541332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.363 qpair failed and we were unable to recover it.
00:31:09.363 [2024-04-24 05:26:46.551144] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.363 [2024-04-24 05:26:46.551269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.363 [2024-04-24 05:26:46.551295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.363 [2024-04-24 05:26:46.551310] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.363 [2024-04-24 05:26:46.551323] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:09.363 [2024-04-24 05:26:46.551350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.363 qpair failed and we were unable to recover it.
00:31:09.363 [2024-04-24 05:26:46.561168] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.363 [2024-04-24 05:26:46.561292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.363 [2024-04-24 05:26:46.561317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.363 [2024-04-24 05:26:46.561332] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.363 [2024-04-24 05:26:46.561344] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:09.363 [2024-04-24 05:26:46.561371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.363 qpair failed and we were unable to recover it.
00:31:09.363 [2024-04-24 05:26:46.571261] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.363 [2024-04-24 05:26:46.571395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.363 [2024-04-24 05:26:46.571435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.363 [2024-04-24 05:26:46.571450] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.363 [2024-04-24 05:26:46.571462] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:09.363 [2024-04-24 05:26:46.571509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.363 qpair failed and we were unable to recover it.
00:31:09.363 [2024-04-24 05:26:46.581229] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.363 [2024-04-24 05:26:46.581421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.363 [2024-04-24 05:26:46.581447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.363 [2024-04-24 05:26:46.581462] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.363 [2024-04-24 05:26:46.581474] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:09.363 [2024-04-24 05:26:46.581501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.363 qpair failed and we were unable to recover it.
00:31:09.363 [2024-04-24 05:26:46.591249] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.363 [2024-04-24 05:26:46.591375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.363 [2024-04-24 05:26:46.591400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.363 [2024-04-24 05:26:46.591415] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.363 [2024-04-24 05:26:46.591427] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:09.363 [2024-04-24 05:26:46.591455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.363 qpair failed and we were unable to recover it.
00:31:09.363 [2024-04-24 05:26:46.601259] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.363 [2024-04-24 05:26:46.601388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.363 [2024-04-24 05:26:46.601414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.363 [2024-04-24 05:26:46.601429] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.363 [2024-04-24 05:26:46.601441] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:09.363 [2024-04-24 05:26:46.601469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.363 qpair failed and we were unable to recover it.
00:31:09.363 [2024-04-24 05:26:46.611327] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.363 [2024-04-24 05:26:46.611453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.363 [2024-04-24 05:26:46.611479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.363 [2024-04-24 05:26:46.611494] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.363 [2024-04-24 05:26:46.611507] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:09.363 [2024-04-24 05:26:46.611534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.363 qpair failed and we were unable to recover it.
00:31:09.363 [2024-04-24 05:26:46.621337] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.363 [2024-04-24 05:26:46.621461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.363 [2024-04-24 05:26:46.621493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.363 [2024-04-24 05:26:46.621509] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.363 [2024-04-24 05:26:46.621521] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:09.363 [2024-04-24 05:26:46.621549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.363 qpair failed and we were unable to recover it.
00:31:09.623 [2024-04-24 05:26:46.631326] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.623 [2024-04-24 05:26:46.631446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.623 [2024-04-24 05:26:46.631475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.623 [2024-04-24 05:26:46.631491] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.623 [2024-04-24 05:26:46.631504] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:09.623 [2024-04-24 05:26:46.631532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.623 [2024-04-24 05:26:46.641386] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.623 [2024-04-24 05:26:46.641515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.623 [2024-04-24 05:26:46.641544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.623 [2024-04-24 05:26:46.641559] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.623 [2024-04-24 05:26:46.641571] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:09.623 [2024-04-24 05:26:46.641600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.623 [2024-04-24 05:26:46.651394] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.623 [2024-04-24 05:26:46.651516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.623 [2024-04-24 05:26:46.651543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.623 [2024-04-24 05:26:46.651558] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.623 [2024-04-24 05:26:46.651571] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:09.623 [2024-04-24 05:26:46.651600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.623 [2024-04-24 05:26:46.661433] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.623 [2024-04-24 05:26:46.661557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.623 [2024-04-24 05:26:46.661583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.623 [2024-04-24 05:26:46.661598] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.623 [2024-04-24 05:26:46.661610] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:09.623 [2024-04-24 05:26:46.661652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.623 [2024-04-24 05:26:46.671463] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.623 [2024-04-24 05:26:46.671615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.623 [2024-04-24 05:26:46.671650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.623 [2024-04-24 05:26:46.671667] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.623 [2024-04-24 05:26:46.671679] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:09.623 [2024-04-24 05:26:46.671707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.623 [2024-04-24 05:26:46.681506] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.623 [2024-04-24 05:26:46.681679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.623 [2024-04-24 05:26:46.681706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.623 [2024-04-24 05:26:46.681721] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.623 [2024-04-24 05:26:46.681733] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:09.623 [2024-04-24 05:26:46.681761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.623 [2024-04-24 05:26:46.691534] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.623 [2024-04-24 05:26:46.691671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.623 [2024-04-24 05:26:46.691698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.623 [2024-04-24 05:26:46.691713] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.623 [2024-04-24 05:26:46.691725] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:09.623 [2024-04-24 05:26:46.691753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.623 [2024-04-24 05:26:46.701538] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.623 [2024-04-24 05:26:46.701672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.623 [2024-04-24 05:26:46.701700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.623 [2024-04-24 05:26:46.701715] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.623 [2024-04-24 05:26:46.701727] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:09.623 [2024-04-24 05:26:46.701755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.623 [2024-04-24 05:26:46.711544] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.623 [2024-04-24 05:26:46.711691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.623 [2024-04-24 05:26:46.711723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.623 [2024-04-24 05:26:46.711739] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.623 [2024-04-24 05:26:46.711751] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:09.623 [2024-04-24 05:26:46.711780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.623 [2024-04-24 05:26:46.721641] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.623 [2024-04-24 05:26:46.721811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.623 [2024-04-24 05:26:46.721837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.623 [2024-04-24 05:26:46.721852] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.623 [2024-04-24 05:26:46.721865] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:09.623 [2024-04-24 05:26:46.721893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.623 [2024-04-24 05:26:46.731696] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.623 [2024-04-24 05:26:46.731835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.623 [2024-04-24 05:26:46.731862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.623 [2024-04-24 05:26:46.731876] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.623 [2024-04-24 05:26:46.731889] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:09.623 [2024-04-24 05:26:46.731917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.624 [2024-04-24 05:26:46.741655] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.624 [2024-04-24 05:26:46.741801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.624 [2024-04-24 05:26:46.741827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.624 [2024-04-24 05:26:46.741842] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.624 [2024-04-24 05:26:46.741854] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:09.624 [2024-04-24 05:26:46.741882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.624 qpair failed and we were unable to recover it.
00:31:09.624 [2024-04-24 05:26:46.751771] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.624 [2024-04-24 05:26:46.751898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.624 [2024-04-24 05:26:46.751924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.624 [2024-04-24 05:26:46.751939] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.624 [2024-04-24 05:26:46.751958] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:09.624 [2024-04-24 05:26:46.751987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.624 qpair failed and we were unable to recover it.
00:31:09.624 [2024-04-24 05:26:46.761717] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.624 [2024-04-24 05:26:46.761871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.624 [2024-04-24 05:26:46.761897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.624 [2024-04-24 05:26:46.761912] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.624 [2024-04-24 05:26:46.761924] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:09.624 [2024-04-24 05:26:46.761951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.624 qpair failed and we were unable to recover it.
00:31:09.624 [2024-04-24 05:26:46.771716] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.624 [2024-04-24 05:26:46.771855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.624 [2024-04-24 05:26:46.771882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.624 [2024-04-24 05:26:46.771897] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.624 [2024-04-24 05:26:46.771910] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:09.624 [2024-04-24 05:26:46.771938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.624 qpair failed and we were unable to recover it.
00:31:09.624 [2024-04-24 05:26:46.781754] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.624 [2024-04-24 05:26:46.781890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.624 [2024-04-24 05:26:46.781917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.624 [2024-04-24 05:26:46.781932] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.624 [2024-04-24 05:26:46.781944] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.624 [2024-04-24 05:26:46.781972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.624 qpair failed and we were unable to recover it. 
00:31:09.624 [2024-04-24 05:26:46.791831] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.624 [2024-04-24 05:26:46.791958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.624 [2024-04-24 05:26:46.791983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.624 [2024-04-24 05:26:46.791998] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.624 [2024-04-24 05:26:46.792011] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.624 [2024-04-24 05:26:46.792038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.624 qpair failed and we were unable to recover it. 
00:31:09.624 [2024-04-24 05:26:46.801890] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.624 [2024-04-24 05:26:46.802032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.624 [2024-04-24 05:26:46.802058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.624 [2024-04-24 05:26:46.802074] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.624 [2024-04-24 05:26:46.802086] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.624 [2024-04-24 05:26:46.802114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.624 qpair failed and we were unable to recover it. 
00:31:09.624 [2024-04-24 05:26:46.811845] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.624 [2024-04-24 05:26:46.811991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.624 [2024-04-24 05:26:46.812020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.624 [2024-04-24 05:26:46.812036] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.624 [2024-04-24 05:26:46.812048] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.624 [2024-04-24 05:26:46.812092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.624 qpair failed and we were unable to recover it. 
00:31:09.624 [2024-04-24 05:26:46.821909] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.624 [2024-04-24 05:26:46.822078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.624 [2024-04-24 05:26:46.822105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.624 [2024-04-24 05:26:46.822120] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.624 [2024-04-24 05:26:46.822132] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.624 [2024-04-24 05:26:46.822160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.624 qpair failed and we were unable to recover it. 
00:31:09.624 [2024-04-24 05:26:46.831926] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.624 [2024-04-24 05:26:46.832052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.624 [2024-04-24 05:26:46.832078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.624 [2024-04-24 05:26:46.832093] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.624 [2024-04-24 05:26:46.832106] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.624 [2024-04-24 05:26:46.832134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.624 qpair failed and we were unable to recover it. 
00:31:09.624 [2024-04-24 05:26:46.841943] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.624 [2024-04-24 05:26:46.842072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.624 [2024-04-24 05:26:46.842097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.624 [2024-04-24 05:26:46.842111] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.624 [2024-04-24 05:26:46.842129] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.624 [2024-04-24 05:26:46.842160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.624 qpair failed and we were unable to recover it. 
00:31:09.624 [2024-04-24 05:26:46.851947] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.624 [2024-04-24 05:26:46.852110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.624 [2024-04-24 05:26:46.852137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.624 [2024-04-24 05:26:46.852152] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.624 [2024-04-24 05:26:46.852165] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.624 [2024-04-24 05:26:46.852194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.624 qpair failed and we were unable to recover it. 
00:31:09.624 [2024-04-24 05:26:46.861999] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.624 [2024-04-24 05:26:46.862129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.624 [2024-04-24 05:26:46.862156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.624 [2024-04-24 05:26:46.862171] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.624 [2024-04-24 05:26:46.862183] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.624 [2024-04-24 05:26:46.862211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.624 qpair failed and we were unable to recover it. 
00:31:09.624 [2024-04-24 05:26:46.872011] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.624 [2024-04-24 05:26:46.872133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.624 [2024-04-24 05:26:46.872159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.625 [2024-04-24 05:26:46.872175] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.625 [2024-04-24 05:26:46.872187] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.625 [2024-04-24 05:26:46.872215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.625 qpair failed and we were unable to recover it. 
00:31:09.625 [2024-04-24 05:26:46.882068] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.625 [2024-04-24 05:26:46.882198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.625 [2024-04-24 05:26:46.882222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.625 [2024-04-24 05:26:46.882237] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.625 [2024-04-24 05:26:46.882249] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.625 [2024-04-24 05:26:46.882276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.625 qpair failed and we were unable to recover it. 
00:31:09.625 [2024-04-24 05:26:46.892074] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.625 [2024-04-24 05:26:46.892260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.625 [2024-04-24 05:26:46.892298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.625 [2024-04-24 05:26:46.892328] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.625 [2024-04-24 05:26:46.892349] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.625 [2024-04-24 05:26:46.892380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.625 qpair failed and we were unable to recover it. 
00:31:09.885 [2024-04-24 05:26:46.902090] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.885 [2024-04-24 05:26:46.902213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.885 [2024-04-24 05:26:46.902242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.885 [2024-04-24 05:26:46.902258] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.885 [2024-04-24 05:26:46.902271] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.885 [2024-04-24 05:26:46.902299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.885 qpair failed and we were unable to recover it. 
00:31:09.885 [2024-04-24 05:26:46.912163] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.885 [2024-04-24 05:26:46.912309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.885 [2024-04-24 05:26:46.912336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.885 [2024-04-24 05:26:46.912351] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.885 [2024-04-24 05:26:46.912364] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.885 [2024-04-24 05:26:46.912392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.885 qpair failed and we were unable to recover it. 
00:31:09.885 [2024-04-24 05:26:46.922167] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.885 [2024-04-24 05:26:46.922294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.885 [2024-04-24 05:26:46.922321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.885 [2024-04-24 05:26:46.922336] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.885 [2024-04-24 05:26:46.922349] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.885 [2024-04-24 05:26:46.922379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.885 qpair failed and we were unable to recover it. 
00:31:09.885 [2024-04-24 05:26:46.932171] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.885 [2024-04-24 05:26:46.932312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.885 [2024-04-24 05:26:46.932339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.885 [2024-04-24 05:26:46.932354] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.886 [2024-04-24 05:26:46.932371] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.886 [2024-04-24 05:26:46.932400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.886 qpair failed and we were unable to recover it. 
00:31:09.886 [2024-04-24 05:26:46.942288] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.886 [2024-04-24 05:26:46.942438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.886 [2024-04-24 05:26:46.942465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.886 [2024-04-24 05:26:46.942480] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.886 [2024-04-24 05:26:46.942493] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.886 [2024-04-24 05:26:46.942522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.886 qpair failed and we were unable to recover it. 
00:31:09.886 [2024-04-24 05:26:46.952255] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.886 [2024-04-24 05:26:46.952379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.886 [2024-04-24 05:26:46.952405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.886 [2024-04-24 05:26:46.952420] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.886 [2024-04-24 05:26:46.952432] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.886 [2024-04-24 05:26:46.952460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.886 qpair failed and we were unable to recover it. 
00:31:09.886 [2024-04-24 05:26:46.962285] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.886 [2024-04-24 05:26:46.962414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.886 [2024-04-24 05:26:46.962440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.886 [2024-04-24 05:26:46.962455] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.886 [2024-04-24 05:26:46.962467] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.886 [2024-04-24 05:26:46.962495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.886 qpair failed and we were unable to recover it. 
00:31:09.886 [2024-04-24 05:26:46.972278] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.886 [2024-04-24 05:26:46.972405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.886 [2024-04-24 05:26:46.972430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.886 [2024-04-24 05:26:46.972446] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.886 [2024-04-24 05:26:46.972458] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.886 [2024-04-24 05:26:46.972486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.886 qpair failed and we were unable to recover it. 
00:31:09.886 [2024-04-24 05:26:46.982381] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.886 [2024-04-24 05:26:46.982534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.886 [2024-04-24 05:26:46.982561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.886 [2024-04-24 05:26:46.982576] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.886 [2024-04-24 05:26:46.982588] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.886 [2024-04-24 05:26:46.982617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.886 qpair failed and we were unable to recover it. 
00:31:09.886 [2024-04-24 05:26:46.992399] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.886 [2024-04-24 05:26:46.992561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.886 [2024-04-24 05:26:46.992589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.886 [2024-04-24 05:26:46.992608] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.886 [2024-04-24 05:26:46.992621] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.886 [2024-04-24 05:26:46.992658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.886 qpair failed and we were unable to recover it. 
00:31:09.886 [2024-04-24 05:26:47.002483] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.886 [2024-04-24 05:26:47.002613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.886 [2024-04-24 05:26:47.002647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.886 [2024-04-24 05:26:47.002664] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.886 [2024-04-24 05:26:47.002676] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.886 [2024-04-24 05:26:47.002705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.886 qpair failed and we were unable to recover it. 
00:31:09.886 [2024-04-24 05:26:47.012442] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.886 [2024-04-24 05:26:47.012568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.886 [2024-04-24 05:26:47.012595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.886 [2024-04-24 05:26:47.012609] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.886 [2024-04-24 05:26:47.012622] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.886 [2024-04-24 05:26:47.012659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.886 qpair failed and we were unable to recover it. 
00:31:09.886 [2024-04-24 05:26:47.022477] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.886 [2024-04-24 05:26:47.022663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.886 [2024-04-24 05:26:47.022690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.886 [2024-04-24 05:26:47.022710] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.886 [2024-04-24 05:26:47.022724] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.886 [2024-04-24 05:26:47.022752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.886 qpair failed and we were unable to recover it. 
00:31:09.886 [2024-04-24 05:26:47.032469] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.886 [2024-04-24 05:26:47.032587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.886 [2024-04-24 05:26:47.032613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.886 [2024-04-24 05:26:47.032636] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.886 [2024-04-24 05:26:47.032651] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.886 [2024-04-24 05:26:47.032679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.886 qpair failed and we were unable to recover it. 
00:31:09.886 [2024-04-24 05:26:47.042528] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.886 [2024-04-24 05:26:47.042665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.886 [2024-04-24 05:26:47.042691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.886 [2024-04-24 05:26:47.042706] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.886 [2024-04-24 05:26:47.042719] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.886 [2024-04-24 05:26:47.042747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.886 qpair failed and we were unable to recover it. 
00:31:09.886 [2024-04-24 05:26:47.052552] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.886 [2024-04-24 05:26:47.052717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.886 [2024-04-24 05:26:47.052743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.886 [2024-04-24 05:26:47.052759] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.886 [2024-04-24 05:26:47.052772] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:09.886 [2024-04-24 05:26:47.052800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.886 qpair failed and we were unable to recover it. 
00:31:09.886 [2024-04-24 05:26:47.062552] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.886 [2024-04-24 05:26:47.062679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.886 [2024-04-24 05:26:47.062706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.886 [2024-04-24 05:26:47.062721] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.886 [2024-04-24 05:26:47.062733] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:09.886 [2024-04-24 05:26:47.062760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.886 qpair failed and we were unable to recover it.
00:31:09.886 [2024-04-24 05:26:47.072585] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.887 [2024-04-24 05:26:47.072715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.887 [2024-04-24 05:26:47.072742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.887 [2024-04-24 05:26:47.072756] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.887 [2024-04-24 05:26:47.072769] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:09.887 [2024-04-24 05:26:47.072797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.887 qpair failed and we were unable to recover it.
00:31:09.887 [2024-04-24 05:26:47.082609] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.887 [2024-04-24 05:26:47.082751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.887 [2024-04-24 05:26:47.082777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.887 [2024-04-24 05:26:47.082791] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.887 [2024-04-24 05:26:47.082804] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:09.887 [2024-04-24 05:26:47.082832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.887 qpair failed and we were unable to recover it.
00:31:09.887 [2024-04-24 05:26:47.092625] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.887 [2024-04-24 05:26:47.092761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.887 [2024-04-24 05:26:47.092787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.887 [2024-04-24 05:26:47.092802] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.887 [2024-04-24 05:26:47.092815] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:09.887 [2024-04-24 05:26:47.092843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.887 qpair failed and we were unable to recover it.
00:31:09.887 [2024-04-24 05:26:47.102660] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.887 [2024-04-24 05:26:47.102798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.887 [2024-04-24 05:26:47.102825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.887 [2024-04-24 05:26:47.102840] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.887 [2024-04-24 05:26:47.102853] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:09.887 [2024-04-24 05:26:47.102881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.887 qpair failed and we were unable to recover it.
00:31:09.887 [2024-04-24 05:26:47.112689] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.887 [2024-04-24 05:26:47.112857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.887 [2024-04-24 05:26:47.112884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.887 [2024-04-24 05:26:47.112904] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.887 [2024-04-24 05:26:47.112917] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:09.887 [2024-04-24 05:26:47.112945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.887 qpair failed and we were unable to recover it.
00:31:09.887 [2024-04-24 05:26:47.122730] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.887 [2024-04-24 05:26:47.122862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.887 [2024-04-24 05:26:47.122889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.887 [2024-04-24 05:26:47.122904] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.887 [2024-04-24 05:26:47.122916] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:09.887 [2024-04-24 05:26:47.122944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.887 qpair failed and we were unable to recover it.
00:31:09.887 [2024-04-24 05:26:47.132819] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.887 [2024-04-24 05:26:47.132983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.887 [2024-04-24 05:26:47.133009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.887 [2024-04-24 05:26:47.133024] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.887 [2024-04-24 05:26:47.133036] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:09.887 [2024-04-24 05:26:47.133079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.887 qpair failed and we were unable to recover it.
00:31:09.887 [2024-04-24 05:26:47.142801] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.887 [2024-04-24 05:26:47.142925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.887 [2024-04-24 05:26:47.142951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.887 [2024-04-24 05:26:47.142966] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.887 [2024-04-24 05:26:47.142979] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:09.887 [2024-04-24 05:26:47.143007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.887 qpair failed and we were unable to recover it.
00:31:09.887 [2024-04-24 05:26:47.152869] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.887 [2024-04-24 05:26:47.152988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.887 [2024-04-24 05:26:47.153017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.887 [2024-04-24 05:26:47.153033] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.887 [2024-04-24 05:26:47.153045] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:09.887 [2024-04-24 05:26:47.153075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.887 qpair failed and we were unable to recover it.
00:31:10.147 [2024-04-24 05:26:47.162916] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.147 [2024-04-24 05:26:47.163069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.147 [2024-04-24 05:26:47.163098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.147 [2024-04-24 05:26:47.163113] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.147 [2024-04-24 05:26:47.163125] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:10.147 [2024-04-24 05:26:47.163154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.147 qpair failed and we were unable to recover it.
00:31:10.147 [2024-04-24 05:26:47.172935] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.147 [2024-04-24 05:26:47.173064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.147 [2024-04-24 05:26:47.173092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.147 [2024-04-24 05:26:47.173106] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.147 [2024-04-24 05:26:47.173119] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:10.147 [2024-04-24 05:26:47.173146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.147 qpair failed and we were unable to recover it.
00:31:10.147 [2024-04-24 05:26:47.182910] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.147 [2024-04-24 05:26:47.183049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.147 [2024-04-24 05:26:47.183075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.147 [2024-04-24 05:26:47.183090] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.147 [2024-04-24 05:26:47.183103] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:10.147 [2024-04-24 05:26:47.183130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.147 qpair failed and we were unable to recover it.
00:31:10.147 [2024-04-24 05:26:47.192918] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.147 [2024-04-24 05:26:47.193046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.147 [2024-04-24 05:26:47.193072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.147 [2024-04-24 05:26:47.193087] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.147 [2024-04-24 05:26:47.193099] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:10.147 [2024-04-24 05:26:47.193127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.147 qpair failed and we were unable to recover it.
00:31:10.147 [2024-04-24 05:26:47.202953] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.147 [2024-04-24 05:26:47.203087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.147 [2024-04-24 05:26:47.203114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.147 [2024-04-24 05:26:47.203134] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.148 [2024-04-24 05:26:47.203147] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:10.148 [2024-04-24 05:26:47.203175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.148 qpair failed and we were unable to recover it.
00:31:10.148 [2024-04-24 05:26:47.213084] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.148 [2024-04-24 05:26:47.213260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.148 [2024-04-24 05:26:47.213286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.148 [2024-04-24 05:26:47.213316] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.148 [2024-04-24 05:26:47.213329] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:10.148 [2024-04-24 05:26:47.213356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.148 qpair failed and we were unable to recover it.
00:31:10.148 [2024-04-24 05:26:47.222998] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.148 [2024-04-24 05:26:47.223121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.148 [2024-04-24 05:26:47.223148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.148 [2024-04-24 05:26:47.223163] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.148 [2024-04-24 05:26:47.223175] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:10.148 [2024-04-24 05:26:47.223204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.148 qpair failed and we were unable to recover it.
00:31:10.148 [2024-04-24 05:26:47.233032] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.148 [2024-04-24 05:26:47.233188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.148 [2024-04-24 05:26:47.233215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.148 [2024-04-24 05:26:47.233230] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.148 [2024-04-24 05:26:47.233242] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:10.148 [2024-04-24 05:26:47.233269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.148 qpair failed and we were unable to recover it.
00:31:10.148 [2024-04-24 05:26:47.243092] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.148 [2024-04-24 05:26:47.243262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.148 [2024-04-24 05:26:47.243288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.148 [2024-04-24 05:26:47.243304] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.148 [2024-04-24 05:26:47.243316] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:10.148 [2024-04-24 05:26:47.243344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.148 qpair failed and we were unable to recover it.
00:31:10.148 [2024-04-24 05:26:47.253072] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.148 [2024-04-24 05:26:47.253218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.148 [2024-04-24 05:26:47.253244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.148 [2024-04-24 05:26:47.253259] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.148 [2024-04-24 05:26:47.253272] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:10.148 [2024-04-24 05:26:47.253301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.148 qpair failed and we were unable to recover it.
00:31:10.148 [2024-04-24 05:26:47.263099] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.148 [2024-04-24 05:26:47.263232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.148 [2024-04-24 05:26:47.263258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.148 [2024-04-24 05:26:47.263273] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.148 [2024-04-24 05:26:47.263285] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:10.148 [2024-04-24 05:26:47.263313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.148 qpair failed and we were unable to recover it.
00:31:10.148 [2024-04-24 05:26:47.273223] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.148 [2024-04-24 05:26:47.273360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.148 [2024-04-24 05:26:47.273387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.148 [2024-04-24 05:26:47.273402] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.148 [2024-04-24 05:26:47.273414] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:10.148 [2024-04-24 05:26:47.273442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.148 qpair failed and we were unable to recover it.
00:31:10.148 [2024-04-24 05:26:47.283194] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.148 [2024-04-24 05:26:47.283369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.148 [2024-04-24 05:26:47.283394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.148 [2024-04-24 05:26:47.283410] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.148 [2024-04-24 05:26:47.283423] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:10.148 [2024-04-24 05:26:47.283451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.148 qpair failed and we were unable to recover it.
00:31:10.148 [2024-04-24 05:26:47.293182] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.148 [2024-04-24 05:26:47.293333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.148 [2024-04-24 05:26:47.293364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.148 [2024-04-24 05:26:47.293380] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.148 [2024-04-24 05:26:47.293393] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:10.148 [2024-04-24 05:26:47.293421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.148 qpair failed and we were unable to recover it.
00:31:10.148 [2024-04-24 05:26:47.303272] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.148 [2024-04-24 05:26:47.303403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.148 [2024-04-24 05:26:47.303429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.148 [2024-04-24 05:26:47.303444] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.148 [2024-04-24 05:26:47.303457] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:10.148 [2024-04-24 05:26:47.303485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.148 qpair failed and we were unable to recover it.
00:31:10.148 [2024-04-24 05:26:47.313264] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.148 [2024-04-24 05:26:47.313389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.148 [2024-04-24 05:26:47.313416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.148 [2024-04-24 05:26:47.313431] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.148 [2024-04-24 05:26:47.313443] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:10.148 [2024-04-24 05:26:47.313471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.148 qpair failed and we were unable to recover it.
00:31:10.148 [2024-04-24 05:26:47.323323] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.148 [2024-04-24 05:26:47.323493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.148 [2024-04-24 05:26:47.323518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.148 [2024-04-24 05:26:47.323533] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.148 [2024-04-24 05:26:47.323545] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:10.148 [2024-04-24 05:26:47.323573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.148 qpair failed and we were unable to recover it.
00:31:10.148 [2024-04-24 05:26:47.333344] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.148 [2024-04-24 05:26:47.333482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.148 [2024-04-24 05:26:47.333508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.148 [2024-04-24 05:26:47.333523] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.148 [2024-04-24 05:26:47.333535] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:10.148 [2024-04-24 05:26:47.333568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.148 qpair failed and we were unable to recover it.
00:31:10.148 [2024-04-24 05:26:47.343375] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.148 [2024-04-24 05:26:47.343545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.149 [2024-04-24 05:26:47.343571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.149 [2024-04-24 05:26:47.343586] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.149 [2024-04-24 05:26:47.343598] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:10.149 [2024-04-24 05:26:47.343644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.149 qpair failed and we were unable to recover it.
00:31:10.149 [2024-04-24 05:26:47.353509] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.149 [2024-04-24 05:26:47.353653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.149 [2024-04-24 05:26:47.353679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.149 [2024-04-24 05:26:47.353694] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.149 [2024-04-24 05:26:47.353706] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:10.149 [2024-04-24 05:26:47.353734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.149 qpair failed and we were unable to recover it.
00:31:10.149 [2024-04-24 05:26:47.363455] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.149 [2024-04-24 05:26:47.363595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.149 [2024-04-24 05:26:47.363626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.149 [2024-04-24 05:26:47.363651] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.149 [2024-04-24 05:26:47.363663] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:10.149 [2024-04-24 05:26:47.363692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.149 qpair failed and we were unable to recover it.
00:31:10.149 [2024-04-24 05:26:47.373434] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.149 [2024-04-24 05:26:47.373573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.149 [2024-04-24 05:26:47.373598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.149 [2024-04-24 05:26:47.373613] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.149 [2024-04-24 05:26:47.373626] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:10.149 [2024-04-24 05:26:47.373664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.149 qpair failed and we were unable to recover it.
00:31:10.149 [2024-04-24 05:26:47.383475] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.149 [2024-04-24 05:26:47.383605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.149 [2024-04-24 05:26:47.383643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.149 [2024-04-24 05:26:47.383661] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.149 [2024-04-24 05:26:47.383673] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:10.149 [2024-04-24 05:26:47.383701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.149 qpair failed and we were unable to recover it.
00:31:10.149 [2024-04-24 05:26:47.393505] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.149 [2024-04-24 05:26:47.393648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.149 [2024-04-24 05:26:47.393674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.149 [2024-04-24 05:26:47.393689] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.149 [2024-04-24 05:26:47.393701] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:10.149 [2024-04-24 05:26:47.393729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.149 qpair failed and we were unable to recover it.
00:31:10.149 [2024-04-24 05:26:47.403520] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.149 [2024-04-24 05:26:47.403658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.149 [2024-04-24 05:26:47.403684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.149 [2024-04-24 05:26:47.403698] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.149 [2024-04-24 05:26:47.403710] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:10.149 [2024-04-24 05:26:47.403738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.149 qpair failed and we were unable to recover it.
00:31:10.149 [2024-04-24 05:26:47.413608] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.149 [2024-04-24 05:26:47.413772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.149 [2024-04-24 05:26:47.413800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.149 [2024-04-24 05:26:47.413816] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.149 [2024-04-24 05:26:47.413829] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:10.149 [2024-04-24 05:26:47.413857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.149 qpair failed and we were unable to recover it.
00:31:10.408 [2024-04-24 05:26:47.423574] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.408 [2024-04-24 05:26:47.423715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.408 [2024-04-24 05:26:47.423744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.408 [2024-04-24 05:26:47.423760] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.408 [2024-04-24 05:26:47.423773] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.408 [2024-04-24 05:26:47.423807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.408 qpair failed and we were unable to recover it. 
00:31:10.408 [2024-04-24 05:26:47.433645] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.408 [2024-04-24 05:26:47.433807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.408 [2024-04-24 05:26:47.433834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.408 [2024-04-24 05:26:47.433850] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.408 [2024-04-24 05:26:47.433863] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.408 [2024-04-24 05:26:47.433891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.408 qpair failed and we were unable to recover it. 
00:31:10.408 [2024-04-24 05:26:47.443736] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.408 [2024-04-24 05:26:47.443864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.408 [2024-04-24 05:26:47.443891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.408 [2024-04-24 05:26:47.443906] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.408 [2024-04-24 05:26:47.443918] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.408 [2024-04-24 05:26:47.443947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.408 qpair failed and we were unable to recover it. 
00:31:10.408 [2024-04-24 05:26:47.453663] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.408 [2024-04-24 05:26:47.453800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.408 [2024-04-24 05:26:47.453827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.408 [2024-04-24 05:26:47.453842] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.408 [2024-04-24 05:26:47.453855] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.408 [2024-04-24 05:26:47.453884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.408 qpair failed and we were unable to recover it. 
00:31:10.408 [2024-04-24 05:26:47.463691] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.408 [2024-04-24 05:26:47.463814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.408 [2024-04-24 05:26:47.463839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.408 [2024-04-24 05:26:47.463853] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.408 [2024-04-24 05:26:47.463866] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.408 [2024-04-24 05:26:47.463894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.408 qpair failed and we were unable to recover it. 
00:31:10.408 [2024-04-24 05:26:47.473720] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.409 [2024-04-24 05:26:47.473846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.409 [2024-04-24 05:26:47.473876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.409 [2024-04-24 05:26:47.473891] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.409 [2024-04-24 05:26:47.473904] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.409 [2024-04-24 05:26:47.473931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.409 qpair failed and we were unable to recover it. 
00:31:10.409 [2024-04-24 05:26:47.483754] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.409 [2024-04-24 05:26:47.483880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.409 [2024-04-24 05:26:47.483905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.409 [2024-04-24 05:26:47.483920] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.409 [2024-04-24 05:26:47.483932] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.409 [2024-04-24 05:26:47.483960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.409 qpair failed and we were unable to recover it. 
00:31:10.409 [2024-04-24 05:26:47.493787] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.409 [2024-04-24 05:26:47.493909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.409 [2024-04-24 05:26:47.493934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.409 [2024-04-24 05:26:47.493949] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.409 [2024-04-24 05:26:47.493962] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.409 [2024-04-24 05:26:47.493990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.409 qpair failed and we were unable to recover it. 
00:31:10.409 [2024-04-24 05:26:47.503791] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.409 [2024-04-24 05:26:47.503915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.409 [2024-04-24 05:26:47.503940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.409 [2024-04-24 05:26:47.503954] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.409 [2024-04-24 05:26:47.503967] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.409 [2024-04-24 05:26:47.503994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.409 qpair failed and we were unable to recover it. 
00:31:10.409 [2024-04-24 05:26:47.513832] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.409 [2024-04-24 05:26:47.513969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.409 [2024-04-24 05:26:47.513994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.409 [2024-04-24 05:26:47.514008] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.409 [2024-04-24 05:26:47.514021] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.409 [2024-04-24 05:26:47.514057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.409 qpair failed and we were unable to recover it. 
00:31:10.409 [2024-04-24 05:26:47.523865] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.409 [2024-04-24 05:26:47.524028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.409 [2024-04-24 05:26:47.524055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.409 [2024-04-24 05:26:47.524069] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.409 [2024-04-24 05:26:47.524082] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.409 [2024-04-24 05:26:47.524109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.409 qpair failed and we were unable to recover it. 
00:31:10.409 [2024-04-24 05:26:47.533885] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.409 [2024-04-24 05:26:47.534008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.409 [2024-04-24 05:26:47.534033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.409 [2024-04-24 05:26:47.534048] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.409 [2024-04-24 05:26:47.534060] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.409 [2024-04-24 05:26:47.534088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.409 qpair failed and we were unable to recover it. 
00:31:10.409 [2024-04-24 05:26:47.543954] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.409 [2024-04-24 05:26:47.544080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.409 [2024-04-24 05:26:47.544105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.409 [2024-04-24 05:26:47.544120] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.409 [2024-04-24 05:26:47.544132] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.409 [2024-04-24 05:26:47.544160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.409 qpair failed and we were unable to recover it. 
00:31:10.409 [2024-04-24 05:26:47.553933] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.409 [2024-04-24 05:26:47.554058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.409 [2024-04-24 05:26:47.554082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.409 [2024-04-24 05:26:47.554096] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.409 [2024-04-24 05:26:47.554109] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.409 [2024-04-24 05:26:47.554137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.409 qpair failed and we were unable to recover it. 
00:31:10.409 [2024-04-24 05:26:47.563977] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.409 [2024-04-24 05:26:47.564107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.409 [2024-04-24 05:26:47.564137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.409 [2024-04-24 05:26:47.564153] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.409 [2024-04-24 05:26:47.564166] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.409 [2024-04-24 05:26:47.564194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.409 qpair failed and we were unable to recover it. 
00:31:10.409 [2024-04-24 05:26:47.574004] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.409 [2024-04-24 05:26:47.574136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.409 [2024-04-24 05:26:47.574162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.409 [2024-04-24 05:26:47.574176] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.409 [2024-04-24 05:26:47.574189] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.409 [2024-04-24 05:26:47.574217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.409 qpair failed and we were unable to recover it. 
00:31:10.409 [2024-04-24 05:26:47.584018] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.409 [2024-04-24 05:26:47.584140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.409 [2024-04-24 05:26:47.584165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.409 [2024-04-24 05:26:47.584179] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.409 [2024-04-24 05:26:47.584192] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.409 [2024-04-24 05:26:47.584219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.409 qpair failed and we were unable to recover it. 
00:31:10.409 [2024-04-24 05:26:47.594045] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.409 [2024-04-24 05:26:47.594170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.409 [2024-04-24 05:26:47.594197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.409 [2024-04-24 05:26:47.594212] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.409 [2024-04-24 05:26:47.594225] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.409 [2024-04-24 05:26:47.594252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.409 qpair failed and we were unable to recover it. 
00:31:10.409 [2024-04-24 05:26:47.604106] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.409 [2024-04-24 05:26:47.604243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.410 [2024-04-24 05:26:47.604268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.410 [2024-04-24 05:26:47.604283] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.410 [2024-04-24 05:26:47.604301] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.410 [2024-04-24 05:26:47.604329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.410 qpair failed and we were unable to recover it. 
00:31:10.410 [2024-04-24 05:26:47.614161] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.410 [2024-04-24 05:26:47.614291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.410 [2024-04-24 05:26:47.614317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.410 [2024-04-24 05:26:47.614332] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.410 [2024-04-24 05:26:47.614345] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.410 [2024-04-24 05:26:47.614372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.410 qpair failed and we were unable to recover it. 
00:31:10.410 [2024-04-24 05:26:47.624226] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.410 [2024-04-24 05:26:47.624351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.410 [2024-04-24 05:26:47.624376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.410 [2024-04-24 05:26:47.624390] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.410 [2024-04-24 05:26:47.624403] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.410 [2024-04-24 05:26:47.624430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.410 qpair failed and we were unable to recover it. 
00:31:10.410 [2024-04-24 05:26:47.634155] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.410 [2024-04-24 05:26:47.634281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.410 [2024-04-24 05:26:47.634306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.410 [2024-04-24 05:26:47.634321] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.410 [2024-04-24 05:26:47.634333] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.410 [2024-04-24 05:26:47.634360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.410 qpair failed and we were unable to recover it. 
00:31:10.410 [2024-04-24 05:26:47.644235] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.410 [2024-04-24 05:26:47.644361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.410 [2024-04-24 05:26:47.644386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.410 [2024-04-24 05:26:47.644401] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.410 [2024-04-24 05:26:47.644413] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.410 [2024-04-24 05:26:47.644440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.410 qpair failed and we were unable to recover it. 
00:31:10.410 [2024-04-24 05:26:47.654306] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.410 [2024-04-24 05:26:47.654446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.410 [2024-04-24 05:26:47.654472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.410 [2024-04-24 05:26:47.654490] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.410 [2024-04-24 05:26:47.654504] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.410 [2024-04-24 05:26:47.654532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.410 qpair failed and we were unable to recover it. 
00:31:10.410 [2024-04-24 05:26:47.664305] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.410 [2024-04-24 05:26:47.664477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.410 [2024-04-24 05:26:47.664504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.410 [2024-04-24 05:26:47.664518] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.410 [2024-04-24 05:26:47.664531] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.410 [2024-04-24 05:26:47.664559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.410 qpair failed and we were unable to recover it. 
00:31:10.410 [2024-04-24 05:26:47.674334] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.410 [2024-04-24 05:26:47.674465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.410 [2024-04-24 05:26:47.674501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.410 [2024-04-24 05:26:47.674530] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.410 [2024-04-24 05:26:47.674548] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.410 [2024-04-24 05:26:47.674579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.410 qpair failed and we were unable to recover it. 
00:31:10.669 [2024-04-24 05:26:47.684437] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.669 [2024-04-24 05:26:47.684574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.669 [2024-04-24 05:26:47.684602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.669 [2024-04-24 05:26:47.684617] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.669 [2024-04-24 05:26:47.684640] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.669 [2024-04-24 05:26:47.684672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.669 qpair failed and we were unable to recover it. 
00:31:10.669 [2024-04-24 05:26:47.694367] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.669 [2024-04-24 05:26:47.694495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.669 [2024-04-24 05:26:47.694521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.669 [2024-04-24 05:26:47.694536] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.669 [2024-04-24 05:26:47.694554] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.669 [2024-04-24 05:26:47.694584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.669 qpair failed and we were unable to recover it. 
00:31:10.669 [2024-04-24 05:26:47.704358] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.669 [2024-04-24 05:26:47.704481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.669 [2024-04-24 05:26:47.704507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.669 [2024-04-24 05:26:47.704521] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.669 [2024-04-24 05:26:47.704534] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.669 [2024-04-24 05:26:47.704562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.669 qpair failed and we were unable to recover it. 
00:31:10.669 [2024-04-24 05:26:47.714442] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.669 [2024-04-24 05:26:47.714575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.669 [2024-04-24 05:26:47.714600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.669 [2024-04-24 05:26:47.714615] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.669 [2024-04-24 05:26:47.714636] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.669 [2024-04-24 05:26:47.714666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.669 qpair failed and we were unable to recover it. 
00:31:10.669 [2024-04-24 05:26:47.724472] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.669 [2024-04-24 05:26:47.724655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.669 [2024-04-24 05:26:47.724681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.669 [2024-04-24 05:26:47.724695] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.669 [2024-04-24 05:26:47.724707] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.669 [2024-04-24 05:26:47.724735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.669 qpair failed and we were unable to recover it. 
00:31:10.669 [2024-04-24 05:26:47.734483] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.669 [2024-04-24 05:26:47.734616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.669 [2024-04-24 05:26:47.734650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.669 [2024-04-24 05:26:47.734667] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.669 [2024-04-24 05:26:47.734679] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.669 [2024-04-24 05:26:47.734708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.669 qpair failed and we were unable to recover it. 
00:31:10.669 [2024-04-24 05:26:47.744529] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.669 [2024-04-24 05:26:47.744698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.669 [2024-04-24 05:26:47.744723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.669 [2024-04-24 05:26:47.744738] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.669 [2024-04-24 05:26:47.744751] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.669 [2024-04-24 05:26:47.744778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.669 qpair failed and we were unable to recover it. 
00:31:10.669 [2024-04-24 05:26:47.754530] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.669 [2024-04-24 05:26:47.754669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.669 [2024-04-24 05:26:47.754695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.669 [2024-04-24 05:26:47.754709] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.669 [2024-04-24 05:26:47.754723] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.670 [2024-04-24 05:26:47.754750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.670 qpair failed and we were unable to recover it. 
00:31:10.670 [2024-04-24 05:26:47.764588] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.670 [2024-04-24 05:26:47.764752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.670 [2024-04-24 05:26:47.764779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.670 [2024-04-24 05:26:47.764797] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.670 [2024-04-24 05:26:47.764810] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.670 [2024-04-24 05:26:47.764839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.670 qpair failed and we were unable to recover it. 
00:31:10.670 [2024-04-24 05:26:47.774708] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.670 [2024-04-24 05:26:47.774853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.670 [2024-04-24 05:26:47.774880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.670 [2024-04-24 05:26:47.774895] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.670 [2024-04-24 05:26:47.774923] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.670 [2024-04-24 05:26:47.774950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.670 qpair failed and we were unable to recover it. 
00:31:10.670 [2024-04-24 05:26:47.784659] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.670 [2024-04-24 05:26:47.784779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.670 [2024-04-24 05:26:47.784804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.670 [2024-04-24 05:26:47.784824] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.670 [2024-04-24 05:26:47.784838] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.670 [2024-04-24 05:26:47.784866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.670 qpair failed and we were unable to recover it. 
00:31:10.670 [2024-04-24 05:26:47.794647] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.670 [2024-04-24 05:26:47.794770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.670 [2024-04-24 05:26:47.794795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.670 [2024-04-24 05:26:47.794810] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.670 [2024-04-24 05:26:47.794823] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.670 [2024-04-24 05:26:47.794851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.670 qpair failed and we were unable to recover it. 
00:31:10.670 [2024-04-24 05:26:47.804693] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.670 [2024-04-24 05:26:47.804883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.670 [2024-04-24 05:26:47.804910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.670 [2024-04-24 05:26:47.804924] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.670 [2024-04-24 05:26:47.804941] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.670 [2024-04-24 05:26:47.804972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.670 qpair failed and we were unable to recover it. 
00:31:10.670 [2024-04-24 05:26:47.814695] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.670 [2024-04-24 05:26:47.814822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.670 [2024-04-24 05:26:47.814847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.670 [2024-04-24 05:26:47.814863] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.670 [2024-04-24 05:26:47.814876] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.670 [2024-04-24 05:26:47.814903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.670 qpair failed and we were unable to recover it. 
00:31:10.670 [2024-04-24 05:26:47.824728] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.670 [2024-04-24 05:26:47.824857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.670 [2024-04-24 05:26:47.824884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.670 [2024-04-24 05:26:47.824899] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.670 [2024-04-24 05:26:47.824911] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.670 [2024-04-24 05:26:47.824939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.670 qpair failed and we were unable to recover it. 
00:31:10.670 [2024-04-24 05:26:47.834777] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.670 [2024-04-24 05:26:47.834908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.670 [2024-04-24 05:26:47.834934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.670 [2024-04-24 05:26:47.834949] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.670 [2024-04-24 05:26:47.834962] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.670 [2024-04-24 05:26:47.834989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.670 qpair failed and we were unable to recover it. 
00:31:10.670 [2024-04-24 05:26:47.844797] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.670 [2024-04-24 05:26:47.844929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.670 [2024-04-24 05:26:47.844955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.670 [2024-04-24 05:26:47.844970] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.670 [2024-04-24 05:26:47.844983] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.670 [2024-04-24 05:26:47.845010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.670 qpair failed and we were unable to recover it. 
00:31:10.670 [2024-04-24 05:26:47.854830] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.670 [2024-04-24 05:26:47.854960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.670 [2024-04-24 05:26:47.854985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.670 [2024-04-24 05:26:47.854999] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.670 [2024-04-24 05:26:47.855012] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.670 [2024-04-24 05:26:47.855040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.670 qpair failed and we were unable to recover it. 
00:31:10.670 [2024-04-24 05:26:47.864837] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.670 [2024-04-24 05:26:47.864957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.670 [2024-04-24 05:26:47.864982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.670 [2024-04-24 05:26:47.864996] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.670 [2024-04-24 05:26:47.865009] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.670 [2024-04-24 05:26:47.865037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.670 qpair failed and we were unable to recover it. 
00:31:10.670 [2024-04-24 05:26:47.874849] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.670 [2024-04-24 05:26:47.874969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.670 [2024-04-24 05:26:47.874994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.670 [2024-04-24 05:26:47.875014] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.670 [2024-04-24 05:26:47.875027] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.670 [2024-04-24 05:26:47.875054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.670 qpair failed and we were unable to recover it. 
00:31:10.670 [2024-04-24 05:26:47.884933] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.670 [2024-04-24 05:26:47.885106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.670 [2024-04-24 05:26:47.885131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.671 [2024-04-24 05:26:47.885145] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.671 [2024-04-24 05:26:47.885158] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.671 [2024-04-24 05:26:47.885185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.671 qpair failed and we were unable to recover it. 
00:31:10.671 [2024-04-24 05:26:47.894943] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.671 [2024-04-24 05:26:47.895083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.671 [2024-04-24 05:26:47.895109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.671 [2024-04-24 05:26:47.895124] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.671 [2024-04-24 05:26:47.895137] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.671 [2024-04-24 05:26:47.895164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.671 qpair failed and we were unable to recover it. 
00:31:10.671 [2024-04-24 05:26:47.904953] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.671 [2024-04-24 05:26:47.905077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.671 [2024-04-24 05:26:47.905102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.671 [2024-04-24 05:26:47.905117] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.671 [2024-04-24 05:26:47.905130] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.671 [2024-04-24 05:26:47.905157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.671 qpair failed and we were unable to recover it. 
00:31:10.671 [2024-04-24 05:26:47.915079] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.671 [2024-04-24 05:26:47.915204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.671 [2024-04-24 05:26:47.915229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.671 [2024-04-24 05:26:47.915244] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.671 [2024-04-24 05:26:47.915257] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.671 [2024-04-24 05:26:47.915284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.671 qpair failed and we were unable to recover it. 
00:31:10.671 [2024-04-24 05:26:47.925048] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.671 [2024-04-24 05:26:47.925171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.671 [2024-04-24 05:26:47.925197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.671 [2024-04-24 05:26:47.925212] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.671 [2024-04-24 05:26:47.925225] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.671 [2024-04-24 05:26:47.925252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.671 qpair failed and we were unable to recover it. 
00:31:10.671 [2024-04-24 05:26:47.935081] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.671 [2024-04-24 05:26:47.935236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.671 [2024-04-24 05:26:47.935273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.671 [2024-04-24 05:26:47.935299] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.671 [2024-04-24 05:26:47.935313] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.671 [2024-04-24 05:26:47.935344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.671 qpair failed and we were unable to recover it. 
00:31:10.930 [2024-04-24 05:26:47.945091] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.930 [2024-04-24 05:26:47.945216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.930 [2024-04-24 05:26:47.945243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.930 [2024-04-24 05:26:47.945258] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.930 [2024-04-24 05:26:47.945271] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.930 [2024-04-24 05:26:47.945301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.930 qpair failed and we were unable to recover it. 
00:31:10.930 [2024-04-24 05:26:47.955081] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.930 [2024-04-24 05:26:47.955203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.930 [2024-04-24 05:26:47.955229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.930 [2024-04-24 05:26:47.955243] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.930 [2024-04-24 05:26:47.955256] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.930 [2024-04-24 05:26:47.955284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.930 qpair failed and we were unable to recover it. 
00:31:10.930 [2024-04-24 05:26:47.965159] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.930 [2024-04-24 05:26:47.965286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.930 [2024-04-24 05:26:47.965312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.930 [2024-04-24 05:26:47.965331] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.930 [2024-04-24 05:26:47.965344] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.930 [2024-04-24 05:26:47.965373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.930 qpair failed and we were unable to recover it. 
00:31:10.930 [2024-04-24 05:26:47.975190] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.930 [2024-04-24 05:26:47.975324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.930 [2024-04-24 05:26:47.975350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.930 [2024-04-24 05:26:47.975365] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.930 [2024-04-24 05:26:47.975378] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.930 [2024-04-24 05:26:47.975406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.930 qpair failed and we were unable to recover it. 
00:31:10.930 [2024-04-24 05:26:47.985217] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.930 [2024-04-24 05:26:47.985402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.930 [2024-04-24 05:26:47.985428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.930 [2024-04-24 05:26:47.985443] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.930 [2024-04-24 05:26:47.985460] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.930 [2024-04-24 05:26:47.985490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.930 qpair failed and we were unable to recover it. 
00:31:10.930 [2024-04-24 05:26:47.995240] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.930 [2024-04-24 05:26:47.995380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.930 [2024-04-24 05:26:47.995406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.930 [2024-04-24 05:26:47.995421] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.930 [2024-04-24 05:26:47.995434] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.930 [2024-04-24 05:26:47.995462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.930 qpair failed and we were unable to recover it. 
00:31:10.930 [2024-04-24 05:26:48.005268] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.930 [2024-04-24 05:26:48.005414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.930 [2024-04-24 05:26:48.005440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.930 [2024-04-24 05:26:48.005454] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.930 [2024-04-24 05:26:48.005467] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.930 [2024-04-24 05:26:48.005495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.930 qpair failed and we were unable to recover it. 
00:31:10.930 [2024-04-24 05:26:48.015334] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.930 [2024-04-24 05:26:48.015481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.930 [2024-04-24 05:26:48.015507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.930 [2024-04-24 05:26:48.015521] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.930 [2024-04-24 05:26:48.015534] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:10.930 [2024-04-24 05:26:48.015562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.930 qpair failed and we were unable to recover it. 
00:31:10.930 [2024-04-24 05:26:48.025301] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.930 [2024-04-24 05:26:48.025423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.930 [2024-04-24 05:26:48.025448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.930 [2024-04-24 05:26:48.025462] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.930 [2024-04-24 05:26:48.025475] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:10.930 [2024-04-24 05:26:48.025502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.930 qpair failed and we were unable to recover it.
00:31:10.930 [2024-04-24 05:26:48.035321] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.930 [2024-04-24 05:26:48.035444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.930 [2024-04-24 05:26:48.035470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.930 [2024-04-24 05:26:48.035485] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.930 [2024-04-24 05:26:48.035498] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:10.930 [2024-04-24 05:26:48.035526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.930 qpair failed and we were unable to recover it.
00:31:10.930 [2024-04-24 05:26:48.045394] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.930 [2024-04-24 05:26:48.045528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.930 [2024-04-24 05:26:48.045553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.930 [2024-04-24 05:26:48.045568] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.930 [2024-04-24 05:26:48.045580] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:10.930 [2024-04-24 05:26:48.045608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.930 qpair failed and we were unable to recover it.
00:31:10.930 [2024-04-24 05:26:48.055413] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.930 [2024-04-24 05:26:48.055581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.930 [2024-04-24 05:26:48.055611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.930 [2024-04-24 05:26:48.055638] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.930 [2024-04-24 05:26:48.055654] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:10.930 [2024-04-24 05:26:48.055683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.930 qpair failed and we were unable to recover it.
00:31:10.931 [2024-04-24 05:26:48.065420] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.931 [2024-04-24 05:26:48.065539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.931 [2024-04-24 05:26:48.065565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.931 [2024-04-24 05:26:48.065579] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.931 [2024-04-24 05:26:48.065592] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:10.931 [2024-04-24 05:26:48.065620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.931 qpair failed and we were unable to recover it.
00:31:10.931 [2024-04-24 05:26:48.075475] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.931 [2024-04-24 05:26:48.075602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.931 [2024-04-24 05:26:48.075635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.931 [2024-04-24 05:26:48.075653] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.931 [2024-04-24 05:26:48.075677] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:10.931 [2024-04-24 05:26:48.075704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.931 qpair failed and we were unable to recover it.
00:31:10.931 [2024-04-24 05:26:48.085487] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.931 [2024-04-24 05:26:48.085624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.931 [2024-04-24 05:26:48.085656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.931 [2024-04-24 05:26:48.085671] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.931 [2024-04-24 05:26:48.085684] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:10.931 [2024-04-24 05:26:48.085712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.931 qpair failed and we were unable to recover it.
00:31:10.931 [2024-04-24 05:26:48.095577] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.931 [2024-04-24 05:26:48.095716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.931 [2024-04-24 05:26:48.095742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.931 [2024-04-24 05:26:48.095756] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.931 [2024-04-24 05:26:48.095769] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:10.931 [2024-04-24 05:26:48.095799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.931 qpair failed and we were unable to recover it.
00:31:10.931 [2024-04-24 05:26:48.105527] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.931 [2024-04-24 05:26:48.105653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.931 [2024-04-24 05:26:48.105679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.931 [2024-04-24 05:26:48.105694] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.931 [2024-04-24 05:26:48.105706] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:10.931 [2024-04-24 05:26:48.105735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.931 qpair failed and we were unable to recover it.
00:31:10.931 [2024-04-24 05:26:48.115577] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.931 [2024-04-24 05:26:48.115701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.931 [2024-04-24 05:26:48.115726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.931 [2024-04-24 05:26:48.115741] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.931 [2024-04-24 05:26:48.115753] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:10.931 [2024-04-24 05:26:48.115781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.931 qpair failed and we were unable to recover it.
00:31:10.931 [2024-04-24 05:26:48.125617] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.931 [2024-04-24 05:26:48.125755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.931 [2024-04-24 05:26:48.125780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.931 [2024-04-24 05:26:48.125795] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.931 [2024-04-24 05:26:48.125807] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:10.931 [2024-04-24 05:26:48.125835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.931 qpair failed and we were unable to recover it.
00:31:10.931 [2024-04-24 05:26:48.135658] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.931 [2024-04-24 05:26:48.135783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.931 [2024-04-24 05:26:48.135809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.931 [2024-04-24 05:26:48.135824] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.931 [2024-04-24 05:26:48.135837] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:10.931 [2024-04-24 05:26:48.135865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.931 qpair failed and we were unable to recover it.
00:31:10.931 [2024-04-24 05:26:48.145687] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.931 [2024-04-24 05:26:48.145811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.931 [2024-04-24 05:26:48.145843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.931 [2024-04-24 05:26:48.145860] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.931 [2024-04-24 05:26:48.145873] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:10.931 [2024-04-24 05:26:48.145902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.931 qpair failed and we were unable to recover it.
00:31:10.931 [2024-04-24 05:26:48.155676] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.931 [2024-04-24 05:26:48.155793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.931 [2024-04-24 05:26:48.155818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.931 [2024-04-24 05:26:48.155833] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.931 [2024-04-24 05:26:48.155847] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:10.931 [2024-04-24 05:26:48.155875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.931 qpair failed and we were unable to recover it.
00:31:10.931 [2024-04-24 05:26:48.165723] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.931 [2024-04-24 05:26:48.165847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.931 [2024-04-24 05:26:48.165872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.931 [2024-04-24 05:26:48.165886] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.931 [2024-04-24 05:26:48.165900] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:10.931 [2024-04-24 05:26:48.165927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.931 qpair failed and we were unable to recover it.
00:31:10.931 [2024-04-24 05:26:48.175738] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.931 [2024-04-24 05:26:48.175863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.931 [2024-04-24 05:26:48.175887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.931 [2024-04-24 05:26:48.175901] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.931 [2024-04-24 05:26:48.175914] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:10.931 [2024-04-24 05:26:48.175942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.931 qpair failed and we were unable to recover it.
00:31:10.931 [2024-04-24 05:26:48.185861] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.931 [2024-04-24 05:26:48.185990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.931 [2024-04-24 05:26:48.186015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.931 [2024-04-24 05:26:48.186030] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.931 [2024-04-24 05:26:48.186043] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:10.931 [2024-04-24 05:26:48.186075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.931 qpair failed and we were unable to recover it.
00:31:10.931 [2024-04-24 05:26:48.195817] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.932 [2024-04-24 05:26:48.195951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.932 [2024-04-24 05:26:48.195987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.932 [2024-04-24 05:26:48.196011] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.932 [2024-04-24 05:26:48.196035] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:10.932 [2024-04-24 05:26:48.196080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.932 qpair failed and we were unable to recover it.
00:31:11.190 [2024-04-24 05:26:48.205840] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.191 [2024-04-24 05:26:48.205974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.191 [2024-04-24 05:26:48.206004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.191 [2024-04-24 05:26:48.206020] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.191 [2024-04-24 05:26:48.206033] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:11.191 [2024-04-24 05:26:48.206062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.191 qpair failed and we were unable to recover it.
00:31:11.191 [2024-04-24 05:26:48.215901] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.191 [2024-04-24 05:26:48.216073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.191 [2024-04-24 05:26:48.216101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.191 [2024-04-24 05:26:48.216116] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.191 [2024-04-24 05:26:48.216144] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:11.191 [2024-04-24 05:26:48.216172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.191 qpair failed and we were unable to recover it.
00:31:11.191 [2024-04-24 05:26:48.225900] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.191 [2024-04-24 05:26:48.226040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.191 [2024-04-24 05:26:48.226070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.191 [2024-04-24 05:26:48.226086] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.191 [2024-04-24 05:26:48.226100] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:11.191 [2024-04-24 05:26:48.226129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.191 qpair failed and we were unable to recover it.
00:31:11.191 [2024-04-24 05:26:48.236016] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.191 [2024-04-24 05:26:48.236147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.191 [2024-04-24 05:26:48.236180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.191 [2024-04-24 05:26:48.236196] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.191 [2024-04-24 05:26:48.236209] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:11.191 [2024-04-24 05:26:48.236237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.191 qpair failed and we were unable to recover it.
00:31:11.191 [2024-04-24 05:26:48.245952] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.191 [2024-04-24 05:26:48.246083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.191 [2024-04-24 05:26:48.246109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.191 [2024-04-24 05:26:48.246124] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.191 [2024-04-24 05:26:48.246137] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:11.191 [2024-04-24 05:26:48.246165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.191 qpair failed and we were unable to recover it.
00:31:11.191 [2024-04-24 05:26:48.255959] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.191 [2024-04-24 05:26:48.256094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.191 [2024-04-24 05:26:48.256119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.191 [2024-04-24 05:26:48.256134] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.191 [2024-04-24 05:26:48.256146] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:11.191 [2024-04-24 05:26:48.256174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.191 qpair failed and we were unable to recover it.
00:31:11.191 [2024-04-24 05:26:48.266048] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.191 [2024-04-24 05:26:48.266172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.191 [2024-04-24 05:26:48.266200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.191 [2024-04-24 05:26:48.266215] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.191 [2024-04-24 05:26:48.266227] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:11.191 [2024-04-24 05:26:48.266255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.191 qpair failed and we were unable to recover it.
00:31:11.191 [2024-04-24 05:26:48.276010] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.191 [2024-04-24 05:26:48.276139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.191 [2024-04-24 05:26:48.276165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.191 [2024-04-24 05:26:48.276181] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.191 [2024-04-24 05:26:48.276194] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:11.191 [2024-04-24 05:26:48.276230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.191 qpair failed and we were unable to recover it.
00:31:11.191 [2024-04-24 05:26:48.286070] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.191 [2024-04-24 05:26:48.286199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.191 [2024-04-24 05:26:48.286224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.191 [2024-04-24 05:26:48.286239] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.191 [2024-04-24 05:26:48.286253] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:11.191 [2024-04-24 05:26:48.286280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.191 qpair failed and we were unable to recover it.
00:31:11.191 [2024-04-24 05:26:48.296112] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.191 [2024-04-24 05:26:48.296238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.191 [2024-04-24 05:26:48.296265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.191 [2024-04-24 05:26:48.296279] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.191 [2024-04-24 05:26:48.296292] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:11.191 [2024-04-24 05:26:48.296321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.191 qpair failed and we were unable to recover it.
00:31:11.191 [2024-04-24 05:26:48.306142] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.191 [2024-04-24 05:26:48.306279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.191 [2024-04-24 05:26:48.306306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.191 [2024-04-24 05:26:48.306321] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.191 [2024-04-24 05:26:48.306333] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:11.191 [2024-04-24 05:26:48.306360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.191 qpair failed and we were unable to recover it.
00:31:11.191 [2024-04-24 05:26:48.316154] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.191 [2024-04-24 05:26:48.316280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.191 [2024-04-24 05:26:48.316306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.191 [2024-04-24 05:26:48.316321] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.191 [2024-04-24 05:26:48.316334] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:11.191 [2024-04-24 05:26:48.316361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.191 qpair failed and we were unable to recover it.
00:31:11.191 [2024-04-24 05:26:48.326182] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.191 [2024-04-24 05:26:48.326312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.191 [2024-04-24 05:26:48.326344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.191 [2024-04-24 05:26:48.326359] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.191 [2024-04-24 05:26:48.326372] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:11.191 [2024-04-24 05:26:48.326400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.191 qpair failed and we were unable to recover it.
00:31:11.191 [2024-04-24 05:26:48.336200] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.191 [2024-04-24 05:26:48.336321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.191 [2024-04-24 05:26:48.336347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.192 [2024-04-24 05:26:48.336362] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.192 [2024-04-24 05:26:48.336375] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:11.192 [2024-04-24 05:26:48.336402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.192 qpair failed and we were unable to recover it.
00:31:11.192 [2024-04-24 05:26:48.346219] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.192 [2024-04-24 05:26:48.346386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.192 [2024-04-24 05:26:48.346414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.192 [2024-04-24 05:26:48.346429] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.192 [2024-04-24 05:26:48.346441] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:11.192 [2024-04-24 05:26:48.346469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.192 qpair failed and we were unable to recover it.
00:31:11.192 [2024-04-24 05:26:48.356302] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.192 [2024-04-24 05:26:48.356453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.192 [2024-04-24 05:26:48.356479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.192 [2024-04-24 05:26:48.356495] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.192 [2024-04-24 05:26:48.356507] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:11.192 [2024-04-24 05:26:48.356535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.192 qpair failed and we were unable to recover it.
00:31:11.192 [2024-04-24 05:26:48.366294] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.192 [2024-04-24 05:26:48.366432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.192 [2024-04-24 05:26:48.366458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.192 [2024-04-24 05:26:48.366473] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.192 [2024-04-24 05:26:48.366491] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:11.192 [2024-04-24 05:26:48.366520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.192 qpair failed and we were unable to recover it.
00:31:11.192 [2024-04-24 05:26:48.376350] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.192 [2024-04-24 05:26:48.376480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.192 [2024-04-24 05:26:48.376507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.192 [2024-04-24 05:26:48.376523] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.192 [2024-04-24 05:26:48.376536] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:11.192 [2024-04-24 05:26:48.376564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.192 qpair failed and we were unable to recover it.
00:31:11.192 [2024-04-24 05:26:48.386370] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.192 [2024-04-24 05:26:48.386499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.192 [2024-04-24 05:26:48.386524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.192 [2024-04-24 05:26:48.386538] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.192 [2024-04-24 05:26:48.386551] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.192 [2024-04-24 05:26:48.386578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.192 qpair failed and we were unable to recover it. 
00:31:11.192 [2024-04-24 05:26:48.396373] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.192 [2024-04-24 05:26:48.396496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.192 [2024-04-24 05:26:48.396522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.192 [2024-04-24 05:26:48.396536] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.192 [2024-04-24 05:26:48.396549] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.192 [2024-04-24 05:26:48.396576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.192 qpair failed and we were unable to recover it. 
00:31:11.192 [2024-04-24 05:26:48.406446] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.192 [2024-04-24 05:26:48.406592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.192 [2024-04-24 05:26:48.406618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.192 [2024-04-24 05:26:48.406645] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.192 [2024-04-24 05:26:48.406668] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.192 [2024-04-24 05:26:48.406697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.192 qpair failed and we were unable to recover it. 
00:31:11.192 [2024-04-24 05:26:48.416429] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.192 [2024-04-24 05:26:48.416583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.192 [2024-04-24 05:26:48.416610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.192 [2024-04-24 05:26:48.416625] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.192 [2024-04-24 05:26:48.416649] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.192 [2024-04-24 05:26:48.416678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.192 qpair failed and we were unable to recover it. 
00:31:11.192 [2024-04-24 05:26:48.426492] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.192 [2024-04-24 05:26:48.426661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.192 [2024-04-24 05:26:48.426693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.192 [2024-04-24 05:26:48.426708] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.192 [2024-04-24 05:26:48.426721] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.192 [2024-04-24 05:26:48.426749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.192 qpair failed and we were unable to recover it. 
00:31:11.192 [2024-04-24 05:26:48.436504] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.192 [2024-04-24 05:26:48.436642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.192 [2024-04-24 05:26:48.436669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.192 [2024-04-24 05:26:48.436684] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.192 [2024-04-24 05:26:48.436696] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.192 [2024-04-24 05:26:48.436724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.192 qpair failed and we were unable to recover it. 
00:31:11.192 [2024-04-24 05:26:48.446545] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.192 [2024-04-24 05:26:48.446686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.192 [2024-04-24 05:26:48.446712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.192 [2024-04-24 05:26:48.446727] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.192 [2024-04-24 05:26:48.446740] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.192 [2024-04-24 05:26:48.446768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.192 qpair failed and we were unable to recover it. 
00:31:11.192 [2024-04-24 05:26:48.456586] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.192 [2024-04-24 05:26:48.456837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.192 [2024-04-24 05:26:48.456874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.192 [2024-04-24 05:26:48.456902] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.192 [2024-04-24 05:26:48.456940] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.192 [2024-04-24 05:26:48.456974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.192 qpair failed and we were unable to recover it. 
00:31:11.451 [2024-04-24 05:26:48.466598] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.451 [2024-04-24 05:26:48.466753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.451 [2024-04-24 05:26:48.466784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.451 [2024-04-24 05:26:48.466803] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.451 [2024-04-24 05:26:48.466817] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.451 [2024-04-24 05:26:48.466847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.451 qpair failed and we were unable to recover it. 
00:31:11.451 [2024-04-24 05:26:48.476616] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.451 [2024-04-24 05:26:48.476781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.451 [2024-04-24 05:26:48.476808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.451 [2024-04-24 05:26:48.476823] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.451 [2024-04-24 05:26:48.476836] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.451 [2024-04-24 05:26:48.476864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.451 qpair failed and we were unable to recover it. 
00:31:11.451 [2024-04-24 05:26:48.486653] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.451 [2024-04-24 05:26:48.486819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.451 [2024-04-24 05:26:48.486846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.451 [2024-04-24 05:26:48.486860] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.451 [2024-04-24 05:26:48.486873] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.451 [2024-04-24 05:26:48.486901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.451 qpair failed and we were unable to recover it. 
00:31:11.451 [2024-04-24 05:26:48.496717] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.451 [2024-04-24 05:26:48.496848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.451 [2024-04-24 05:26:48.496874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.451 [2024-04-24 05:26:48.496889] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.451 [2024-04-24 05:26:48.496901] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.451 [2024-04-24 05:26:48.496934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.451 qpair failed and we were unable to recover it. 
00:31:11.451 [2024-04-24 05:26:48.506676] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.451 [2024-04-24 05:26:48.506801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.451 [2024-04-24 05:26:48.506827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.451 [2024-04-24 05:26:48.506842] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.451 [2024-04-24 05:26:48.506855] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.451 [2024-04-24 05:26:48.506882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.451 qpair failed and we were unable to recover it. 
00:31:11.451 [2024-04-24 05:26:48.516750] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.452 [2024-04-24 05:26:48.516910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.452 [2024-04-24 05:26:48.516936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.452 [2024-04-24 05:26:48.516950] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.452 [2024-04-24 05:26:48.516963] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.452 [2024-04-24 05:26:48.516992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.452 qpair failed and we were unable to recover it. 
00:31:11.452 [2024-04-24 05:26:48.526756] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.452 [2024-04-24 05:26:48.526910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.452 [2024-04-24 05:26:48.526936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.452 [2024-04-24 05:26:48.526951] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.452 [2024-04-24 05:26:48.526963] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.452 [2024-04-24 05:26:48.526991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.452 qpair failed and we were unable to recover it. 
00:31:11.452 [2024-04-24 05:26:48.536783] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.452 [2024-04-24 05:26:48.536917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.452 [2024-04-24 05:26:48.536944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.452 [2024-04-24 05:26:48.536959] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.452 [2024-04-24 05:26:48.536972] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.452 [2024-04-24 05:26:48.537014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.452 qpair failed and we were unable to recover it. 
00:31:11.452 [2024-04-24 05:26:48.546814] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.452 [2024-04-24 05:26:48.546944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.452 [2024-04-24 05:26:48.546970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.452 [2024-04-24 05:26:48.546984] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.452 [2024-04-24 05:26:48.547002] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.452 [2024-04-24 05:26:48.547031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.452 qpair failed and we were unable to recover it. 
00:31:11.452 [2024-04-24 05:26:48.556870] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.452 [2024-04-24 05:26:48.557026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.452 [2024-04-24 05:26:48.557052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.452 [2024-04-24 05:26:48.557067] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.452 [2024-04-24 05:26:48.557079] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.452 [2024-04-24 05:26:48.557108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.452 qpair failed and we were unable to recover it. 
00:31:11.452 [2024-04-24 05:26:48.566909] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.452 [2024-04-24 05:26:48.567043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.452 [2024-04-24 05:26:48.567069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.452 [2024-04-24 05:26:48.567084] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.452 [2024-04-24 05:26:48.567096] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.452 [2024-04-24 05:26:48.567124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.452 qpair failed and we were unable to recover it. 
00:31:11.452 [2024-04-24 05:26:48.576882] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.452 [2024-04-24 05:26:48.577017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.452 [2024-04-24 05:26:48.577043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.452 [2024-04-24 05:26:48.577059] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.452 [2024-04-24 05:26:48.577071] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.452 [2024-04-24 05:26:48.577099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.452 qpair failed and we were unable to recover it. 
00:31:11.452 [2024-04-24 05:26:48.586891] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.452 [2024-04-24 05:26:48.587048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.452 [2024-04-24 05:26:48.587074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.452 [2024-04-24 05:26:48.587089] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.452 [2024-04-24 05:26:48.587101] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.452 [2024-04-24 05:26:48.587128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.452 qpair failed and we were unable to recover it. 
00:31:11.452 [2024-04-24 05:26:48.596921] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.452 [2024-04-24 05:26:48.597053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.452 [2024-04-24 05:26:48.597079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.452 [2024-04-24 05:26:48.597094] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.452 [2024-04-24 05:26:48.597106] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.452 [2024-04-24 05:26:48.597135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.452 qpair failed and we were unable to recover it. 
00:31:11.452 [2024-04-24 05:26:48.607002] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.452 [2024-04-24 05:26:48.607176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.452 [2024-04-24 05:26:48.607201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.452 [2024-04-24 05:26:48.607216] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.452 [2024-04-24 05:26:48.607228] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.452 [2024-04-24 05:26:48.607256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.452 qpair failed and we were unable to recover it. 
00:31:11.452 [2024-04-24 05:26:48.616997] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.452 [2024-04-24 05:26:48.617127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.452 [2024-04-24 05:26:48.617153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.452 [2024-04-24 05:26:48.617168] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.452 [2024-04-24 05:26:48.617181] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.452 [2024-04-24 05:26:48.617210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.452 qpair failed and we were unable to recover it. 
00:31:11.452 [2024-04-24 05:26:48.627045] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.452 [2024-04-24 05:26:48.627183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.452 [2024-04-24 05:26:48.627209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.452 [2024-04-24 05:26:48.627224] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.452 [2024-04-24 05:26:48.627236] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.452 [2024-04-24 05:26:48.627263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.452 qpair failed and we were unable to recover it. 
00:31:11.452 [2024-04-24 05:26:48.637140] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.452 [2024-04-24 05:26:48.637264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.452 [2024-04-24 05:26:48.637290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.452 [2024-04-24 05:26:48.637310] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.452 [2024-04-24 05:26:48.637324] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.452 [2024-04-24 05:26:48.637351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.452 qpair failed and we were unable to recover it. 
00:31:11.453 [2024-04-24 05:26:48.647212] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.453 [2024-04-24 05:26:48.647393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.453 [2024-04-24 05:26:48.647418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.453 [2024-04-24 05:26:48.647433] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.453 [2024-04-24 05:26:48.647445] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.453 [2024-04-24 05:26:48.647473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.453 qpair failed and we were unable to recover it. 
00:31:11.453 [2024-04-24 05:26:48.657104] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.453 [2024-04-24 05:26:48.657245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.453 [2024-04-24 05:26:48.657270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.453 [2024-04-24 05:26:48.657285] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.453 [2024-04-24 05:26:48.657297] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.453 [2024-04-24 05:26:48.657327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.453 qpair failed and we were unable to recover it. 
00:31:11.453 [2024-04-24 05:26:48.667130] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.453 [2024-04-24 05:26:48.667267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.453 [2024-04-24 05:26:48.667294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.453 [2024-04-24 05:26:48.667309] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.453 [2024-04-24 05:26:48.667321] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.453 [2024-04-24 05:26:48.667349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.453 qpair failed and we were unable to recover it. 
00:31:11.453 [2024-04-24 05:26:48.677286] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.453 [2024-04-24 05:26:48.677424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.453 [2024-04-24 05:26:48.677450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.453 [2024-04-24 05:26:48.677465] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.453 [2024-04-24 05:26:48.677477] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.453 [2024-04-24 05:26:48.677504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.453 qpair failed and we were unable to recover it. 
00:31:11.453 [2024-04-24 05:26:48.687253] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.453 [2024-04-24 05:26:48.687426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.453 [2024-04-24 05:26:48.687451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.453 [2024-04-24 05:26:48.687466] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.453 [2024-04-24 05:26:48.687478] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.453 [2024-04-24 05:26:48.687506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.453 qpair failed and we were unable to recover it. 
00:31:11.453 [2024-04-24 05:26:48.697275] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.453 [2024-04-24 05:26:48.697444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.453 [2024-04-24 05:26:48.697471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.453 [2024-04-24 05:26:48.697485] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.453 [2024-04-24 05:26:48.697498] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.453 [2024-04-24 05:26:48.697525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.453 qpair failed and we were unable to recover it. 
00:31:11.453 [2024-04-24 05:26:48.707306] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.453 [2024-04-24 05:26:48.707450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.453 [2024-04-24 05:26:48.707476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.453 [2024-04-24 05:26:48.707491] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.453 [2024-04-24 05:26:48.707503] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.453 [2024-04-24 05:26:48.707530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.453 qpair failed and we were unable to recover it. 
00:31:11.453 [2024-04-24 05:26:48.717285] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.453 [2024-04-24 05:26:48.717490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.453 [2024-04-24 05:26:48.717526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.453 [2024-04-24 05:26:48.717552] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.453 [2024-04-24 05:26:48.717577] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.453 [2024-04-24 05:26:48.717620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.453 qpair failed and we were unable to recover it. 
00:31:11.712 [2024-04-24 05:26:48.727363] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.712 [2024-04-24 05:26:48.727524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.712 [2024-04-24 05:26:48.727553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.712 [2024-04-24 05:26:48.727574] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.712 [2024-04-24 05:26:48.727588] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.712 [2024-04-24 05:26:48.727617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.712 qpair failed and we were unable to recover it. 
00:31:11.712 [2024-04-24 05:26:48.737355] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.712 [2024-04-24 05:26:48.737491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.712 [2024-04-24 05:26:48.737518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.712 [2024-04-24 05:26:48.737533] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.712 [2024-04-24 05:26:48.737546] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.712 [2024-04-24 05:26:48.737574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.712 qpair failed and we were unable to recover it. 
00:31:11.712 [2024-04-24 05:26:48.747390] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.712 [2024-04-24 05:26:48.747523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.713 [2024-04-24 05:26:48.747550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.713 [2024-04-24 05:26:48.747564] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.713 [2024-04-24 05:26:48.747577] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.713 [2024-04-24 05:26:48.747605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.713 qpair failed and we were unable to recover it. 
00:31:11.713 [2024-04-24 05:26:48.757379] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.713 [2024-04-24 05:26:48.757508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.713 [2024-04-24 05:26:48.757534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.713 [2024-04-24 05:26:48.757549] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.713 [2024-04-24 05:26:48.757562] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.713 [2024-04-24 05:26:48.757589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.713 qpair failed and we were unable to recover it. 
00:31:11.713 [2024-04-24 05:26:48.767431] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.713 [2024-04-24 05:26:48.767568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.713 [2024-04-24 05:26:48.767594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.713 [2024-04-24 05:26:48.767609] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.713 [2024-04-24 05:26:48.767622] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.713 [2024-04-24 05:26:48.767659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.713 qpair failed and we were unable to recover it. 
00:31:11.713 [2024-04-24 05:26:48.777459] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.713 [2024-04-24 05:26:48.777591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.713 [2024-04-24 05:26:48.777617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.713 [2024-04-24 05:26:48.777640] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.713 [2024-04-24 05:26:48.777655] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.713 [2024-04-24 05:26:48.777683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.713 qpair failed and we were unable to recover it. 
00:31:11.713 [2024-04-24 05:26:48.787478] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.713 [2024-04-24 05:26:48.787610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.713 [2024-04-24 05:26:48.787644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.713 [2024-04-24 05:26:48.787661] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.713 [2024-04-24 05:26:48.787673] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.713 [2024-04-24 05:26:48.787702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.713 qpair failed and we were unable to recover it. 
00:31:11.713 [2024-04-24 05:26:48.797521] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.713 [2024-04-24 05:26:48.797652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.713 [2024-04-24 05:26:48.797679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.713 [2024-04-24 05:26:48.797694] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.713 [2024-04-24 05:26:48.797706] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.713 [2024-04-24 05:26:48.797734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.713 qpair failed and we were unable to recover it. 
00:31:11.713 [2024-04-24 05:26:48.807569] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.713 [2024-04-24 05:26:48.807709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.713 [2024-04-24 05:26:48.807736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.713 [2024-04-24 05:26:48.807751] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.713 [2024-04-24 05:26:48.807763] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.713 [2024-04-24 05:26:48.807791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.713 qpair failed and we were unable to recover it. 
00:31:11.713 [2024-04-24 05:26:48.817560] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.713 [2024-04-24 05:26:48.817688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.713 [2024-04-24 05:26:48.817720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.713 [2024-04-24 05:26:48.817736] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.713 [2024-04-24 05:26:48.817748] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.713 [2024-04-24 05:26:48.817777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.713 qpair failed and we were unable to recover it. 
00:31:11.713 [2024-04-24 05:26:48.827698] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.713 [2024-04-24 05:26:48.827834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.713 [2024-04-24 05:26:48.827860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.713 [2024-04-24 05:26:48.827875] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.713 [2024-04-24 05:26:48.827887] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.713 [2024-04-24 05:26:48.827915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.713 qpair failed and we were unable to recover it. 
00:31:11.713 [2024-04-24 05:26:48.837739] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.713 [2024-04-24 05:26:48.837897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.713 [2024-04-24 05:26:48.837923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.713 [2024-04-24 05:26:48.837938] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.713 [2024-04-24 05:26:48.837951] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.713 [2024-04-24 05:26:48.837978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.713 qpair failed and we were unable to recover it. 
00:31:11.713 [2024-04-24 05:26:48.847691] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.713 [2024-04-24 05:26:48.847827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.713 [2024-04-24 05:26:48.847853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.713 [2024-04-24 05:26:48.847868] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.713 [2024-04-24 05:26:48.847880] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.713 [2024-04-24 05:26:48.847909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.713 qpair failed and we were unable to recover it. 
00:31:11.713 [2024-04-24 05:26:48.857696] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.713 [2024-04-24 05:26:48.857832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.713 [2024-04-24 05:26:48.857858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.713 [2024-04-24 05:26:48.857873] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.713 [2024-04-24 05:26:48.857886] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.713 [2024-04-24 05:26:48.857913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.713 qpair failed and we were unable to recover it. 
00:31:11.713 [2024-04-24 05:26:48.867753] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.713 [2024-04-24 05:26:48.867920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.713 [2024-04-24 05:26:48.867947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.713 [2024-04-24 05:26:48.867962] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.713 [2024-04-24 05:26:48.867974] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.713 [2024-04-24 05:26:48.868002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.713 qpair failed and we were unable to recover it. 
00:31:11.713 [2024-04-24 05:26:48.877783] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.713 [2024-04-24 05:26:48.877913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.713 [2024-04-24 05:26:48.877938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.713 [2024-04-24 05:26:48.877953] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.713 [2024-04-24 05:26:48.877966] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.713 [2024-04-24 05:26:48.877993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.714 qpair failed and we were unable to recover it. 
00:31:11.714 [2024-04-24 05:26:48.887806] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.714 [2024-04-24 05:26:48.887942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.714 [2024-04-24 05:26:48.887966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.714 [2024-04-24 05:26:48.887980] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.714 [2024-04-24 05:26:48.887993] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.714 [2024-04-24 05:26:48.888020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.714 qpair failed and we were unable to recover it. 
00:31:11.714 [2024-04-24 05:26:48.897837] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.714 [2024-04-24 05:26:48.898010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.714 [2024-04-24 05:26:48.898036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.714 [2024-04-24 05:26:48.898051] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.714 [2024-04-24 05:26:48.898064] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.714 [2024-04-24 05:26:48.898107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.714 qpair failed and we were unable to recover it. 
00:31:11.714 [2024-04-24 05:26:48.908003] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.714 [2024-04-24 05:26:48.908129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.714 [2024-04-24 05:26:48.908160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.714 [2024-04-24 05:26:48.908176] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.714 [2024-04-24 05:26:48.908189] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.714 [2024-04-24 05:26:48.908217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.714 qpair failed and we were unable to recover it. 
00:31:11.714 [2024-04-24 05:26:48.917995] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.714 [2024-04-24 05:26:48.918129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.714 [2024-04-24 05:26:48.918157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.714 [2024-04-24 05:26:48.918176] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.714 [2024-04-24 05:26:48.918189] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.714 [2024-04-24 05:26:48.918217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.714 qpair failed and we were unable to recover it. 
00:31:11.714 [2024-04-24 05:26:48.927937] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.714 [2024-04-24 05:26:48.928068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.714 [2024-04-24 05:26:48.928094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.714 [2024-04-24 05:26:48.928110] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.714 [2024-04-24 05:26:48.928122] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.714 [2024-04-24 05:26:48.928149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.714 qpair failed and we were unable to recover it. 
00:31:11.714 [2024-04-24 05:26:48.937932] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.714 [2024-04-24 05:26:48.938058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.714 [2024-04-24 05:26:48.938084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.714 [2024-04-24 05:26:48.938099] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.714 [2024-04-24 05:26:48.938112] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.714 [2024-04-24 05:26:48.938139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.714 qpair failed and we were unable to recover it. 
00:31:11.714 [2024-04-24 05:26:48.947976] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.714 [2024-04-24 05:26:48.948116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.714 [2024-04-24 05:26:48.948142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.714 [2024-04-24 05:26:48.948157] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.714 [2024-04-24 05:26:48.948169] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.714 [2024-04-24 05:26:48.948201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.714 qpair failed and we were unable to recover it. 
00:31:11.714 [2024-04-24 05:26:48.957981] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.714 [2024-04-24 05:26:48.958108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.714 [2024-04-24 05:26:48.958134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.714 [2024-04-24 05:26:48.958148] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.714 [2024-04-24 05:26:48.958161] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.714 [2024-04-24 05:26:48.958188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.714 qpair failed and we were unable to recover it. 
00:31:11.714 [2024-04-24 05:26:48.968032] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.714 [2024-04-24 05:26:48.968166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.714 [2024-04-24 05:26:48.968191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.714 [2024-04-24 05:26:48.968206] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.714 [2024-04-24 05:26:48.968219] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.714 [2024-04-24 05:26:48.968246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.714 qpair failed and we were unable to recover it. 
00:31:11.714 [2024-04-24 05:26:48.978081] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.714 [2024-04-24 05:26:48.978241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.714 [2024-04-24 05:26:48.978270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.714 [2024-04-24 05:26:48.978285] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.714 [2024-04-24 05:26:48.978298] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.714 [2024-04-24 05:26:48.978330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.714 qpair failed and we were unable to recover it. 
00:31:11.973 [2024-04-24 05:26:48.988087] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.973 [2024-04-24 05:26:48.988218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.973 [2024-04-24 05:26:48.988248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.973 [2024-04-24 05:26:48.988264] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.973 [2024-04-24 05:26:48.988277] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.973 [2024-04-24 05:26:48.988307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.973 qpair failed and we were unable to recover it. 
00:31:11.973 [2024-04-24 05:26:48.998095] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.973 [2024-04-24 05:26:48.998223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.973 [2024-04-24 05:26:48.998255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.973 [2024-04-24 05:26:48.998271] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.973 [2024-04-24 05:26:48.998284] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.973 [2024-04-24 05:26:48.998312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.973 qpair failed and we were unable to recover it. 
00:31:11.973 [2024-04-24 05:26:49.008198] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.973 [2024-04-24 05:26:49.008351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.973 [2024-04-24 05:26:49.008380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.973 [2024-04-24 05:26:49.008397] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.973 [2024-04-24 05:26:49.008409] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.973 [2024-04-24 05:26:49.008439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.973 qpair failed and we were unable to recover it. 
00:31:11.973 [2024-04-24 05:26:49.018165] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.973 [2024-04-24 05:26:49.018300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.973 [2024-04-24 05:26:49.018326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.973 [2024-04-24 05:26:49.018341] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.974 [2024-04-24 05:26:49.018354] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.974 [2024-04-24 05:26:49.018383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.974 qpair failed and we were unable to recover it. 
00:31:11.974 [2024-04-24 05:26:49.028186] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.974 [2024-04-24 05:26:49.028317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.974 [2024-04-24 05:26:49.028344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.974 [2024-04-24 05:26:49.028359] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.974 [2024-04-24 05:26:49.028371] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.974 [2024-04-24 05:26:49.028399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.974 qpair failed and we were unable to recover it. 
00:31:11.974 [2024-04-24 05:26:49.038274] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.974 [2024-04-24 05:26:49.038442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.974 [2024-04-24 05:26:49.038468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.974 [2024-04-24 05:26:49.038483] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.974 [2024-04-24 05:26:49.038495] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.974 [2024-04-24 05:26:49.038530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.974 qpair failed and we were unable to recover it. 
00:31:11.974 [2024-04-24 05:26:49.048283] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.974 [2024-04-24 05:26:49.048423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.974 [2024-04-24 05:26:49.048450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.974 [2024-04-24 05:26:49.048465] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.974 [2024-04-24 05:26:49.048477] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.974 [2024-04-24 05:26:49.048505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.974 qpair failed and we were unable to recover it. 
00:31:11.974 [2024-04-24 05:26:49.058292] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.974 [2024-04-24 05:26:49.058450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.974 [2024-04-24 05:26:49.058477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.974 [2024-04-24 05:26:49.058492] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.974 [2024-04-24 05:26:49.058504] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.974 [2024-04-24 05:26:49.058548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.974 qpair failed and we were unable to recover it. 
00:31:11.974 [2024-04-24 05:26:49.068328] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.974 [2024-04-24 05:26:49.068455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.974 [2024-04-24 05:26:49.068481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.974 [2024-04-24 05:26:49.068496] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.974 [2024-04-24 05:26:49.068508] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.974 [2024-04-24 05:26:49.068536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.974 qpair failed and we were unable to recover it. 
00:31:11.974 [2024-04-24 05:26:49.078363] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.974 [2024-04-24 05:26:49.078496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.974 [2024-04-24 05:26:49.078522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.974 [2024-04-24 05:26:49.078537] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.974 [2024-04-24 05:26:49.078550] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.974 [2024-04-24 05:26:49.078577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.974 qpair failed and we were unable to recover it. 
00:31:11.974 [2024-04-24 05:26:49.088395] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.974 [2024-04-24 05:26:49.088559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.974 [2024-04-24 05:26:49.088590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.974 [2024-04-24 05:26:49.088606] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.974 [2024-04-24 05:26:49.088618] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.974 [2024-04-24 05:26:49.088658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.974 qpair failed and we were unable to recover it. 
00:31:11.974 [2024-04-24 05:26:49.098438] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.974 [2024-04-24 05:26:49.098573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.974 [2024-04-24 05:26:49.098600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.974 [2024-04-24 05:26:49.098616] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.974 [2024-04-24 05:26:49.098636] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.974 [2024-04-24 05:26:49.098669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.974 qpair failed and we were unable to recover it. 
00:31:11.974 [2024-04-24 05:26:49.108421] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.974 [2024-04-24 05:26:49.108562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.974 [2024-04-24 05:26:49.108589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.974 [2024-04-24 05:26:49.108604] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.974 [2024-04-24 05:26:49.108617] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.974 [2024-04-24 05:26:49.108653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.974 qpair failed and we were unable to recover it. 
00:31:11.974 [2024-04-24 05:26:49.118443] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.974 [2024-04-24 05:26:49.118575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.974 [2024-04-24 05:26:49.118601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.974 [2024-04-24 05:26:49.118616] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.974 [2024-04-24 05:26:49.118635] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.974 [2024-04-24 05:26:49.118666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.974 qpair failed and we were unable to recover it. 
00:31:11.974 [2024-04-24 05:26:49.128542] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.974 [2024-04-24 05:26:49.128684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.974 [2024-04-24 05:26:49.128710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.974 [2024-04-24 05:26:49.128725] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.974 [2024-04-24 05:26:49.128742] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.974 [2024-04-24 05:26:49.128771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.974 qpair failed and we were unable to recover it. 
00:31:11.974 [2024-04-24 05:26:49.138529] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.974 [2024-04-24 05:26:49.138692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.974 [2024-04-24 05:26:49.138719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.974 [2024-04-24 05:26:49.138734] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.974 [2024-04-24 05:26:49.138746] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.975 [2024-04-24 05:26:49.138775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.975 qpair failed and we were unable to recover it. 
00:31:11.975 [2024-04-24 05:26:49.148541] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.975 [2024-04-24 05:26:49.148685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.975 [2024-04-24 05:26:49.148712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.975 [2024-04-24 05:26:49.148727] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.975 [2024-04-24 05:26:49.148739] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.975 [2024-04-24 05:26:49.148767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.975 qpair failed and we were unable to recover it. 
00:31:11.975 [2024-04-24 05:26:49.158558] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.975 [2024-04-24 05:26:49.158686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.975 [2024-04-24 05:26:49.158712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.975 [2024-04-24 05:26:49.158727] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.975 [2024-04-24 05:26:49.158739] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.975 [2024-04-24 05:26:49.158767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.975 qpair failed and we were unable to recover it. 
00:31:11.975 [2024-04-24 05:26:49.168591] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.975 [2024-04-24 05:26:49.168737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.975 [2024-04-24 05:26:49.168763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.975 [2024-04-24 05:26:49.168778] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.975 [2024-04-24 05:26:49.168790] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.975 [2024-04-24 05:26:49.168818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.975 qpair failed and we were unable to recover it. 
00:31:11.975 [2024-04-24 05:26:49.178656] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.975 [2024-04-24 05:26:49.178829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.975 [2024-04-24 05:26:49.178855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.975 [2024-04-24 05:26:49.178870] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.975 [2024-04-24 05:26:49.178883] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.975 [2024-04-24 05:26:49.178911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.975 qpair failed and we were unable to recover it. 
00:31:11.975 [2024-04-24 05:26:49.188666] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.975 [2024-04-24 05:26:49.188807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.975 [2024-04-24 05:26:49.188832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.975 [2024-04-24 05:26:49.188847] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.975 [2024-04-24 05:26:49.188860] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.975 [2024-04-24 05:26:49.188887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.975 qpair failed and we were unable to recover it. 
00:31:11.975 [2024-04-24 05:26:49.198679] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.975 [2024-04-24 05:26:49.198807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.975 [2024-04-24 05:26:49.198832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.975 [2024-04-24 05:26:49.198847] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.975 [2024-04-24 05:26:49.198859] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.975 [2024-04-24 05:26:49.198887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.975 qpair failed and we were unable to recover it. 
00:31:11.975 [2024-04-24 05:26:49.208719] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.975 [2024-04-24 05:26:49.208857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.975 [2024-04-24 05:26:49.208883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.975 [2024-04-24 05:26:49.208898] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.975 [2024-04-24 05:26:49.208910] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.975 [2024-04-24 05:26:49.208937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.975 qpair failed and we were unable to recover it. 
00:31:11.975 [2024-04-24 05:26:49.218738] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.975 [2024-04-24 05:26:49.218876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.975 [2024-04-24 05:26:49.218902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.975 [2024-04-24 05:26:49.218917] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.975 [2024-04-24 05:26:49.218935] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.975 [2024-04-24 05:26:49.218978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.975 qpair failed and we were unable to recover it. 
00:31:11.975 [2024-04-24 05:26:49.228763] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.975 [2024-04-24 05:26:49.228906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.975 [2024-04-24 05:26:49.228933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.975 [2024-04-24 05:26:49.228948] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.975 [2024-04-24 05:26:49.228960] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.975 [2024-04-24 05:26:49.228987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.975 qpair failed and we were unable to recover it. 
00:31:11.975 [2024-04-24 05:26:49.238807] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.975 [2024-04-24 05:26:49.238946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.975 [2024-04-24 05:26:49.238975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.975 [2024-04-24 05:26:49.238999] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.975 [2024-04-24 05:26:49.239015] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:11.975 [2024-04-24 05:26:49.239044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.975 qpair failed and we were unable to recover it. 
00:31:12.235 [2024-04-24 05:26:49.248862] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.235 [2024-04-24 05:26:49.249001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.235 [2024-04-24 05:26:49.249030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.235 [2024-04-24 05:26:49.249050] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.235 [2024-04-24 05:26:49.249063] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.235 [2024-04-24 05:26:49.249093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.235 qpair failed and we were unable to recover it. 
00:31:12.235 [2024-04-24 05:26:49.258864] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.235 [2024-04-24 05:26:49.258999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.235 [2024-04-24 05:26:49.259026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.235 [2024-04-24 05:26:49.259041] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.235 [2024-04-24 05:26:49.259053] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.235 [2024-04-24 05:26:49.259082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.235 qpair failed and we were unable to recover it. 
00:31:12.235 [2024-04-24 05:26:49.268898] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.235 [2024-04-24 05:26:49.269038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.235 [2024-04-24 05:26:49.269065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.235 [2024-04-24 05:26:49.269080] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.235 [2024-04-24 05:26:49.269092] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.235 [2024-04-24 05:26:49.269120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.235 qpair failed and we were unable to recover it. 
00:31:12.235 [2024-04-24 05:26:49.278936] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.235 [2024-04-24 05:26:49.279066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.235 [2024-04-24 05:26:49.279093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.235 [2024-04-24 05:26:49.279108] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.235 [2024-04-24 05:26:49.279121] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.235 [2024-04-24 05:26:49.279149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.235 qpair failed and we were unable to recover it. 
00:31:12.235 [2024-04-24 05:26:49.288955] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.235 [2024-04-24 05:26:49.289100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.235 [2024-04-24 05:26:49.289125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.235 [2024-04-24 05:26:49.289140] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.235 [2024-04-24 05:26:49.289153] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.235 [2024-04-24 05:26:49.289181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.235 qpair failed and we were unable to recover it. 
00:31:12.235 [2024-04-24 05:26:49.299056] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.235 [2024-04-24 05:26:49.299210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.235 [2024-04-24 05:26:49.299235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.235 [2024-04-24 05:26:49.299250] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.235 [2024-04-24 05:26:49.299262] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.235 [2024-04-24 05:26:49.299290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.235 qpair failed and we were unable to recover it. 
00:31:12.235 [2024-04-24 05:26:49.308989] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.235 [2024-04-24 05:26:49.309132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.235 [2024-04-24 05:26:49.309159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.235 [2024-04-24 05:26:49.309174] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.235 [2024-04-24 05:26:49.309192] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.235 [2024-04-24 05:26:49.309220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.235 qpair failed and we were unable to recover it. 
00:31:12.235 [2024-04-24 05:26:49.319036] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.235 [2024-04-24 05:26:49.319179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.235 [2024-04-24 05:26:49.319205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.235 [2024-04-24 05:26:49.319220] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.235 [2024-04-24 05:26:49.319232] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.235 [2024-04-24 05:26:49.319260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.235 qpair failed and we were unable to recover it. 
00:31:12.235 [2024-04-24 05:26:49.329064] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.235 [2024-04-24 05:26:49.329247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.235 [2024-04-24 05:26:49.329272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.235 [2024-04-24 05:26:49.329287] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.235 [2024-04-24 05:26:49.329299] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.235 [2024-04-24 05:26:49.329327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.235 qpair failed and we were unable to recover it. 
00:31:12.235 [2024-04-24 05:26:49.339102] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.235 [2024-04-24 05:26:49.339236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.235 [2024-04-24 05:26:49.339262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.235 [2024-04-24 05:26:49.339277] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.235 [2024-04-24 05:26:49.339290] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.235 [2024-04-24 05:26:49.339332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.235 qpair failed and we were unable to recover it. 
00:31:12.235 [2024-04-24 05:26:49.349101] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.235 [2024-04-24 05:26:49.349236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.235 [2024-04-24 05:26:49.349262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.235 [2024-04-24 05:26:49.349278] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.235 [2024-04-24 05:26:49.349290] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.235 [2024-04-24 05:26:49.349317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.235 qpair failed and we were unable to recover it. 
00:31:12.235 [2024-04-24 05:26:49.359131] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.235 [2024-04-24 05:26:49.359251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.235 [2024-04-24 05:26:49.359278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.235 [2024-04-24 05:26:49.359293] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.235 [2024-04-24 05:26:49.359307] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.235 [2024-04-24 05:26:49.359334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.235 qpair failed and we were unable to recover it. 
00:31:12.235 [2024-04-24 05:26:49.369218] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.235 [2024-04-24 05:26:49.369356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.235 [2024-04-24 05:26:49.369383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.235 [2024-04-24 05:26:49.369398] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.235 [2024-04-24 05:26:49.369411] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.235 [2024-04-24 05:26:49.369439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.235 qpair failed and we were unable to recover it. 
00:31:12.235 [2024-04-24 05:26:49.379201] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.236 [2024-04-24 05:26:49.379340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.236 [2024-04-24 05:26:49.379366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.236 [2024-04-24 05:26:49.379380] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.236 [2024-04-24 05:26:49.379393] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.236 [2024-04-24 05:26:49.379420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.236 qpair failed and we were unable to recover it. 
00:31:12.236 [2024-04-24 05:26:49.389239] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.236 [2024-04-24 05:26:49.389368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.236 [2024-04-24 05:26:49.389394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.236 [2024-04-24 05:26:49.389408] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.236 [2024-04-24 05:26:49.389421] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.236 [2024-04-24 05:26:49.389449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.236 qpair failed and we were unable to recover it. 
00:31:12.236 [2024-04-24 05:26:49.399304] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.236 [2024-04-24 05:26:49.399460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.236 [2024-04-24 05:26:49.399487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.236 [2024-04-24 05:26:49.399508] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.236 [2024-04-24 05:26:49.399520] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.236 [2024-04-24 05:26:49.399549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.236 qpair failed and we were unable to recover it. 
00:31:12.236 [2024-04-24 05:26:49.409312] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.236 [2024-04-24 05:26:49.409442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.236 [2024-04-24 05:26:49.409469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.236 [2024-04-24 05:26:49.409484] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.236 [2024-04-24 05:26:49.409496] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.236 [2024-04-24 05:26:49.409524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.236 qpair failed and we were unable to recover it. 
00:31:12.236 [2024-04-24 05:26:49.419338] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.236 [2024-04-24 05:26:49.419457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.236 [2024-04-24 05:26:49.419482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.236 [2024-04-24 05:26:49.419497] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.236 [2024-04-24 05:26:49.419509] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.236 [2024-04-24 05:26:49.419539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.236 qpair failed and we were unable to recover it. 
00:31:12.236 [2024-04-24 05:26:49.429364] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.236 [2024-04-24 05:26:49.429486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.236 [2024-04-24 05:26:49.429512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.236 [2024-04-24 05:26:49.429527] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.236 [2024-04-24 05:26:49.429539] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.236 [2024-04-24 05:26:49.429567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.236 qpair failed and we were unable to recover it. 
00:31:12.236 [2024-04-24 05:26:49.439385] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.236 [2024-04-24 05:26:49.439522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.236 [2024-04-24 05:26:49.439548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.236 [2024-04-24 05:26:49.439563] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.236 [2024-04-24 05:26:49.439576] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.236 [2024-04-24 05:26:49.439604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.236 qpair failed and we were unable to recover it. 
00:31:12.236 [2024-04-24 05:26:49.449424] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.236 [2024-04-24 05:26:49.449553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.236 [2024-04-24 05:26:49.449578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.236 [2024-04-24 05:26:49.449593] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.236 [2024-04-24 05:26:49.449606] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.236 [2024-04-24 05:26:49.449641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.236 qpair failed and we were unable to recover it. 
00:31:12.236 [2024-04-24 05:26:49.459437] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.236 [2024-04-24 05:26:49.459555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.236 [2024-04-24 05:26:49.459580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.236 [2024-04-24 05:26:49.459594] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.236 [2024-04-24 05:26:49.459607] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.236 [2024-04-24 05:26:49.459643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.236 qpair failed and we were unable to recover it. 
00:31:12.236 [2024-04-24 05:26:49.469499] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.236 [2024-04-24 05:26:49.469662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.236 [2024-04-24 05:26:49.469688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.236 [2024-04-24 05:26:49.469703] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.236 [2024-04-24 05:26:49.469716] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.236 [2024-04-24 05:26:49.469745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.236 qpair failed and we were unable to recover it. 
00:31:12.236 [2024-04-24 05:26:49.479524] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.236 [2024-04-24 05:26:49.479653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.236 [2024-04-24 05:26:49.479680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.236 [2024-04-24 05:26:49.479695] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.236 [2024-04-24 05:26:49.479707] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.236 [2024-04-24 05:26:49.479735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.236 qpair failed and we were unable to recover it. 
00:31:12.236 [2024-04-24 05:26:49.489571] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.236 [2024-04-24 05:26:49.489725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.236 [2024-04-24 05:26:49.489752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.236 [2024-04-24 05:26:49.489773] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.236 [2024-04-24 05:26:49.489787] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.236 [2024-04-24 05:26:49.489815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.236 qpair failed and we were unable to recover it. 
00:31:12.236 [2024-04-24 05:26:49.499556] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.236 [2024-04-24 05:26:49.499689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.236 [2024-04-24 05:26:49.499716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.236 [2024-04-24 05:26:49.499731] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.236 [2024-04-24 05:26:49.499743] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.236 [2024-04-24 05:26:49.499772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.236 qpair failed and we were unable to recover it. 
00:31:12.503 [2024-04-24 05:26:49.509603] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.503 [2024-04-24 05:26:49.509769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.503 [2024-04-24 05:26:49.509803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.503 [2024-04-24 05:26:49.509824] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.503 [2024-04-24 05:26:49.509840] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.503 [2024-04-24 05:26:49.509876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.503 qpair failed and we were unable to recover it. 
00:31:12.503 [2024-04-24 05:26:49.519645] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.503 [2024-04-24 05:26:49.519809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.503 [2024-04-24 05:26:49.519838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.503 [2024-04-24 05:26:49.519854] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.503 [2024-04-24 05:26:49.519867] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.503 [2024-04-24 05:26:49.519896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.503 qpair failed and we were unable to recover it. 
00:31:12.503 [2024-04-24 05:26:49.529760] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.503 [2024-04-24 05:26:49.529934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.503 [2024-04-24 05:26:49.529959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.503 [2024-04-24 05:26:49.529974] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.503 [2024-04-24 05:26:49.529986] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.503 [2024-04-24 05:26:49.530014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.503 qpair failed and we were unable to recover it. 
00:31:12.503 [2024-04-24 05:26:49.539693] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.503 [2024-04-24 05:26:49.539824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.503 [2024-04-24 05:26:49.539850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.503 [2024-04-24 05:26:49.539865] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.503 [2024-04-24 05:26:49.539877] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.503 [2024-04-24 05:26:49.539905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.503 qpair failed and we were unable to recover it. 
00:31:12.503 [2024-04-24 05:26:49.549712] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.503 [2024-04-24 05:26:49.549836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.503 [2024-04-24 05:26:49.549862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.503 [2024-04-24 05:26:49.549876] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.503 [2024-04-24 05:26:49.549889] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.503 [2024-04-24 05:26:49.549916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.503 qpair failed and we were unable to recover it. 
00:31:12.503 [2024-04-24 05:26:49.559769] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.503 [2024-04-24 05:26:49.559893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.503 [2024-04-24 05:26:49.559920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.503 [2024-04-24 05:26:49.559935] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.503 [2024-04-24 05:26:49.559947] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.503 [2024-04-24 05:26:49.559975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.503 qpair failed and we were unable to recover it. 
00:31:12.503 [2024-04-24 05:26:49.569813] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.503 [2024-04-24 05:26:49.569945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.503 [2024-04-24 05:26:49.569971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.503 [2024-04-24 05:26:49.569986] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.503 [2024-04-24 05:26:49.569999] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.503 [2024-04-24 05:26:49.570027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.503 qpair failed and we were unable to recover it. 
00:31:12.503 [2024-04-24 05:26:49.579905] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.503 [2024-04-24 05:26:49.580048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.503 [2024-04-24 05:26:49.580075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.503 [2024-04-24 05:26:49.580095] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.503 [2024-04-24 05:26:49.580109] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.503 [2024-04-24 05:26:49.580138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.503 qpair failed and we were unable to recover it. 
00:31:12.503 [2024-04-24 05:26:49.589844] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.503 [2024-04-24 05:26:49.589976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.503 [2024-04-24 05:26:49.590003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.503 [2024-04-24 05:26:49.590018] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.503 [2024-04-24 05:26:49.590031] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.503 [2024-04-24 05:26:49.590059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.503 qpair failed and we were unable to recover it. 
00:31:12.503 [2024-04-24 05:26:49.599851] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.503 [2024-04-24 05:26:49.599976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.503 [2024-04-24 05:26:49.600000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.503 [2024-04-24 05:26:49.600015] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.503 [2024-04-24 05:26:49.600028] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.503 [2024-04-24 05:26:49.600055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.503 qpair failed and we were unable to recover it. 
00:31:12.503 [2024-04-24 05:26:49.609909] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.503 [2024-04-24 05:26:49.610040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.503 [2024-04-24 05:26:49.610065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.503 [2024-04-24 05:26:49.610079] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.503 [2024-04-24 05:26:49.610092] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.503 [2024-04-24 05:26:49.610119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.503 qpair failed and we were unable to recover it. 
00:31:12.503 [2024-04-24 05:26:49.619925] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.503 [2024-04-24 05:26:49.620054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.503 [2024-04-24 05:26:49.620079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.503 [2024-04-24 05:26:49.620094] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.503 [2024-04-24 05:26:49.620107] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.503 [2024-04-24 05:26:49.620135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.503 qpair failed and we were unable to recover it. 
00:31:12.503 [2024-04-24 05:26:49.629939] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.503 [2024-04-24 05:26:49.630073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.503 [2024-04-24 05:26:49.630099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.503 [2024-04-24 05:26:49.630114] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.503 [2024-04-24 05:26:49.630126] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.504 [2024-04-24 05:26:49.630154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.504 qpair failed and we were unable to recover it. 
00:31:12.504 [2024-04-24 05:26:49.640041] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.504 [2024-04-24 05:26:49.640166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.504 [2024-04-24 05:26:49.640192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.504 [2024-04-24 05:26:49.640206] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.504 [2024-04-24 05:26:49.640218] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.504 [2024-04-24 05:26:49.640245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.504 qpair failed and we were unable to recover it. 
00:31:12.504 [2024-04-24 05:26:49.649986] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.504 [2024-04-24 05:26:49.650122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.504 [2024-04-24 05:26:49.650147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.504 [2024-04-24 05:26:49.650161] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.504 [2024-04-24 05:26:49.650173] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.504 [2024-04-24 05:26:49.650200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.504 qpair failed and we were unable to recover it. 
00:31:12.504 [2024-04-24 05:26:49.660010] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.504 [2024-04-24 05:26:49.660136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.504 [2024-04-24 05:26:49.660161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.504 [2024-04-24 05:26:49.660175] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.504 [2024-04-24 05:26:49.660188] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.504 [2024-04-24 05:26:49.660216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.504 qpair failed and we were unable to recover it. 
00:31:12.504 [2024-04-24 05:26:49.670026] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.504 [2024-04-24 05:26:49.670150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.504 [2024-04-24 05:26:49.670180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.504 [2024-04-24 05:26:49.670196] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.504 [2024-04-24 05:26:49.670209] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.504 [2024-04-24 05:26:49.670236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.504 qpair failed and we were unable to recover it. 
00:31:12.504 [2024-04-24 05:26:49.680068] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.504 [2024-04-24 05:26:49.680191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.504 [2024-04-24 05:26:49.680216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.504 [2024-04-24 05:26:49.680231] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.504 [2024-04-24 05:26:49.680243] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.504 [2024-04-24 05:26:49.680270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.504 qpair failed and we were unable to recover it. 
00:31:12.504 [2024-04-24 05:26:49.690129] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.504 [2024-04-24 05:26:49.690269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.504 [2024-04-24 05:26:49.690294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.504 [2024-04-24 05:26:49.690309] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.504 [2024-04-24 05:26:49.690322] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.504 [2024-04-24 05:26:49.690349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.504 qpair failed and we were unable to recover it. 
00:31:12.504 [2024-04-24 05:26:49.700172] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.504 [2024-04-24 05:26:49.700305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.504 [2024-04-24 05:26:49.700331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.504 [2024-04-24 05:26:49.700346] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.504 [2024-04-24 05:26:49.700358] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.504 [2024-04-24 05:26:49.700386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.504 qpair failed and we were unable to recover it. 
00:31:12.504 [2024-04-24 05:26:49.710163] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.504 [2024-04-24 05:26:49.710281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.504 [2024-04-24 05:26:49.710307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.504 [2024-04-24 05:26:49.710322] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.504 [2024-04-24 05:26:49.710334] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.504 [2024-04-24 05:26:49.710368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.504 qpair failed and we were unable to recover it. 
00:31:12.504 [2024-04-24 05:26:49.720205] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.504 [2024-04-24 05:26:49.720377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.504 [2024-04-24 05:26:49.720402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.504 [2024-04-24 05:26:49.720416] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.504 [2024-04-24 05:26:49.720429] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.504 [2024-04-24 05:26:49.720456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.504 qpair failed and we were unable to recover it. 
00:31:12.504 [2024-04-24 05:26:49.730240] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.504 [2024-04-24 05:26:49.730368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.504 [2024-04-24 05:26:49.730393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.504 [2024-04-24 05:26:49.730407] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.504 [2024-04-24 05:26:49.730420] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.504 [2024-04-24 05:26:49.730448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.504 qpair failed and we were unable to recover it. 
00:31:12.504 [2024-04-24 05:26:49.740259] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.504 [2024-04-24 05:26:49.740390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.504 [2024-04-24 05:26:49.740416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.504 [2024-04-24 05:26:49.740431] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.504 [2024-04-24 05:26:49.740444] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.504 [2024-04-24 05:26:49.740472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.504 qpair failed and we were unable to recover it. 
00:31:12.504 [2024-04-24 05:26:49.750299] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.504 [2024-04-24 05:26:49.750435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.504 [2024-04-24 05:26:49.750460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.504 [2024-04-24 05:26:49.750474] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.504 [2024-04-24 05:26:49.750488] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.504 [2024-04-24 05:26:49.750515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.504 qpair failed and we were unable to recover it. 
00:31:12.504 [2024-04-24 05:26:49.760379] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.504 [2024-04-24 05:26:49.760505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.504 [2024-04-24 05:26:49.760535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.504 [2024-04-24 05:26:49.760550] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.504 [2024-04-24 05:26:49.760562] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.504 [2024-04-24 05:26:49.760590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.504 qpair failed and we were unable to recover it. 
00:31:12.504 [2024-04-24 05:26:49.770354] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.505 [2024-04-24 05:26:49.770481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.505 [2024-04-24 05:26:49.770506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.505 [2024-04-24 05:26:49.770520] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.505 [2024-04-24 05:26:49.770533] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.505 [2024-04-24 05:26:49.770560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.505 qpair failed and we were unable to recover it. 
00:31:12.766 [2024-04-24 05:26:49.780377] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.766 [2024-04-24 05:26:49.780509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.766 [2024-04-24 05:26:49.780535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.766 [2024-04-24 05:26:49.780549] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.766 [2024-04-24 05:26:49.780562] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.766 [2024-04-24 05:26:49.780591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.766 qpair failed and we were unable to recover it. 
00:31:12.766 [2024-04-24 05:26:49.790373] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.766 [2024-04-24 05:26:49.790514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.766 [2024-04-24 05:26:49.790539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.766 [2024-04-24 05:26:49.790553] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.766 [2024-04-24 05:26:49.790566] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.766 [2024-04-24 05:26:49.790593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.766 qpair failed and we were unable to recover it. 
00:31:12.766 [2024-04-24 05:26:49.800447] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.766 [2024-04-24 05:26:49.800592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.766 [2024-04-24 05:26:49.800617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.766 [2024-04-24 05:26:49.800637] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.766 [2024-04-24 05:26:49.800651] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.766 [2024-04-24 05:26:49.800687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.766 qpair failed and we were unable to recover it. 
00:31:12.766 [2024-04-24 05:26:49.810476] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.766 [2024-04-24 05:26:49.810637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.766 [2024-04-24 05:26:49.810663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.766 [2024-04-24 05:26:49.810677] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.766 [2024-04-24 05:26:49.810690] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.766 [2024-04-24 05:26:49.810719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.766 qpair failed and we were unable to recover it. 
00:31:12.766 [2024-04-24 05:26:49.820517] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.766 [2024-04-24 05:26:49.820652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.766 [2024-04-24 05:26:49.820678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.766 [2024-04-24 05:26:49.820692] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.766 [2024-04-24 05:26:49.820706] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.766 [2024-04-24 05:26:49.820736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.766 qpair failed and we were unable to recover it. 
00:31:12.766 [2024-04-24 05:26:49.830533] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.766 [2024-04-24 05:26:49.830677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.766 [2024-04-24 05:26:49.830704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.766 [2024-04-24 05:26:49.830718] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.766 [2024-04-24 05:26:49.830734] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.766 [2024-04-24 05:26:49.830764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.766 qpair failed and we were unable to recover it. 
00:31:12.766 [2024-04-24 05:26:49.840657] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.766 [2024-04-24 05:26:49.840812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.766 [2024-04-24 05:26:49.840838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.766 [2024-04-24 05:26:49.840852] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.767 [2024-04-24 05:26:49.840865] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.767 [2024-04-24 05:26:49.840892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.767 qpair failed and we were unable to recover it. 
00:31:12.767 [2024-04-24 05:26:49.850644] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.767 [2024-04-24 05:26:49.850789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.767 [2024-04-24 05:26:49.850820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.767 [2024-04-24 05:26:49.850834] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.767 [2024-04-24 05:26:49.850847] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.767 [2024-04-24 05:26:49.850875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.767 qpair failed and we were unable to recover it. 
00:31:12.767 [2024-04-24 05:26:49.860591] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.767 [2024-04-24 05:26:49.860730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.767 [2024-04-24 05:26:49.860755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.767 [2024-04-24 05:26:49.860770] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.767 [2024-04-24 05:26:49.860782] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.767 [2024-04-24 05:26:49.860810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.767 qpair failed and we were unable to recover it. 
00:31:12.767 [2024-04-24 05:26:49.870662] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.767 [2024-04-24 05:26:49.870817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.767 [2024-04-24 05:26:49.870842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.767 [2024-04-24 05:26:49.870857] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.767 [2024-04-24 05:26:49.870870] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.767 [2024-04-24 05:26:49.870897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.767 qpair failed and we were unable to recover it. 
00:31:12.767 [2024-04-24 05:26:49.880648] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.767 [2024-04-24 05:26:49.880770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.767 [2024-04-24 05:26:49.880795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.767 [2024-04-24 05:26:49.880809] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.767 [2024-04-24 05:26:49.880822] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.767 [2024-04-24 05:26:49.880850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.767 qpair failed and we were unable to recover it. 
00:31:12.767 [2024-04-24 05:26:49.890697] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.767 [2024-04-24 05:26:49.890875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.767 [2024-04-24 05:26:49.890900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.767 [2024-04-24 05:26:49.890915] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.767 [2024-04-24 05:26:49.890928] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.767 [2024-04-24 05:26:49.890962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.767 qpair failed and we were unable to recover it. 
00:31:12.767 [2024-04-24 05:26:49.900727] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.767 [2024-04-24 05:26:49.900861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.767 [2024-04-24 05:26:49.900887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.767 [2024-04-24 05:26:49.900902] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.767 [2024-04-24 05:26:49.900914] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.767 [2024-04-24 05:26:49.900942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.767 qpair failed and we were unable to recover it. 
00:31:12.767 [2024-04-24 05:26:49.910753] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.767 [2024-04-24 05:26:49.910881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.767 [2024-04-24 05:26:49.910906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.767 [2024-04-24 05:26:49.910920] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.767 [2024-04-24 05:26:49.910933] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.767 [2024-04-24 05:26:49.910960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.767 qpair failed and we were unable to recover it. 
00:31:12.767 [2024-04-24 05:26:49.920769] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.767 [2024-04-24 05:26:49.920891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.767 [2024-04-24 05:26:49.920916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.767 [2024-04-24 05:26:49.920931] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.767 [2024-04-24 05:26:49.920944] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.767 [2024-04-24 05:26:49.920971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.767 qpair failed and we were unable to recover it. 
00:31:12.767 [2024-04-24 05:26:49.930808] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.767 [2024-04-24 05:26:49.930934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.767 [2024-04-24 05:26:49.930959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.767 [2024-04-24 05:26:49.930974] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.767 [2024-04-24 05:26:49.930987] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.767 [2024-04-24 05:26:49.931014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.767 qpair failed and we were unable to recover it. 
00:31:12.767 [2024-04-24 05:26:49.940879] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.767 [2024-04-24 05:26:49.941019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.767 [2024-04-24 05:26:49.941049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.767 [2024-04-24 05:26:49.941065] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.767 [2024-04-24 05:26:49.941078] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:12.767 [2024-04-24 05:26:49.941121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.767 qpair failed and we were unable to recover it. 
00:31:12.767 [2024-04-24 05:26:49.950929] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.767 [2024-04-24 05:26:49.951062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.767 [2024-04-24 05:26:49.951087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.767 [2024-04-24 05:26:49.951102] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.767 [2024-04-24 05:26:49.951114] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:12.767 [2024-04-24 05:26:49.951142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.767 qpair failed and we were unable to recover it.
00:31:12.767 [2024-04-24 05:26:49.960899] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.767 [2024-04-24 05:26:49.961018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.767 [2024-04-24 05:26:49.961043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.767 [2024-04-24 05:26:49.961057] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.767 [2024-04-24 05:26:49.961070] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:12.767 [2024-04-24 05:26:49.961097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.767 qpair failed and we were unable to recover it.
00:31:12.767 [2024-04-24 05:26:49.970971] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.767 [2024-04-24 05:26:49.971098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.767 [2024-04-24 05:26:49.971124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.767 [2024-04-24 05:26:49.971138] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.767 [2024-04-24 05:26:49.971151] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:12.767 [2024-04-24 05:26:49.971180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.767 qpair failed and we were unable to recover it.
00:31:12.767 [2024-04-24 05:26:49.980966] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.768 [2024-04-24 05:26:49.981102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.768 [2024-04-24 05:26:49.981128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.768 [2024-04-24 05:26:49.981143] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.768 [2024-04-24 05:26:49.981161] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:12.768 [2024-04-24 05:26:49.981189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.768 qpair failed and we were unable to recover it.
00:31:12.768 [2024-04-24 05:26:49.990964] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.768 [2024-04-24 05:26:49.991102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.768 [2024-04-24 05:26:49.991128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.768 [2024-04-24 05:26:49.991142] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.768 [2024-04-24 05:26:49.991155] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:12.768 [2024-04-24 05:26:49.991182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.768 qpair failed and we were unable to recover it.
00:31:12.768 [2024-04-24 05:26:50.001018] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.768 [2024-04-24 05:26:50.001142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.768 [2024-04-24 05:26:50.001167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.768 [2024-04-24 05:26:50.001182] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.768 [2024-04-24 05:26:50.001194] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:12.768 [2024-04-24 05:26:50.001222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.768 qpair failed and we were unable to recover it.
00:31:12.768 [2024-04-24 05:26:50.011071] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.768 [2024-04-24 05:26:50.011233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.768 [2024-04-24 05:26:50.011259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.768 [2024-04-24 05:26:50.011273] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.768 [2024-04-24 05:26:50.011287] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:12.768 [2024-04-24 05:26:50.011315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.768 qpair failed and we were unable to recover it.
00:31:12.768 [2024-04-24 05:26:50.021065] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.768 [2024-04-24 05:26:50.021187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.768 [2024-04-24 05:26:50.021212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.768 [2024-04-24 05:26:50.021226] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.768 [2024-04-24 05:26:50.021239] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:12.768 [2024-04-24 05:26:50.021267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.768 qpair failed and we were unable to recover it.
00:31:12.768 [2024-04-24 05:26:50.031106] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.768 [2024-04-24 05:26:50.031253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.768 [2024-04-24 05:26:50.031280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.768 [2024-04-24 05:26:50.031295] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.768 [2024-04-24 05:26:50.031308] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:12.768 [2024-04-24 05:26:50.031337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.768 qpair failed and we were unable to recover it.
00:31:13.029 [2024-04-24 05:26:50.041140] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.029 [2024-04-24 05:26:50.041276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.029 [2024-04-24 05:26:50.041303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.029 [2024-04-24 05:26:50.041317] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.029 [2024-04-24 05:26:50.041329] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:13.029 [2024-04-24 05:26:50.041357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.029 qpair failed and we were unable to recover it.
00:31:13.029 [2024-04-24 05:26:50.051213] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.029 [2024-04-24 05:26:50.051346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.029 [2024-04-24 05:26:50.051371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.029 [2024-04-24 05:26:50.051385] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.029 [2024-04-24 05:26:50.051397] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:13.029 [2024-04-24 05:26:50.051425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.029 qpair failed and we were unable to recover it.
00:31:13.029 [2024-04-24 05:26:50.061312] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.029 [2024-04-24 05:26:50.061442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.029 [2024-04-24 05:26:50.061483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.029 [2024-04-24 05:26:50.061498] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.029 [2024-04-24 05:26:50.061511] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:13.029 [2024-04-24 05:26:50.061539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.029 qpair failed and we were unable to recover it.
00:31:13.029 [2024-04-24 05:26:50.071249] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.029 [2024-04-24 05:26:50.071380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.029 [2024-04-24 05:26:50.071405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.029 [2024-04-24 05:26:50.071420] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.029 [2024-04-24 05:26:50.071439] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:13.029 [2024-04-24 05:26:50.071467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.029 qpair failed and we were unable to recover it.
00:31:13.029 [2024-04-24 05:26:50.081438] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.029 [2024-04-24 05:26:50.081580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.029 [2024-04-24 05:26:50.081607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.029 [2024-04-24 05:26:50.081622] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.029 [2024-04-24 05:26:50.081645] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:13.029 [2024-04-24 05:26:50.081674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.030 qpair failed and we were unable to recover it.
00:31:13.030 [2024-04-24 05:26:50.091425] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.030 [2024-04-24 05:26:50.091602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.030 [2024-04-24 05:26:50.091638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.030 [2024-04-24 05:26:50.091656] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.030 [2024-04-24 05:26:50.091673] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:13.030 [2024-04-24 05:26:50.091702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.030 qpair failed and we were unable to recover it.
00:31:13.030 [2024-04-24 05:26:50.101341] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.030 [2024-04-24 05:26:50.101468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.030 [2024-04-24 05:26:50.101493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.030 [2024-04-24 05:26:50.101507] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.030 [2024-04-24 05:26:50.101520] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:13.030 [2024-04-24 05:26:50.101548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.030 qpair failed and we were unable to recover it.
00:31:13.030 [2024-04-24 05:26:50.111342] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.030 [2024-04-24 05:26:50.111466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.030 [2024-04-24 05:26:50.111492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.030 [2024-04-24 05:26:50.111510] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.030 [2024-04-24 05:26:50.111524] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:13.030 [2024-04-24 05:26:50.111552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.030 qpair failed and we were unable to recover it.
00:31:13.030 [2024-04-24 05:26:50.121395] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.030 [2024-04-24 05:26:50.121569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.030 [2024-04-24 05:26:50.121594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.030 [2024-04-24 05:26:50.121608] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.030 [2024-04-24 05:26:50.121621] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:13.030 [2024-04-24 05:26:50.121658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.030 qpair failed and we were unable to recover it.
00:31:13.030 [2024-04-24 05:26:50.131414] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.030 [2024-04-24 05:26:50.131593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.030 [2024-04-24 05:26:50.131618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.030 [2024-04-24 05:26:50.131640] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.030 [2024-04-24 05:26:50.131654] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:13.030 [2024-04-24 05:26:50.131682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.030 qpair failed and we were unable to recover it.
00:31:13.030 [2024-04-24 05:26:50.141403] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.030 [2024-04-24 05:26:50.141523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.030 [2024-04-24 05:26:50.141549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.030 [2024-04-24 05:26:50.141563] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.030 [2024-04-24 05:26:50.141575] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:13.030 [2024-04-24 05:26:50.141603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.030 qpair failed and we were unable to recover it.
00:31:13.030 [2024-04-24 05:26:50.151453] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.030 [2024-04-24 05:26:50.151597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.030 [2024-04-24 05:26:50.151623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.030 [2024-04-24 05:26:50.151646] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.030 [2024-04-24 05:26:50.151660] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:13.030 [2024-04-24 05:26:50.151687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.030 qpair failed and we were unable to recover it.
00:31:13.030 [2024-04-24 05:26:50.161480] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.030 [2024-04-24 05:26:50.161601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.030 [2024-04-24 05:26:50.161626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.030 [2024-04-24 05:26:50.161657] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.030 [2024-04-24 05:26:50.161671] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:13.030 [2024-04-24 05:26:50.161699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.030 qpair failed and we were unable to recover it.
00:31:13.030 [2024-04-24 05:26:50.171508] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.030 [2024-04-24 05:26:50.171650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.030 [2024-04-24 05:26:50.171676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.030 [2024-04-24 05:26:50.171690] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.030 [2024-04-24 05:26:50.171703] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:13.030 [2024-04-24 05:26:50.171731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.030 qpair failed and we were unable to recover it.
00:31:13.030 [2024-04-24 05:26:50.181617] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.030 [2024-04-24 05:26:50.181764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.030 [2024-04-24 05:26:50.181789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.030 [2024-04-24 05:26:50.181804] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.030 [2024-04-24 05:26:50.181817] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:13.030 [2024-04-24 05:26:50.181845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.030 qpair failed and we were unable to recover it.
00:31:13.030 [2024-04-24 05:26:50.191583] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.030 [2024-04-24 05:26:50.191720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.030 [2024-04-24 05:26:50.191745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.030 [2024-04-24 05:26:50.191759] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.030 [2024-04-24 05:26:50.191771] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:13.030 [2024-04-24 05:26:50.191800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.030 qpair failed and we were unable to recover it.
00:31:13.030 [2024-04-24 05:26:50.201592] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.030 [2024-04-24 05:26:50.201737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.030 [2024-04-24 05:26:50.201763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.030 [2024-04-24 05:26:50.201777] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.030 [2024-04-24 05:26:50.201790] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:13.030 [2024-04-24 05:26:50.201818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.030 qpair failed and we were unable to recover it.
00:31:13.030 [2024-04-24 05:26:50.211642] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.030 [2024-04-24 05:26:50.211807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.030 [2024-04-24 05:26:50.211832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.030 [2024-04-24 05:26:50.211846] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.030 [2024-04-24 05:26:50.211859] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:13.030 [2024-04-24 05:26:50.211887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.030 qpair failed and we were unable to recover it.
00:31:13.030 [2024-04-24 05:26:50.221634] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.031 [2024-04-24 05:26:50.221768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.031 [2024-04-24 05:26:50.221793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.031 [2024-04-24 05:26:50.221807] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.031 [2024-04-24 05:26:50.221820] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:13.031 [2024-04-24 05:26:50.221848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.031 qpair failed and we were unable to recover it.
00:31:13.031 [2024-04-24 05:26:50.231704] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.031 [2024-04-24 05:26:50.231842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.031 [2024-04-24 05:26:50.231867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.031 [2024-04-24 05:26:50.231882] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.031 [2024-04-24 05:26:50.231894] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:13.031 [2024-04-24 05:26:50.231922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.031 qpair failed and we were unable to recover it.
00:31:13.031 [2024-04-24 05:26:50.241728] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.031 [2024-04-24 05:26:50.241851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.031 [2024-04-24 05:26:50.241877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.031 [2024-04-24 05:26:50.241892] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.031 [2024-04-24 05:26:50.241905] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:13.031 [2024-04-24 05:26:50.241933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.031 qpair failed and we were unable to recover it.
00:31:13.031 [2024-04-24 05:26:50.251758] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.031 [2024-04-24 05:26:50.251894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.031 [2024-04-24 05:26:50.251919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.031 [2024-04-24 05:26:50.251939] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.031 [2024-04-24 05:26:50.251952] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:13.031 [2024-04-24 05:26:50.251980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.031 qpair failed and we were unable to recover it.
00:31:13.031 [2024-04-24 05:26:50.261742] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.031 [2024-04-24 05:26:50.261866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.031 [2024-04-24 05:26:50.261891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.031 [2024-04-24 05:26:50.261906] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.031 [2024-04-24 05:26:50.261919] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:13.031 [2024-04-24 05:26:50.261947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.031 qpair failed and we were unable to recover it.
00:31:13.031 [2024-04-24 05:26:50.271770] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.031 [2024-04-24 05:26:50.271891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.031 [2024-04-24 05:26:50.271917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.031 [2024-04-24 05:26:50.271931] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.031 [2024-04-24 05:26:50.271944] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:13.031 [2024-04-24 05:26:50.271972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.031 qpair failed and we were unable to recover it.
00:31:13.031 [2024-04-24 05:26:50.281836] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.031 [2024-04-24 05:26:50.281966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.031 [2024-04-24 05:26:50.281991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.031 [2024-04-24 05:26:50.282006] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.031 [2024-04-24 05:26:50.282018] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:13.031 [2024-04-24 05:26:50.282046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.031 qpair failed and we were unable to recover it.
00:31:13.031 [2024-04-24 05:26:50.291845] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.031 [2024-04-24 05:26:50.291970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.031 [2024-04-24 05:26:50.291995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.031 [2024-04-24 05:26:50.292009] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.031 [2024-04-24 05:26:50.292022] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:13.031 [2024-04-24 05:26:50.292050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.031 qpair failed and we were unable to recover it.
00:31:13.291 [2024-04-24 05:26:50.301913] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.291 [2024-04-24 05:26:50.302047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.291 [2024-04-24 05:26:50.302073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.291 [2024-04-24 05:26:50.302088] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.291 [2024-04-24 05:26:50.302101] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40
00:31:13.291 [2024-04-24 05:26:50.302128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.291 qpair failed and we were unable to recover it.
00:31:13.291 [2024-04-24 05:26:50.311938] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.291 [2024-04-24 05:26:50.312070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.291 [2024-04-24 05:26:50.312095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.291 [2024-04-24 05:26:50.312109] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.291 [2024-04-24 05:26:50.312122] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.291 [2024-04-24 05:26:50.312149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.291 qpair failed and we were unable to recover it. 
00:31:13.291 [2024-04-24 05:26:50.321924] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.291 [2024-04-24 05:26:50.322053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.291 [2024-04-24 05:26:50.322078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.291 [2024-04-24 05:26:50.322092] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.291 [2024-04-24 05:26:50.322105] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.291 [2024-04-24 05:26:50.322132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.291 qpair failed and we were unable to recover it. 
00:31:13.291 [2024-04-24 05:26:50.331958] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.291 [2024-04-24 05:26:50.332109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.291 [2024-04-24 05:26:50.332133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.291 [2024-04-24 05:26:50.332148] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.291 [2024-04-24 05:26:50.332161] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.291 [2024-04-24 05:26:50.332189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.291 qpair failed and we were unable to recover it. 
00:31:13.291 [2024-04-24 05:26:50.341975] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.292 [2024-04-24 05:26:50.342108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.292 [2024-04-24 05:26:50.342134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.292 [2024-04-24 05:26:50.342154] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.292 [2024-04-24 05:26:50.342167] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.292 [2024-04-24 05:26:50.342195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.292 qpair failed and we were unable to recover it. 
00:31:13.292 [2024-04-24 05:26:50.352014] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.292 [2024-04-24 05:26:50.352153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.292 [2024-04-24 05:26:50.352178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.292 [2024-04-24 05:26:50.352193] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.292 [2024-04-24 05:26:50.352206] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.292 [2024-04-24 05:26:50.352236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.292 qpair failed and we were unable to recover it. 
00:31:13.292 [2024-04-24 05:26:50.362132] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.292 [2024-04-24 05:26:50.362264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.292 [2024-04-24 05:26:50.362290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.292 [2024-04-24 05:26:50.362305] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.292 [2024-04-24 05:26:50.362318] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.292 [2024-04-24 05:26:50.362346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.292 qpair failed and we were unable to recover it. 
00:31:13.292 [2024-04-24 05:26:50.372110] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.292 [2024-04-24 05:26:50.372238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.292 [2024-04-24 05:26:50.372265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.292 [2024-04-24 05:26:50.372280] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.292 [2024-04-24 05:26:50.372292] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.292 [2024-04-24 05:26:50.372320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.292 qpair failed and we were unable to recover it. 
00:31:13.292 [2024-04-24 05:26:50.382120] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.292 [2024-04-24 05:26:50.382257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.292 [2024-04-24 05:26:50.382285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.292 [2024-04-24 05:26:50.382303] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.292 [2024-04-24 05:26:50.382316] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.292 [2024-04-24 05:26:50.382359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.292 qpair failed and we were unable to recover it. 
00:31:13.292 [2024-04-24 05:26:50.392185] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.292 [2024-04-24 05:26:50.392316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.292 [2024-04-24 05:26:50.392343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.292 [2024-04-24 05:26:50.392358] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.292 [2024-04-24 05:26:50.392371] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.292 [2024-04-24 05:26:50.392401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.292 qpair failed and we were unable to recover it. 
00:31:13.292 [2024-04-24 05:26:50.402181] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.292 [2024-04-24 05:26:50.402324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.292 [2024-04-24 05:26:50.402350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.292 [2024-04-24 05:26:50.402365] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.292 [2024-04-24 05:26:50.402378] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.292 [2024-04-24 05:26:50.402405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.292 qpair failed and we were unable to recover it. 
00:31:13.292 [2024-04-24 05:26:50.412205] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.292 [2024-04-24 05:26:50.412333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.292 [2024-04-24 05:26:50.412359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.292 [2024-04-24 05:26:50.412374] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.292 [2024-04-24 05:26:50.412387] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.292 [2024-04-24 05:26:50.412415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.292 qpair failed and we were unable to recover it. 
00:31:13.292 [2024-04-24 05:26:50.422203] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.292 [2024-04-24 05:26:50.422329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.292 [2024-04-24 05:26:50.422356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.292 [2024-04-24 05:26:50.422371] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.292 [2024-04-24 05:26:50.422384] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.292 [2024-04-24 05:26:50.422412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.292 qpair failed and we were unable to recover it. 
00:31:13.292 [2024-04-24 05:26:50.432218] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.292 [2024-04-24 05:26:50.432341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.292 [2024-04-24 05:26:50.432372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.292 [2024-04-24 05:26:50.432388] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.292 [2024-04-24 05:26:50.432401] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.292 [2024-04-24 05:26:50.432428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.292 qpair failed and we were unable to recover it. 
00:31:13.292 [2024-04-24 05:26:50.442259] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.292 [2024-04-24 05:26:50.442380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.292 [2024-04-24 05:26:50.442406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.292 [2024-04-24 05:26:50.442421] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.292 [2024-04-24 05:26:50.442433] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.292 [2024-04-24 05:26:50.442461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.292 qpair failed and we were unable to recover it. 
00:31:13.292 [2024-04-24 05:26:50.452300] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.292 [2024-04-24 05:26:50.452437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.292 [2024-04-24 05:26:50.452463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.292 [2024-04-24 05:26:50.452478] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.292 [2024-04-24 05:26:50.452490] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.292 [2024-04-24 05:26:50.452518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.292 qpair failed and we were unable to recover it. 
00:31:13.292 [2024-04-24 05:26:50.462296] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.292 [2024-04-24 05:26:50.462422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.292 [2024-04-24 05:26:50.462449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.292 [2024-04-24 05:26:50.462464] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.292 [2024-04-24 05:26:50.462476] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.292 [2024-04-24 05:26:50.462504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.292 qpair failed and we were unable to recover it. 
00:31:13.292 [2024-04-24 05:26:50.472346] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.292 [2024-04-24 05:26:50.472473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.292 [2024-04-24 05:26:50.472498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.292 [2024-04-24 05:26:50.472513] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.292 [2024-04-24 05:26:50.472525] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.293 [2024-04-24 05:26:50.472552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.293 qpair failed and we were unable to recover it. 
00:31:13.293 [2024-04-24 05:26:50.482385] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.293 [2024-04-24 05:26:50.482516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.293 [2024-04-24 05:26:50.482542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.293 [2024-04-24 05:26:50.482557] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.293 [2024-04-24 05:26:50.482569] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.293 [2024-04-24 05:26:50.482596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.293 qpair failed and we were unable to recover it. 
00:31:13.293 [2024-04-24 05:26:50.492449] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.293 [2024-04-24 05:26:50.492622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.293 [2024-04-24 05:26:50.492657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.293 [2024-04-24 05:26:50.492672] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.293 [2024-04-24 05:26:50.492684] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.293 [2024-04-24 05:26:50.492712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.293 qpair failed and we were unable to recover it. 
00:31:13.293 [2024-04-24 05:26:50.502415] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.293 [2024-04-24 05:26:50.502577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.293 [2024-04-24 05:26:50.502604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.293 [2024-04-24 05:26:50.502619] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.293 [2024-04-24 05:26:50.502642] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.293 [2024-04-24 05:26:50.502674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.293 qpair failed and we were unable to recover it. 
00:31:13.293 [2024-04-24 05:26:50.512445] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.293 [2024-04-24 05:26:50.512570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.293 [2024-04-24 05:26:50.512596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.293 [2024-04-24 05:26:50.512611] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.293 [2024-04-24 05:26:50.512623] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.293 [2024-04-24 05:26:50.512664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.293 qpair failed and we were unable to recover it. 
00:31:13.293 [2024-04-24 05:26:50.522491] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.293 [2024-04-24 05:26:50.522688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.293 [2024-04-24 05:26:50.522721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.293 [2024-04-24 05:26:50.522739] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.293 [2024-04-24 05:26:50.522754] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.293 [2024-04-24 05:26:50.522784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.293 qpair failed and we were unable to recover it. 
00:31:13.293 [2024-04-24 05:26:50.532518] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.293 [2024-04-24 05:26:50.532657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.293 [2024-04-24 05:26:50.532684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.293 [2024-04-24 05:26:50.532698] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.293 [2024-04-24 05:26:50.532711] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.293 [2024-04-24 05:26:50.532739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.293 qpair failed and we were unable to recover it. 
00:31:13.293 [2024-04-24 05:26:50.542551] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.293 [2024-04-24 05:26:50.542689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.293 [2024-04-24 05:26:50.542716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.293 [2024-04-24 05:26:50.542732] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.293 [2024-04-24 05:26:50.542744] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.293 [2024-04-24 05:26:50.542774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.293 qpair failed and we were unable to recover it. 
00:31:13.293 [2024-04-24 05:26:50.552575] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.293 [2024-04-24 05:26:50.552712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.293 [2024-04-24 05:26:50.552739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.293 [2024-04-24 05:26:50.552754] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.293 [2024-04-24 05:26:50.552766] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.293 [2024-04-24 05:26:50.552794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.293 qpair failed and we were unable to recover it. 
00:31:13.553 [2024-04-24 05:26:50.562602] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.553 [2024-04-24 05:26:50.562738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.553 [2024-04-24 05:26:50.562765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.553 [2024-04-24 05:26:50.562779] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.553 [2024-04-24 05:26:50.562792] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.553 [2024-04-24 05:26:50.562828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.553 qpair failed and we were unable to recover it. 
00:31:13.553 [2024-04-24 05:26:50.572659] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.553 [2024-04-24 05:26:50.572798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.553 [2024-04-24 05:26:50.572824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.553 [2024-04-24 05:26:50.572839] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.553 [2024-04-24 05:26:50.572852] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.553 [2024-04-24 05:26:50.572879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.553 qpair failed and we were unable to recover it. 
00:31:13.553 [2024-04-24 05:26:50.582648] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.553 [2024-04-24 05:26:50.582778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.553 [2024-04-24 05:26:50.582805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.553 [2024-04-24 05:26:50.582820] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.553 [2024-04-24 05:26:50.582832] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.553 [2024-04-24 05:26:50.582861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.553 qpair failed and we were unable to recover it. 
00:31:13.553 [2024-04-24 05:26:50.592674] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.553 [2024-04-24 05:26:50.592795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.553 [2024-04-24 05:26:50.592821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.553 [2024-04-24 05:26:50.592836] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.553 [2024-04-24 05:26:50.592849] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.553 [2024-04-24 05:26:50.592877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.553 qpair failed and we were unable to recover it. 
00:31:13.553 [2024-04-24 05:26:50.602794] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.553 [2024-04-24 05:26:50.602932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.553 [2024-04-24 05:26:50.602957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.553 [2024-04-24 05:26:50.602971] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.553 [2024-04-24 05:26:50.602983] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.553 [2024-04-24 05:26:50.603010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.553 qpair failed and we were unable to recover it. 
00:31:13.553 [2024-04-24 05:26:50.612785] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.553 [2024-04-24 05:26:50.612917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.553 [2024-04-24 05:26:50.612949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.553 [2024-04-24 05:26:50.612964] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.553 [2024-04-24 05:26:50.612976] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.553 [2024-04-24 05:26:50.613004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.553 qpair failed and we were unable to recover it. 
00:31:13.553 [2024-04-24 05:26:50.622855] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.553 [2024-04-24 05:26:50.622988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.553 [2024-04-24 05:26:50.623030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.553 [2024-04-24 05:26:50.623045] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.553 [2024-04-24 05:26:50.623057] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.554 [2024-04-24 05:26:50.623099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.554 qpair failed and we were unable to recover it. 
00:31:13.554 [2024-04-24 05:26:50.632842] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.554 [2024-04-24 05:26:50.632986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.554 [2024-04-24 05:26:50.633012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.554 [2024-04-24 05:26:50.633027] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.554 [2024-04-24 05:26:50.633039] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.554 [2024-04-24 05:26:50.633067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.554 qpair failed and we were unable to recover it. 
00:31:13.554 [2024-04-24 05:26:50.642842] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.554 [2024-04-24 05:26:50.642973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.554 [2024-04-24 05:26:50.643000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.554 [2024-04-24 05:26:50.643020] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.554 [2024-04-24 05:26:50.643033] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.554 [2024-04-24 05:26:50.643062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.554 qpair failed and we were unable to recover it. 
00:31:13.554 [2024-04-24 05:26:50.652869] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.554 [2024-04-24 05:26:50.653006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.554 [2024-04-24 05:26:50.653033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.554 [2024-04-24 05:26:50.653048] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.554 [2024-04-24 05:26:50.653061] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.554 [2024-04-24 05:26:50.653095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.554 qpair failed and we were unable to recover it. 
00:31:13.554 [2024-04-24 05:26:50.662962] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.554 [2024-04-24 05:26:50.663095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.554 [2024-04-24 05:26:50.663121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.554 [2024-04-24 05:26:50.663137] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.554 [2024-04-24 05:26:50.663149] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.554 [2024-04-24 05:26:50.663177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.554 qpair failed and we were unable to recover it. 
00:31:13.554 [2024-04-24 05:26:50.672947] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.554 [2024-04-24 05:26:50.673076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.554 [2024-04-24 05:26:50.673103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.554 [2024-04-24 05:26:50.673118] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.554 [2024-04-24 05:26:50.673130] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.554 [2024-04-24 05:26:50.673157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.554 qpair failed and we were unable to recover it. 
00:31:13.554 [2024-04-24 05:26:50.682917] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.554 [2024-04-24 05:26:50.683044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.554 [2024-04-24 05:26:50.683070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.554 [2024-04-24 05:26:50.683085] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.554 [2024-04-24 05:26:50.683097] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.554 [2024-04-24 05:26:50.683125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.554 qpair failed and we were unable to recover it. 
00:31:13.554 [2024-04-24 05:26:50.693010] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.554 [2024-04-24 05:26:50.693150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.554 [2024-04-24 05:26:50.693176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.554 [2024-04-24 05:26:50.693191] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.554 [2024-04-24 05:26:50.693203] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.554 [2024-04-24 05:26:50.693230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.554 qpair failed and we were unable to recover it. 
00:31:13.554 [2024-04-24 05:26:50.703074] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.554 [2024-04-24 05:26:50.703214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.554 [2024-04-24 05:26:50.703246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.554 [2024-04-24 05:26:50.703261] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.554 [2024-04-24 05:26:50.703273] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.554 [2024-04-24 05:26:50.703315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.554 qpair failed and we were unable to recover it. 
00:31:13.554 [2024-04-24 05:26:50.713014] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.554 [2024-04-24 05:26:50.713144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.554 [2024-04-24 05:26:50.713170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.554 [2024-04-24 05:26:50.713185] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.554 [2024-04-24 05:26:50.713198] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.554 [2024-04-24 05:26:50.713225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.554 qpair failed and we were unable to recover it. 
00:31:13.554 [2024-04-24 05:26:50.723030] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.554 [2024-04-24 05:26:50.723169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.554 [2024-04-24 05:26:50.723196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.554 [2024-04-24 05:26:50.723211] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.554 [2024-04-24 05:26:50.723223] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.554 [2024-04-24 05:26:50.723251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.554 qpair failed and we were unable to recover it. 
00:31:13.554 [2024-04-24 05:26:50.733073] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.554 [2024-04-24 05:26:50.733247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.554 [2024-04-24 05:26:50.733274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.554 [2024-04-24 05:26:50.733289] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.554 [2024-04-24 05:26:50.733302] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.554 [2024-04-24 05:26:50.733329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.554 qpair failed and we were unable to recover it. 
00:31:13.554 [2024-04-24 05:26:50.743112] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.554 [2024-04-24 05:26:50.743277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.554 [2024-04-24 05:26:50.743302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.554 [2024-04-24 05:26:50.743318] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.554 [2024-04-24 05:26:50.743351] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.554 [2024-04-24 05:26:50.743381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.554 qpair failed and we were unable to recover it. 
00:31:13.554 [2024-04-24 05:26:50.753172] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.554 [2024-04-24 05:26:50.753335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.554 [2024-04-24 05:26:50.753362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.554 [2024-04-24 05:26:50.753376] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.554 [2024-04-24 05:26:50.753389] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.554 [2024-04-24 05:26:50.753417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.554 qpair failed and we were unable to recover it. 
00:31:13.554 [2024-04-24 05:26:50.763207] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.554 [2024-04-24 05:26:50.763337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.555 [2024-04-24 05:26:50.763364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.555 [2024-04-24 05:26:50.763378] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.555 [2024-04-24 05:26:50.763390] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.555 [2024-04-24 05:26:50.763418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.555 qpair failed and we were unable to recover it. 
00:31:13.555 [2024-04-24 05:26:50.773211] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.555 [2024-04-24 05:26:50.773382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.555 [2024-04-24 05:26:50.773408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.555 [2024-04-24 05:26:50.773423] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.555 [2024-04-24 05:26:50.773435] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.555 [2024-04-24 05:26:50.773462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.555 qpair failed and we were unable to recover it. 
00:31:13.555 [2024-04-24 05:26:50.783212] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.555 [2024-04-24 05:26:50.783342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.555 [2024-04-24 05:26:50.783368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.555 [2024-04-24 05:26:50.783383] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.555 [2024-04-24 05:26:50.783396] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.555 [2024-04-24 05:26:50.783424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.555 qpair failed and we were unable to recover it. 
00:31:13.555 [2024-04-24 05:26:50.793239] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.555 [2024-04-24 05:26:50.793412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.555 [2024-04-24 05:26:50.793439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.555 [2024-04-24 05:26:50.793454] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.555 [2024-04-24 05:26:50.793466] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.555 [2024-04-24 05:26:50.793493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.555 qpair failed and we were unable to recover it. 
00:31:13.555 [2024-04-24 05:26:50.803274] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.555 [2024-04-24 05:26:50.803406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.555 [2024-04-24 05:26:50.803432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.555 [2024-04-24 05:26:50.803447] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.555 [2024-04-24 05:26:50.803459] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.555 [2024-04-24 05:26:50.803487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.555 qpair failed and we were unable to recover it. 
00:31:13.555 [2024-04-24 05:26:50.813347] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.555 [2024-04-24 05:26:50.813496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.555 [2024-04-24 05:26:50.813522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.555 [2024-04-24 05:26:50.813537] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.555 [2024-04-24 05:26:50.813549] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.555 [2024-04-24 05:26:50.813577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.555 qpair failed and we were unable to recover it. 
00:31:13.814 [2024-04-24 05:26:50.823389] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.814 [2024-04-24 05:26:50.823548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.814 [2024-04-24 05:26:50.823574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.814 [2024-04-24 05:26:50.823589] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.814 [2024-04-24 05:26:50.823601] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.814 [2024-04-24 05:26:50.823637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.814 qpair failed and we were unable to recover it. 
00:31:13.814 [2024-04-24 05:26:50.833344] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.815 [2024-04-24 05:26:50.833474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.815 [2024-04-24 05:26:50.833500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.815 [2024-04-24 05:26:50.833515] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.815 [2024-04-24 05:26:50.833533] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.815 [2024-04-24 05:26:50.833569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.815 qpair failed and we were unable to recover it. 
00:31:13.815 [2024-04-24 05:26:50.843367] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.815 [2024-04-24 05:26:50.843498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.815 [2024-04-24 05:26:50.843524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.815 [2024-04-24 05:26:50.843539] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.815 [2024-04-24 05:26:50.843552] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.815 [2024-04-24 05:26:50.843580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.815 qpair failed and we were unable to recover it. 
00:31:13.815 [2024-04-24 05:26:50.853402] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.815 [2024-04-24 05:26:50.853534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.815 [2024-04-24 05:26:50.853560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.815 [2024-04-24 05:26:50.853575] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.815 [2024-04-24 05:26:50.853587] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.815 [2024-04-24 05:26:50.853616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.815 qpair failed and we were unable to recover it. 
00:31:13.815 [2024-04-24 05:26:50.863522] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.815 [2024-04-24 05:26:50.863666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.815 [2024-04-24 05:26:50.863693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.815 [2024-04-24 05:26:50.863708] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.815 [2024-04-24 05:26:50.863721] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.815 [2024-04-24 05:26:50.863749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.815 qpair failed and we were unable to recover it. 
00:31:13.815 [2024-04-24 05:26:50.873449] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.815 [2024-04-24 05:26:50.873578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.815 [2024-04-24 05:26:50.873604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.815 [2024-04-24 05:26:50.873619] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.815 [2024-04-24 05:26:50.873641] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.815 [2024-04-24 05:26:50.873670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.815 qpair failed and we were unable to recover it. 
00:31:13.815 [2024-04-24 05:26:50.883499] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.815 [2024-04-24 05:26:50.883642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.815 [2024-04-24 05:26:50.883668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.815 [2024-04-24 05:26:50.883683] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.815 [2024-04-24 05:26:50.883696] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.815 [2024-04-24 05:26:50.883723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.815 qpair failed and we were unable to recover it. 
00:31:13.815 [2024-04-24 05:26:50.893561] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.815 [2024-04-24 05:26:50.893728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.815 [2024-04-24 05:26:50.893752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.815 [2024-04-24 05:26:50.893766] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.815 [2024-04-24 05:26:50.893779] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.815 [2024-04-24 05:26:50.893806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.815 qpair failed and we were unable to recover it. 
00:31:13.815 [2024-04-24 05:26:50.903540] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.815 [2024-04-24 05:26:50.903690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.815 [2024-04-24 05:26:50.903717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.815 [2024-04-24 05:26:50.903731] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.815 [2024-04-24 05:26:50.903744] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.815 [2024-04-24 05:26:50.903772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.815 qpair failed and we were unable to recover it. 
00:31:13.815 [2024-04-24 05:26:50.913568] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.815 [2024-04-24 05:26:50.913702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.815 [2024-04-24 05:26:50.913728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.815 [2024-04-24 05:26:50.913742] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.815 [2024-04-24 05:26:50.913755] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.815 [2024-04-24 05:26:50.913783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.815 qpair failed and we were unable to recover it. 
00:31:13.815 [2024-04-24 05:26:50.923688] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.815 [2024-04-24 05:26:50.923824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.815 [2024-04-24 05:26:50.923849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.815 [2024-04-24 05:26:50.923864] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.815 [2024-04-24 05:26:50.923882] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.815 [2024-04-24 05:26:50.923911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.815 qpair failed and we were unable to recover it. 
00:31:13.815 [2024-04-24 05:26:50.933638] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.815 [2024-04-24 05:26:50.933775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.815 [2024-04-24 05:26:50.933800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.815 [2024-04-24 05:26:50.933815] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.815 [2024-04-24 05:26:50.933827] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.815 [2024-04-24 05:26:50.933855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.815 qpair failed and we were unable to recover it. 
00:31:13.815 [2024-04-24 05:26:50.943642] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.815 [2024-04-24 05:26:50.943775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.815 [2024-04-24 05:26:50.943800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.815 [2024-04-24 05:26:50.943815] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.815 [2024-04-24 05:26:50.943827] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.815 [2024-04-24 05:26:50.943855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.815 qpair failed and we were unable to recover it. 
00:31:13.815 [2024-04-24 05:26:50.953714] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.815 [2024-04-24 05:26:50.953855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.815 [2024-04-24 05:26:50.953881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.815 [2024-04-24 05:26:50.953895] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.815 [2024-04-24 05:26:50.953907] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.815 [2024-04-24 05:26:50.953935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.815 qpair failed and we were unable to recover it. 
00:31:13.815 [2024-04-24 05:26:50.963718] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.815 [2024-04-24 05:26:50.963851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.815 [2024-04-24 05:26:50.963876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.815 [2024-04-24 05:26:50.963892] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.815 [2024-04-24 05:26:50.963904] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.816 [2024-04-24 05:26:50.963932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.816 qpair failed and we were unable to recover it. 
00:31:13.816 [2024-04-24 05:26:50.973765] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.816 [2024-04-24 05:26:50.973899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.816 [2024-04-24 05:26:50.973925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.816 [2024-04-24 05:26:50.973939] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.816 [2024-04-24 05:26:50.973951] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.816 [2024-04-24 05:26:50.973980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.816 qpair failed and we were unable to recover it. 
00:31:13.816 [2024-04-24 05:26:50.983795] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.816 [2024-04-24 05:26:50.983974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.816 [2024-04-24 05:26:50.984000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.816 [2024-04-24 05:26:50.984016] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.816 [2024-04-24 05:26:50.984042] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.816 [2024-04-24 05:26:50.984069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.816 qpair failed and we were unable to recover it. 
00:31:13.816 [2024-04-24 05:26:50.993880] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.816 [2024-04-24 05:26:50.994012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.816 [2024-04-24 05:26:50.994038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.816 [2024-04-24 05:26:50.994053] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.816 [2024-04-24 05:26:50.994065] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.816 [2024-04-24 05:26:50.994092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.816 qpair failed and we were unable to recover it. 
00:31:13.816 [2024-04-24 05:26:51.003812] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.816 [2024-04-24 05:26:51.003938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.816 [2024-04-24 05:26:51.003963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.816 [2024-04-24 05:26:51.003977] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.816 [2024-04-24 05:26:51.003990] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.816 [2024-04-24 05:26:51.004017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.816 qpair failed and we were unable to recover it. 
00:31:13.816 [2024-04-24 05:26:51.013848] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.816 [2024-04-24 05:26:51.013986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.816 [2024-04-24 05:26:51.014012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.816 [2024-04-24 05:26:51.014032] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.816 [2024-04-24 05:26:51.014045] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.816 [2024-04-24 05:26:51.014073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.816 qpair failed and we were unable to recover it. 
00:31:13.816 [2024-04-24 05:26:51.023868] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.816 [2024-04-24 05:26:51.024002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.816 [2024-04-24 05:26:51.024028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.816 [2024-04-24 05:26:51.024043] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.816 [2024-04-24 05:26:51.024055] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ebe40 00:31:13.816 [2024-04-24 05:26:51.024084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.816 qpair failed and we were unable to recover it. 
00:31:13.816 [2024-04-24 05:26:51.033922] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.816 [2024-04-24 05:26:51.034056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.816 [2024-04-24 05:26:51.034089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.816 [2024-04-24 05:26:51.034106] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.816 [2024-04-24 05:26:51.034118] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6d5c000b90 00:31:13.816 [2024-04-24 05:26:51.034149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.816 qpair failed and we were unable to recover it. 
00:31:13.816 [2024-04-24 05:26:51.043981] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.816 [2024-04-24 05:26:51.044130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.816 [2024-04-24 05:26:51.044158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.816 [2024-04-24 05:26:51.044173] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.816 [2024-04-24 05:26:51.044186] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6d5c000b90 00:31:13.816 [2024-04-24 05:26:51.044216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.816 qpair failed and we were unable to recover it. 
00:31:13.816 [2024-04-24 05:26:51.053976] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.816 [2024-04-24 05:26:51.054112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.816 [2024-04-24 05:26:51.054144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.816 [2024-04-24 05:26:51.054161] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.816 [2024-04-24 05:26:51.054174] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6d64000b90 00:31:13.816 [2024-04-24 05:26:51.054204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:13.816 qpair failed and we were unable to recover it. 
00:31:13.816 [2024-04-24 05:26:51.064026] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.816 [2024-04-24 05:26:51.064180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.816 [2024-04-24 05:26:51.064207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.816 [2024-04-24 05:26:51.064223] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.816 [2024-04-24 05:26:51.064235] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6d64000b90 00:31:13.816 [2024-04-24 05:26:51.064264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:13.816 qpair failed and we were unable to recover it. 00:31:13.816 [2024-04-24 05:26:51.064382] nvme_ctrlr.c:4340:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:31:13.816 A controller has encountered a failure and is being reset. 
00:31:13.816 [2024-04-24 05:26:51.074036] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.816 [2024-04-24 05:26:51.074169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.816 [2024-04-24 05:26:51.074201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.816 [2024-04-24 05:26:51.074218] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.816 [2024-04-24 05:26:51.074231] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6d54000b90 00:31:13.816 [2024-04-24 05:26:51.074263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:13.816 qpair failed and we were unable to recover it. 
00:31:13.816 [2024-04-24 05:26:51.084124] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.075 [2024-04-24 05:26:51.084266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.075 [2024-04-24 05:26:51.084295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.076 [2024-04-24 05:26:51.084313] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.076 [2024-04-24 05:26:51.084326] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6d54000b90 00:31:14.076 [2024-04-24 05:26:51.084356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.076 qpair failed and we were unable to recover it. 00:31:14.076 Controller properly reset. 00:31:14.076 Initializing NVMe Controllers 00:31:14.076 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:14.076 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:14.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:31:14.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:31:14.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:31:14.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:31:14.076 Initialization complete. Launching workers. 
00:31:14.076 Starting thread on core 1 00:31:14.076 Starting thread on core 2 00:31:14.076 Starting thread on core 3 00:31:14.076 Starting thread on core 0 00:31:14.076 05:26:51 -- host/target_disconnect.sh@59 -- # sync 00:31:14.076 00:31:14.076 real 0m10.736s 00:31:14.076 user 0m17.950s 00:31:14.076 sys 0m5.485s 00:31:14.076 05:26:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:14.076 05:26:51 -- common/autotest_common.sh@10 -- # set +x 00:31:14.076 ************************************ 00:31:14.076 END TEST nvmf_target_disconnect_tc2 00:31:14.076 ************************************ 00:31:14.076 05:26:51 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:31:14.076 05:26:51 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:31:14.076 05:26:51 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:31:14.076 05:26:51 -- nvmf/common.sh@477 -- # nvmfcleanup 00:31:14.076 05:26:51 -- nvmf/common.sh@117 -- # sync 00:31:14.076 05:26:51 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:14.076 05:26:51 -- nvmf/common.sh@120 -- # set +e 00:31:14.076 05:26:51 -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:14.076 05:26:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:14.076 rmmod nvme_tcp 00:31:14.076 rmmod nvme_fabrics 00:31:14.076 rmmod nvme_keyring 00:31:14.076 05:26:51 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:14.076 05:26:51 -- nvmf/common.sh@124 -- # set -e 00:31:14.076 05:26:51 -- nvmf/common.sh@125 -- # return 0 00:31:14.076 05:26:51 -- nvmf/common.sh@478 -- # '[' -n 2021767 ']' 00:31:14.076 05:26:51 -- nvmf/common.sh@479 -- # killprocess 2021767 00:31:14.076 05:26:51 -- common/autotest_common.sh@936 -- # '[' -z 2021767 ']' 00:31:14.076 05:26:51 -- common/autotest_common.sh@940 -- # kill -0 2021767 00:31:14.076 05:26:51 -- common/autotest_common.sh@941 -- # uname 00:31:14.076 05:26:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:14.076 05:26:51 -- common/autotest_common.sh@942 -- # 
ps --no-headers -o comm= 2021767 00:31:14.076 05:26:51 -- common/autotest_common.sh@942 -- # process_name=reactor_4 00:31:14.076 05:26:51 -- common/autotest_common.sh@946 -- # '[' reactor_4 = sudo ']' 00:31:14.076 05:26:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2021767' 00:31:14.076 killing process with pid 2021767 00:31:14.076 05:26:51 -- common/autotest_common.sh@955 -- # kill 2021767 00:31:14.076 05:26:51 -- common/autotest_common.sh@960 -- # wait 2021767 00:31:14.335 05:26:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:31:14.335 05:26:51 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:31:14.335 05:26:51 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:31:14.335 05:26:51 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:14.335 05:26:51 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:14.335 05:26:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:14.335 05:26:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:14.335 05:26:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:16.867 05:26:53 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:16.867 00:31:16.867 real 0m15.621s 00:31:16.867 user 0m43.950s 00:31:16.867 sys 0m7.537s 00:31:16.867 05:26:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:16.867 05:26:53 -- common/autotest_common.sh@10 -- # set +x 00:31:16.867 ************************************ 00:31:16.867 END TEST nvmf_target_disconnect 00:31:16.867 ************************************ 00:31:16.867 05:26:53 -- nvmf/nvmf.sh@123 -- # timing_exit host 00:31:16.867 05:26:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:16.867 05:26:53 -- common/autotest_common.sh@10 -- # set +x 00:31:16.867 05:26:53 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:31:16.867 00:31:16.867 real 23m3.094s 00:31:16.867 user 63m28.595s 00:31:16.867 sys 5m40.963s 00:31:16.868 05:26:53 -- common/autotest_common.sh@1112 
-- # xtrace_disable 00:31:16.868 05:26:53 -- common/autotest_common.sh@10 -- # set +x 00:31:16.868 ************************************ 00:31:16.868 END TEST nvmf_tcp 00:31:16.868 ************************************ 00:31:16.868 05:26:53 -- spdk/autotest.sh@286 -- # [[ 0 -eq 0 ]] 00:31:16.868 05:26:53 -- spdk/autotest.sh@287 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:16.868 05:26:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:31:16.868 05:26:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:16.868 05:26:53 -- common/autotest_common.sh@10 -- # set +x 00:31:16.868 ************************************ 00:31:16.868 START TEST spdkcli_nvmf_tcp 00:31:16.868 ************************************ 00:31:16.868 05:26:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:16.868 * Looking for test storage... 00:31:16.868 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:31:16.868 05:26:53 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:31:16.868 05:26:53 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:31:16.868 05:26:53 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:31:16.868 05:26:53 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:16.868 05:26:53 -- nvmf/common.sh@7 -- # uname -s 00:31:16.868 05:26:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:16.868 05:26:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:16.868 05:26:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:16.868 05:26:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:16.868 05:26:53 -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:16.868 05:26:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:16.868 05:26:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:16.868 05:26:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:16.868 05:26:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:16.868 05:26:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:16.868 05:26:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:16.868 05:26:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:16.868 05:26:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:16.868 05:26:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:16.868 05:26:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:16.868 05:26:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:16.868 05:26:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:16.868 05:26:53 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:16.868 05:26:53 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:16.868 05:26:53 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:16.868 05:26:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.868 05:26:53 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.868 05:26:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.868 05:26:53 -- paths/export.sh@5 -- # export PATH 00:31:16.868 05:26:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.868 05:26:53 -- nvmf/common.sh@47 -- # : 0 00:31:16.868 05:26:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:16.868 05:26:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:16.868 05:26:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:16.868 05:26:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:16.868 05:26:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:16.868 05:26:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:16.868 05:26:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:16.868 05:26:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:16.868 
05:26:53 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:31:16.868 05:26:53 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:31:16.868 05:26:53 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:31:16.868 05:26:53 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:31:16.868 05:26:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:31:16.868 05:26:53 -- common/autotest_common.sh@10 -- # set +x 00:31:16.868 05:26:53 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:31:16.868 05:26:53 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2022935 00:31:16.868 05:26:53 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:31:16.868 05:26:53 -- spdkcli/common.sh@34 -- # waitforlisten 2022935 00:31:16.868 05:26:53 -- common/autotest_common.sh@817 -- # '[' -z 2022935 ']' 00:31:16.868 05:26:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:16.868 05:26:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:16.868 05:26:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:16.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:16.868 05:26:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:16.868 05:26:53 -- common/autotest_common.sh@10 -- # set +x 00:31:16.868 [2024-04-24 05:26:53.810000] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:31:16.868 [2024-04-24 05:26:53.810091] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2022935 ] 00:31:16.868 EAL: No free 2048 kB hugepages reported on node 1 00:31:16.868 [2024-04-24 05:26:53.841092] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:31:16.868 [2024-04-24 05:26:53.867811] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:16.868 [2024-04-24 05:26:53.952732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:16.868 [2024-04-24 05:26:53.952736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:16.868 05:26:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:16.868 05:26:54 -- common/autotest_common.sh@850 -- # return 0 00:31:16.868 05:26:54 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:31:16.868 05:26:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:16.868 05:26:54 -- common/autotest_common.sh@10 -- # set +x 00:31:16.868 05:26:54 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:31:16.868 05:26:54 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:31:16.868 05:26:54 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:31:16.868 05:26:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:31:16.868 05:26:54 -- common/autotest_common.sh@10 -- # set +x 00:31:16.868 05:26:54 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:31:16.868 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:31:16.868 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:31:16.868 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:31:16.868 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:31:16.868 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:31:16.868 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:31:16.868 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:16.868 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:31:16.868 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:31:16.868 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:16.868 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:16.868 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:31:16.868 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:16.868 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:16.868 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:31:16.868 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:16.868 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:16.868 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:16.868 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:16.868 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:31:16.868 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:31:16.868 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:16.868 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:31:16.868 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:16.868 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:31:16.868 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:31:16.869 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:31:16.869 ' 00:31:17.439 [2024-04-24 05:26:54.475603] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:31:19.980 [2024-04-24 05:26:56.651302] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:20.926 [2024-04-24 05:26:57.883639] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:31:23.461 [2024-04-24 05:27:00.170864] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:31:25.363 [2024-04-24 05:27:02.121000] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:31:26.747 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:31:26.747 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:31:26.747 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:31:26.747 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:31:26.747 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:31:26.747 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:31:26.747 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:31:26.747 Executing command: 
['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:26.747 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:31:26.747 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:31:26.747 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:26.747 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:26.747 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:31:26.747 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:26.747 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:26.747 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:31:26.747 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:26.747 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:26.747 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:26.747 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:26.747 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 
allow_any_host True', 'Allow any host', False] 00:31:26.747 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:31:26.747 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:26.747 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:31:26.747 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:26.747 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:31:26.747 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:31:26.747 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:31:26.747 05:27:03 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:31:26.747 05:27:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:26.747 05:27:03 -- common/autotest_common.sh@10 -- # set +x 00:31:26.747 05:27:03 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:31:26.747 05:27:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:31:26.747 05:27:03 -- common/autotest_common.sh@10 -- # set +x 00:31:26.747 05:27:03 -- spdkcli/nvmf.sh@69 -- # check_match 00:31:26.747 05:27:03 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:31:27.007 05:27:04 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:31:27.007 05:27:04 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:31:27.007 
05:27:04 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:31:27.007 05:27:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:27.007 05:27:04 -- common/autotest_common.sh@10 -- # set +x 00:31:27.007 05:27:04 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:31:27.007 05:27:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:31:27.007 05:27:04 -- common/autotest_common.sh@10 -- # set +x 00:31:27.007 05:27:04 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:31:27.007 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:31:27.007 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:27.007 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:31:27.007 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:31:27.007 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:31:27.007 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:31:27.007 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:27.007 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:31:27.007 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:31:27.007 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:31:27.007 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:31:27.007 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:31:27.007 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:31:27.007 ' 00:31:32.278 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 
00:31:32.278 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:31:32.278 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:32.278 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:31:32.278 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:31:32.278 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:31:32.278 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:31:32.278 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:32.278 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:31:32.278 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:31:32.278 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:31:32.278 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:31:32.278 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:31:32.278 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:31:32.278 05:27:09 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:31:32.278 05:27:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:32.278 05:27:09 -- common/autotest_common.sh@10 -- # set +x 00:31:32.278 05:27:09 -- spdkcli/nvmf.sh@90 -- # killprocess 2022935 00:31:32.278 05:27:09 -- common/autotest_common.sh@936 -- # '[' -z 2022935 ']' 00:31:32.278 05:27:09 -- common/autotest_common.sh@940 -- # kill -0 2022935 00:31:32.278 05:27:09 -- common/autotest_common.sh@941 -- # uname 00:31:32.278 05:27:09 -- common/autotest_common.sh@941 -- # 
'[' Linux = Linux ']' 00:31:32.278 05:27:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2022935 00:31:32.278 05:27:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:32.278 05:27:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:32.278 05:27:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2022935' 00:31:32.278 killing process with pid 2022935 00:31:32.278 05:27:09 -- common/autotest_common.sh@955 -- # kill 2022935 00:31:32.278 [2024-04-24 05:27:09.513026] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:31:32.278 05:27:09 -- common/autotest_common.sh@960 -- # wait 2022935 00:31:32.537 05:27:09 -- spdkcli/nvmf.sh@1 -- # cleanup 00:31:32.537 05:27:09 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:31:32.537 05:27:09 -- spdkcli/common.sh@13 -- # '[' -n 2022935 ']' 00:31:32.537 05:27:09 -- spdkcli/common.sh@14 -- # killprocess 2022935 00:31:32.537 05:27:09 -- common/autotest_common.sh@936 -- # '[' -z 2022935 ']' 00:31:32.537 05:27:09 -- common/autotest_common.sh@940 -- # kill -0 2022935 00:31:32.537 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2022935) - No such process 00:31:32.537 05:27:09 -- common/autotest_common.sh@963 -- # echo 'Process with pid 2022935 is not found' 00:31:32.537 Process with pid 2022935 is not found 00:31:32.537 05:27:09 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:31:32.537 05:27:09 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:31:32.537 05:27:09 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:31:32.537 00:31:32.537 real 0m16.036s 00:31:32.537 user 0m33.908s 00:31:32.537 sys 0m0.823s 
00:31:32.537 05:27:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:32.537 05:27:09 -- common/autotest_common.sh@10 -- # set +x 00:31:32.537 ************************************ 00:31:32.537 END TEST spdkcli_nvmf_tcp 00:31:32.537 ************************************ 00:31:32.537 05:27:09 -- spdk/autotest.sh@288 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:32.537 05:27:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:31:32.537 05:27:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:32.537 05:27:09 -- common/autotest_common.sh@10 -- # set +x 00:31:32.796 ************************************ 00:31:32.796 START TEST nvmf_identify_passthru 00:31:32.796 ************************************ 00:31:32.796 05:27:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:32.796 * Looking for test storage... 
00:31:32.796 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:32.796 05:27:09 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:32.796 05:27:09 -- nvmf/common.sh@7 -- # uname -s 00:31:32.796 05:27:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:32.796 05:27:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:32.796 05:27:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:32.796 05:27:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:32.796 05:27:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:32.796 05:27:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:32.796 05:27:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:32.796 05:27:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:32.796 05:27:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:32.796 05:27:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:32.796 05:27:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:32.796 05:27:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:32.796 05:27:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:32.796 05:27:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:32.796 05:27:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:32.796 05:27:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:32.796 05:27:09 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:32.796 05:27:09 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:32.796 05:27:09 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:32.796 05:27:09 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:32.796 05:27:09 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.796 05:27:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.796 05:27:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.796 05:27:09 -- paths/export.sh@5 -- # export PATH 00:31:32.796 05:27:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.796 05:27:09 -- nvmf/common.sh@47 -- # : 0 00:31:32.796 05:27:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:32.796 05:27:09 -- nvmf/common.sh@49 -- # 
build_nvmf_app_args 00:31:32.796 05:27:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:32.796 05:27:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:32.796 05:27:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:32.796 05:27:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:32.796 05:27:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:32.796 05:27:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:32.796 05:27:09 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:32.796 05:27:09 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:32.796 05:27:09 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:32.796 05:27:09 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:32.796 05:27:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.796 05:27:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.796 05:27:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.796 05:27:09 -- paths/export.sh@5 -- # export PATH 00:31:32.796 05:27:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.796 05:27:09 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:31:32.796 05:27:09 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:31:32.797 05:27:09 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:32.797 05:27:09 -- nvmf/common.sh@437 -- # prepare_net_devs 00:31:32.797 05:27:09 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:31:32.797 05:27:09 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:31:32.797 05:27:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:32.797 05:27:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:32.797 05:27:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:32.797 05:27:09 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:31:32.797 05:27:09 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:31:32.797 05:27:09 -- nvmf/common.sh@285 -- # xtrace_disable 00:31:32.797 05:27:09 -- 
common/autotest_common.sh@10 -- # set +x 00:31:34.706 05:27:11 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:34.706 05:27:11 -- nvmf/common.sh@291 -- # pci_devs=() 00:31:34.706 05:27:11 -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:34.706 05:27:11 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:34.706 05:27:11 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:34.706 05:27:11 -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:34.706 05:27:11 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:34.706 05:27:11 -- nvmf/common.sh@295 -- # net_devs=() 00:31:34.706 05:27:11 -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:34.706 05:27:11 -- nvmf/common.sh@296 -- # e810=() 00:31:34.706 05:27:11 -- nvmf/common.sh@296 -- # local -ga e810 00:31:34.706 05:27:11 -- nvmf/common.sh@297 -- # x722=() 00:31:34.706 05:27:11 -- nvmf/common.sh@297 -- # local -ga x722 00:31:34.706 05:27:11 -- nvmf/common.sh@298 -- # mlx=() 00:31:34.706 05:27:11 -- nvmf/common.sh@298 -- # local -ga mlx 00:31:34.706 05:27:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:34.706 05:27:11 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:34.706 05:27:11 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:34.706 05:27:11 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:34.706 05:27:11 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:34.706 05:27:11 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:34.706 05:27:11 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:34.706 05:27:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:34.707 05:27:11 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:34.707 05:27:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:34.707 05:27:11 -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:34.707 05:27:11 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:34.707 05:27:11 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:34.707 05:27:11 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:34.707 05:27:11 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:34.707 05:27:11 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:34.707 05:27:11 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:34.707 05:27:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:34.707 05:27:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:34.707 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:34.707 05:27:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:34.707 05:27:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:34.707 05:27:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:34.707 05:27:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:34.707 05:27:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:34.707 05:27:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:34.707 05:27:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:34.707 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:34.707 05:27:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:34.707 05:27:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:34.707 05:27:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:34.707 05:27:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:34.707 05:27:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:34.707 05:27:11 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:34.707 05:27:11 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:34.707 05:27:11 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:34.707 05:27:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:34.707 05:27:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:31:34.707 05:27:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:31:34.707 05:27:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:34.707 05:27:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:34.707 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:34.707 05:27:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:31:34.707 05:27:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:34.707 05:27:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:34.707 05:27:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:31:34.707 05:27:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:34.707 05:27:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:34.707 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:34.707 05:27:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:31:34.707 05:27:11 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:31:34.707 05:27:11 -- nvmf/common.sh@403 -- # is_hw=yes 00:31:34.707 05:27:11 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:31:34.707 05:27:11 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:31:34.707 05:27:11 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:31:34.707 05:27:11 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:34.707 05:27:11 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:34.707 05:27:11 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:34.707 05:27:11 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:34.707 05:27:11 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:34.707 05:27:11 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:34.707 05:27:11 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:34.707 05:27:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:34.707 05:27:11 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:31:34.707 05:27:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:34.707 05:27:11 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:34.707 05:27:11 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:34.707 05:27:11 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:34.707 05:27:11 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:34.707 05:27:11 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:34.707 05:27:11 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:34.707 05:27:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:34.707 05:27:11 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:34.707 05:27:11 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:34.707 05:27:11 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:34.707 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:34.707 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:31:34.707 00:31:34.707 --- 10.0.0.2 ping statistics --- 00:31:34.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:34.707 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:31:34.707 05:27:11 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:34.707 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:34.707 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:31:34.707 00:31:34.707 --- 10.0.0.1 ping statistics --- 00:31:34.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:34.707 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:31:34.707 05:27:11 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:34.707 05:27:11 -- nvmf/common.sh@411 -- # return 0 00:31:34.707 05:27:11 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:31:34.707 05:27:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:34.707 05:27:11 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:31:34.707 05:27:11 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:31:34.707 05:27:11 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:34.707 05:27:11 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:31:34.707 05:27:11 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:31:34.707 05:27:11 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:31:34.707 05:27:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:31:34.707 05:27:11 -- common/autotest_common.sh@10 -- # set +x 00:31:34.707 05:27:11 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:31:34.707 05:27:11 -- common/autotest_common.sh@1510 -- # bdfs=() 00:31:34.707 05:27:11 -- common/autotest_common.sh@1510 -- # local bdfs 00:31:34.707 05:27:11 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:31:34.707 05:27:11 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:31:34.707 05:27:11 -- common/autotest_common.sh@1499 -- # bdfs=() 00:31:34.707 05:27:11 -- common/autotest_common.sh@1499 -- # local bdfs 00:31:34.707 05:27:11 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:34.707 05:27:11 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:34.707 05:27:11 -- 
common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:31:34.707 05:27:11 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:31:34.707 05:27:11 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:88:00.0 00:31:34.707 05:27:11 -- common/autotest_common.sh@1513 -- # echo 0000:88:00.0 00:31:34.707 05:27:11 -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:31:34.707 05:27:11 -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:31:34.707 05:27:11 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:31:34.707 05:27:11 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:31:34.707 05:27:11 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:31:34.707 EAL: No free 2048 kB hugepages reported on node 1 00:31:38.905 05:27:16 -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:31:38.905 05:27:16 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:31:38.905 05:27:16 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:31:38.905 05:27:16 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:31:38.905 EAL: No free 2048 kB hugepages reported on node 1 00:31:43.101 05:27:20 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:31:43.101 05:27:20 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:31:43.101 05:27:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:43.101 05:27:20 -- common/autotest_common.sh@10 -- # set +x 00:31:43.101 05:27:20 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:31:43.101 05:27:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:31:43.101 05:27:20 -- common/autotest_common.sh@10 -- # set +x 00:31:43.101 05:27:20 -- target/identify_passthru.sh@31 -- # 
nvmfpid=2028172 00:31:43.101 05:27:20 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:31:43.101 05:27:20 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:43.101 05:27:20 -- target/identify_passthru.sh@35 -- # waitforlisten 2028172 00:31:43.101 05:27:20 -- common/autotest_common.sh@817 -- # '[' -z 2028172 ']' 00:31:43.101 05:27:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:43.101 05:27:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:43.101 05:27:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:43.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:43.101 05:27:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:43.101 05:27:20 -- common/autotest_common.sh@10 -- # set +x 00:31:43.101 [2024-04-24 05:27:20.359382] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:31:43.101 [2024-04-24 05:27:20.359470] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:43.360 EAL: No free 2048 kB hugepages reported on node 1 00:31:43.361 [2024-04-24 05:27:20.399522] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:43.361 [2024-04-24 05:27:20.425860] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:43.361 [2024-04-24 05:27:20.513540] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:31:43.361 [2024-04-24 05:27:20.513603] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:43.361 [2024-04-24 05:27:20.513617] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:43.361 [2024-04-24 05:27:20.513635] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:43.361 [2024-04-24 05:27:20.513647] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:43.361 [2024-04-24 05:27:20.513702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:43.361 [2024-04-24 05:27:20.513761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:43.361 [2024-04-24 05:27:20.513825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:43.361 [2024-04-24 05:27:20.513828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:43.361 05:27:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:43.361 05:27:20 -- common/autotest_common.sh@850 -- # return 0 00:31:43.361 05:27:20 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:31:43.361 05:27:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:43.361 05:27:20 -- common/autotest_common.sh@10 -- # set +x 00:31:43.361 INFO: Log level set to 20 00:31:43.361 INFO: Requests: 00:31:43.361 { 00:31:43.361 "jsonrpc": "2.0", 00:31:43.361 "method": "nvmf_set_config", 00:31:43.361 "id": 1, 00:31:43.361 "params": { 00:31:43.361 "admin_cmd_passthru": { 00:31:43.361 "identify_ctrlr": true 00:31:43.361 } 00:31:43.361 } 00:31:43.361 } 00:31:43.361 00:31:43.361 INFO: response: 00:31:43.361 { 00:31:43.361 "jsonrpc": "2.0", 00:31:43.361 "id": 1, 00:31:43.361 "result": true 00:31:43.361 } 00:31:43.361 00:31:43.361 05:27:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:43.361 05:27:20 -- target/identify_passthru.sh@37 -- # 
rpc_cmd -v framework_start_init 00:31:43.361 05:27:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:43.361 05:27:20 -- common/autotest_common.sh@10 -- # set +x 00:31:43.361 INFO: Setting log level to 20 00:31:43.361 INFO: Setting log level to 20 00:31:43.361 INFO: Log level set to 20 00:31:43.361 INFO: Log level set to 20 00:31:43.361 INFO: Requests: 00:31:43.361 { 00:31:43.361 "jsonrpc": "2.0", 00:31:43.361 "method": "framework_start_init", 00:31:43.361 "id": 1 00:31:43.361 } 00:31:43.361 00:31:43.361 INFO: Requests: 00:31:43.361 { 00:31:43.361 "jsonrpc": "2.0", 00:31:43.361 "method": "framework_start_init", 00:31:43.361 "id": 1 00:31:43.361 } 00:31:43.361 00:31:43.618 [2024-04-24 05:27:20.681927] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:31:43.618 INFO: response: 00:31:43.618 { 00:31:43.618 "jsonrpc": "2.0", 00:31:43.618 "id": 1, 00:31:43.618 "result": true 00:31:43.618 } 00:31:43.618 00:31:43.618 INFO: response: 00:31:43.618 { 00:31:43.618 "jsonrpc": "2.0", 00:31:43.618 "id": 1, 00:31:43.619 "result": true 00:31:43.619 } 00:31:43.619 00:31:43.619 05:27:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:43.619 05:27:20 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:43.619 05:27:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:43.619 05:27:20 -- common/autotest_common.sh@10 -- # set +x 00:31:43.619 INFO: Setting log level to 40 00:31:43.619 INFO: Setting log level to 40 00:31:43.619 INFO: Setting log level to 40 00:31:43.619 [2024-04-24 05:27:20.692009] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:43.619 05:27:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:43.619 05:27:20 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:31:43.619 05:27:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:43.619 05:27:20 -- common/autotest_common.sh@10 -- # set +x 00:31:43.619 05:27:20 
-- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:31:43.619 05:27:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:43.619 05:27:20 -- common/autotest_common.sh@10 -- # set +x 00:31:46.938 Nvme0n1 00:31:46.938 05:27:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:46.938 05:27:23 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:31:46.939 05:27:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:46.939 05:27:23 -- common/autotest_common.sh@10 -- # set +x 00:31:46.939 05:27:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:46.939 05:27:23 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:46.939 05:27:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:46.939 05:27:23 -- common/autotest_common.sh@10 -- # set +x 00:31:46.939 05:27:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:46.939 05:27:23 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:46.939 05:27:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:46.939 05:27:23 -- common/autotest_common.sh@10 -- # set +x 00:31:46.939 [2024-04-24 05:27:23.583562] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:46.939 05:27:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:46.939 05:27:23 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:31:46.939 05:27:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:46.939 05:27:23 -- common/autotest_common.sh@10 -- # set +x 00:31:46.939 [2024-04-24 05:27:23.591324] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 
00:31:46.939 [ 00:31:46.939 { 00:31:46.939 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:46.939 "subtype": "Discovery", 00:31:46.939 "listen_addresses": [], 00:31:46.939 "allow_any_host": true, 00:31:46.939 "hosts": [] 00:31:46.939 }, 00:31:46.939 { 00:31:46.939 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:46.939 "subtype": "NVMe", 00:31:46.939 "listen_addresses": [ 00:31:46.939 { 00:31:46.939 "transport": "TCP", 00:31:46.939 "trtype": "TCP", 00:31:46.939 "adrfam": "IPv4", 00:31:46.939 "traddr": "10.0.0.2", 00:31:46.939 "trsvcid": "4420" 00:31:46.939 } 00:31:46.939 ], 00:31:46.939 "allow_any_host": true, 00:31:46.939 "hosts": [], 00:31:46.939 "serial_number": "SPDK00000000000001", 00:31:46.939 "model_number": "SPDK bdev Controller", 00:31:46.939 "max_namespaces": 1, 00:31:46.939 "min_cntlid": 1, 00:31:46.939 "max_cntlid": 65519, 00:31:46.939 "namespaces": [ 00:31:46.939 { 00:31:46.939 "nsid": 1, 00:31:46.939 "bdev_name": "Nvme0n1", 00:31:46.939 "name": "Nvme0n1", 00:31:46.939 "nguid": "21A6097CE7E94AC68DA4838064416E79", 00:31:46.939 "uuid": "21a6097c-e7e9-4ac6-8da4-838064416e79" 00:31:46.939 } 00:31:46.939 ] 00:31:46.939 } 00:31:46.939 ] 00:31:46.939 05:27:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:46.939 05:27:23 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:46.939 05:27:23 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:31:46.939 05:27:23 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:31:46.939 EAL: No free 2048 kB hugepages reported on node 1 00:31:46.939 05:27:23 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:31:46.939 05:27:23 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:46.939 05:27:23 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:31:46.939 05:27:23 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:31:46.939 EAL: No free 2048 kB hugepages reported on node 1 00:31:46.939 05:27:23 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:31:46.939 05:27:23 -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:31:46.939 05:27:23 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:31:46.939 05:27:23 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:46.939 05:27:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:46.939 05:27:23 -- common/autotest_common.sh@10 -- # set +x 00:31:46.939 05:27:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:46.939 05:27:23 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:31:46.939 05:27:23 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:31:46.939 05:27:23 -- nvmf/common.sh@477 -- # nvmfcleanup 00:31:46.939 05:27:23 -- nvmf/common.sh@117 -- # sync 00:31:46.939 05:27:23 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:46.939 05:27:23 -- nvmf/common.sh@120 -- # set +e 00:31:46.939 05:27:23 -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:46.939 05:27:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:46.939 rmmod nvme_tcp 00:31:46.939 rmmod nvme_fabrics 00:31:46.939 rmmod nvme_keyring 00:31:46.939 05:27:23 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:46.939 05:27:23 -- nvmf/common.sh@124 -- # set -e 00:31:46.939 05:27:23 -- nvmf/common.sh@125 -- # return 0 00:31:46.939 05:27:23 -- nvmf/common.sh@478 -- # '[' -n 2028172 ']' 00:31:46.939 05:27:23 -- nvmf/common.sh@479 -- # killprocess 2028172 00:31:46.939 05:27:23 -- common/autotest_common.sh@936 -- # '[' -z 2028172 ']' 00:31:46.939 05:27:23 -- common/autotest_common.sh@940 -- # kill -0 2028172 
00:31:46.939 05:27:23 -- common/autotest_common.sh@941 -- # uname 00:31:46.939 05:27:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:46.939 05:27:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2028172 00:31:46.939 05:27:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:46.939 05:27:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:46.939 05:27:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2028172' 00:31:46.939 killing process with pid 2028172 00:31:46.939 05:27:23 -- common/autotest_common.sh@955 -- # kill 2028172 00:31:46.939 [2024-04-24 05:27:23.935021] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:31:46.939 05:27:23 -- common/autotest_common.sh@960 -- # wait 2028172 00:31:48.316 05:27:25 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:31:48.316 05:27:25 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:31:48.316 05:27:25 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:31:48.316 05:27:25 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:48.316 05:27:25 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:48.316 05:27:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:48.316 05:27:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:48.316 05:27:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:50.848 05:27:27 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:50.848 00:31:50.848 real 0m17.678s 00:31:50.848 user 0m26.223s 00:31:50.848 sys 0m2.184s 00:31:50.848 05:27:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:50.848 05:27:27 -- common/autotest_common.sh@10 -- # set +x 00:31:50.848 ************************************ 00:31:50.848 END TEST nvmf_identify_passthru 00:31:50.848 ************************************ 
00:31:50.848 05:27:27 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:50.848 05:27:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:50.848 05:27:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:50.848 05:27:27 -- common/autotest_common.sh@10 -- # set +x 00:31:50.848 ************************************ 00:31:50.848 START TEST nvmf_dif 00:31:50.848 ************************************ 00:31:50.848 05:27:27 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:50.848 * Looking for test storage... 00:31:50.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:50.848 05:27:27 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:50.848 05:27:27 -- nvmf/common.sh@7 -- # uname -s 00:31:50.848 05:27:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:50.848 05:27:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:50.848 05:27:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:50.848 05:27:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:50.848 05:27:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:50.848 05:27:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:50.848 05:27:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:50.848 05:27:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:50.848 05:27:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:50.848 05:27:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:50.848 05:27:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:50.848 05:27:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:50.848 05:27:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:31:50.848 05:27:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:50.848 05:27:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:50.848 05:27:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:50.848 05:27:27 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:50.849 05:27:27 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:50.849 05:27:27 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:50.849 05:27:27 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:50.849 05:27:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.849 05:27:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.849 05:27:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.849 05:27:27 -- 
paths/export.sh@5 -- # export PATH 00:31:50.849 05:27:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.849 05:27:27 -- nvmf/common.sh@47 -- # : 0 00:31:50.849 05:27:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:50.849 05:27:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:50.849 05:27:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:50.849 05:27:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:50.849 05:27:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:50.849 05:27:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:50.849 05:27:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:50.849 05:27:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:50.849 05:27:27 -- target/dif.sh@15 -- # NULL_META=16 00:31:50.849 05:27:27 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:31:50.849 05:27:27 -- target/dif.sh@15 -- # NULL_SIZE=64 00:31:50.849 05:27:27 -- target/dif.sh@15 -- # NULL_DIF=1 00:31:50.849 05:27:27 -- target/dif.sh@135 -- # nvmftestinit 00:31:50.849 05:27:27 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:31:50.849 05:27:27 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:50.849 05:27:27 -- nvmf/common.sh@437 -- # prepare_net_devs 00:31:50.849 05:27:27 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:31:50.849 05:27:27 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:31:50.849 05:27:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:50.849 05:27:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:50.849 05:27:27 -- common/autotest_common.sh@22 
-- # _remove_spdk_ns 00:31:50.849 05:27:27 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:31:50.849 05:27:27 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:31:50.849 05:27:27 -- nvmf/common.sh@285 -- # xtrace_disable 00:31:50.849 05:27:27 -- common/autotest_common.sh@10 -- # set +x 00:31:52.752 05:27:29 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:52.752 05:27:29 -- nvmf/common.sh@291 -- # pci_devs=() 00:31:52.752 05:27:29 -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:52.752 05:27:29 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:52.752 05:27:29 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:52.752 05:27:29 -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:52.752 05:27:29 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:52.752 05:27:29 -- nvmf/common.sh@295 -- # net_devs=() 00:31:52.752 05:27:29 -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:52.752 05:27:29 -- nvmf/common.sh@296 -- # e810=() 00:31:52.752 05:27:29 -- nvmf/common.sh@296 -- # local -ga e810 00:31:52.752 05:27:29 -- nvmf/common.sh@297 -- # x722=() 00:31:52.752 05:27:29 -- nvmf/common.sh@297 -- # local -ga x722 00:31:52.752 05:27:29 -- nvmf/common.sh@298 -- # mlx=() 00:31:52.752 05:27:29 -- nvmf/common.sh@298 -- # local -ga mlx 00:31:52.752 05:27:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:52.752 05:27:29 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:52.752 05:27:29 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:52.752 05:27:29 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:52.752 05:27:29 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:52.752 05:27:29 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:52.752 05:27:29 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:52.752 05:27:29 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:52.752 05:27:29 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:52.752 05:27:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:52.752 05:27:29 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:52.752 05:27:29 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:52.752 05:27:29 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:52.752 05:27:29 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:52.752 05:27:29 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:52.752 05:27:29 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:52.752 05:27:29 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:52.752 05:27:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:52.752 05:27:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:52.752 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:52.752 05:27:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:52.752 05:27:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:52.752 05:27:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:52.752 05:27:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:52.752 05:27:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:52.752 05:27:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:52.752 05:27:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:52.752 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:52.752 05:27:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:52.752 05:27:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:52.752 05:27:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:52.752 05:27:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:52.752 05:27:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:52.752 05:27:29 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:52.752 05:27:29 -- 
nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:52.752 05:27:29 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:52.752 05:27:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:52.752 05:27:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:52.752 05:27:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:31:52.752 05:27:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:52.752 05:27:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:52.752 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:52.753 05:27:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:31:52.753 05:27:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:52.753 05:27:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:52.753 05:27:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:31:52.753 05:27:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:52.753 05:27:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:52.753 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:52.753 05:27:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:31:52.753 05:27:29 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:31:52.753 05:27:29 -- nvmf/common.sh@403 -- # is_hw=yes 00:31:52.753 05:27:29 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:31:52.753 05:27:29 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:31:52.753 05:27:29 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:31:52.753 05:27:29 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:52.753 05:27:29 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:52.753 05:27:29 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:52.753 05:27:29 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:52.753 05:27:29 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:52.753 05:27:29 -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:52.753 05:27:29 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:52.753 05:27:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:52.753 05:27:29 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:52.753 05:27:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:52.753 05:27:29 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:52.753 05:27:29 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:52.753 05:27:29 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:52.753 05:27:29 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:52.753 05:27:29 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:52.753 05:27:29 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:52.753 05:27:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:52.753 05:27:29 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:52.753 05:27:29 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:52.753 05:27:29 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:52.753 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:52.753 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:31:52.753 00:31:52.753 --- 10.0.0.2 ping statistics --- 00:31:52.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:52.753 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:31:52.753 05:27:29 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:52.753 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:52.753 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:31:52.753 00:31:52.753 --- 10.0.0.1 ping statistics --- 00:31:52.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:52.753 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:31:52.753 05:27:29 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:52.753 05:27:29 -- nvmf/common.sh@411 -- # return 0 00:31:52.753 05:27:29 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:31:52.753 05:27:29 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:53.690 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:31:53.690 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:31:53.690 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:31:53.690 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:31:53.690 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:31:53.690 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:31:53.690 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:31:53.690 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:31:53.690 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:31:53.690 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:31:53.690 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:31:53.690 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:31:53.690 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:31:53.690 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:31:53.690 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:31:53.690 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:31:53.690 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:31:53.950 05:27:31 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:53.950 05:27:31 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 
00:31:53.950 05:27:31 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:31:53.950 05:27:31 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:53.950 05:27:31 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:31:53.950 05:27:31 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:31:53.950 05:27:31 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:31:53.950 05:27:31 -- target/dif.sh@137 -- # nvmfappstart 00:31:53.950 05:27:31 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:31:53.950 05:27:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:31:53.950 05:27:31 -- common/autotest_common.sh@10 -- # set +x 00:31:53.950 05:27:31 -- nvmf/common.sh@470 -- # nvmfpid=2031329 00:31:53.950 05:27:31 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:31:53.950 05:27:31 -- nvmf/common.sh@471 -- # waitforlisten 2031329 00:31:53.950 05:27:31 -- common/autotest_common.sh@817 -- # '[' -z 2031329 ']' 00:31:53.950 05:27:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:53.950 05:27:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:53.950 05:27:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:53.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:53.950 05:27:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:53.950 05:27:31 -- common/autotest_common.sh@10 -- # set +x 00:31:53.950 [2024-04-24 05:27:31.086042] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 
00:31:53.950 [2024-04-24 05:27:31.086123] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:53.950 EAL: No free 2048 kB hugepages reported on node 1 00:31:53.950 [2024-04-24 05:27:31.124109] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:53.950 [2024-04-24 05:27:31.151852] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:54.210 [2024-04-24 05:27:31.237522] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:54.210 [2024-04-24 05:27:31.237585] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:54.210 [2024-04-24 05:27:31.237598] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:54.210 [2024-04-24 05:27:31.237625] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:54.210 [2024-04-24 05:27:31.237650] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:54.210 [2024-04-24 05:27:31.237683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:54.210 05:27:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:54.210 05:27:31 -- common/autotest_common.sh@850 -- # return 0 00:31:54.210 05:27:31 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:31:54.210 05:27:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:54.210 05:27:31 -- common/autotest_common.sh@10 -- # set +x 00:31:54.210 05:27:31 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:54.210 05:27:31 -- target/dif.sh@139 -- # create_transport 00:31:54.210 05:27:31 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:31:54.210 05:27:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:54.210 05:27:31 -- common/autotest_common.sh@10 -- # set +x 00:31:54.210 [2024-04-24 05:27:31.378879] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:54.210 05:27:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:54.210 05:27:31 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:31:54.210 05:27:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:54.210 05:27:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:54.210 05:27:31 -- common/autotest_common.sh@10 -- # set +x 00:31:54.471 ************************************ 00:31:54.471 START TEST fio_dif_1_default 00:31:54.471 ************************************ 00:31:54.471 05:27:31 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:31:54.471 05:27:31 -- target/dif.sh@86 -- # create_subsystems 0 00:31:54.471 05:27:31 -- target/dif.sh@28 -- # local sub 00:31:54.471 05:27:31 -- target/dif.sh@30 -- # for sub in "$@" 00:31:54.471 05:27:31 -- target/dif.sh@31 -- # create_subsystem 0 00:31:54.471 05:27:31 -- target/dif.sh@18 -- # local sub_id=0 00:31:54.471 05:27:31 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create 
bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:54.471 05:27:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:54.471 05:27:31 -- common/autotest_common.sh@10 -- # set +x 00:31:54.471 bdev_null0 00:31:54.471 05:27:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:54.471 05:27:31 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:54.471 05:27:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:54.471 05:27:31 -- common/autotest_common.sh@10 -- # set +x 00:31:54.471 05:27:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:54.471 05:27:31 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:54.471 05:27:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:54.471 05:27:31 -- common/autotest_common.sh@10 -- # set +x 00:31:54.471 05:27:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:54.471 05:27:31 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:54.471 05:27:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:54.471 05:27:31 -- common/autotest_common.sh@10 -- # set +x 00:31:54.471 [2024-04-24 05:27:31.515399] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:54.471 05:27:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:54.471 05:27:31 -- target/dif.sh@87 -- # fio /dev/fd/62 00:31:54.471 05:27:31 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:31:54.471 05:27:31 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:54.471 05:27:31 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:54.471 05:27:31 -- nvmf/common.sh@521 -- # config=() 00:31:54.471 05:27:31 -- nvmf/common.sh@521 -- # local subsystem config 00:31:54.471 05:27:31 -- common/autotest_common.sh@1342 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:54.471 05:27:31 -- target/dif.sh@82 -- # gen_fio_conf 00:31:54.471 05:27:31 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:31:54.471 05:27:31 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:31:54.471 05:27:31 -- target/dif.sh@54 -- # local file 00:31:54.471 05:27:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:31:54.471 { 00:31:54.471 "params": { 00:31:54.471 "name": "Nvme$subsystem", 00:31:54.471 "trtype": "$TEST_TRANSPORT", 00:31:54.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:54.471 "adrfam": "ipv4", 00:31:54.471 "trsvcid": "$NVMF_PORT", 00:31:54.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:54.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:54.471 "hdgst": ${hdgst:-false}, 00:31:54.471 "ddgst": ${ddgst:-false} 00:31:54.471 }, 00:31:54.472 "method": "bdev_nvme_attach_controller" 00:31:54.472 } 00:31:54.472 EOF 00:31:54.472 )") 00:31:54.472 05:27:31 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:54.472 05:27:31 -- common/autotest_common.sh@1325 -- # local sanitizers 00:31:54.472 05:27:31 -- target/dif.sh@56 -- # cat 00:31:54.472 05:27:31 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:54.472 05:27:31 -- common/autotest_common.sh@1327 -- # shift 00:31:54.472 05:27:31 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:31:54.472 05:27:31 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:31:54.472 05:27:31 -- nvmf/common.sh@543 -- # cat 00:31:54.472 05:27:31 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:54.472 05:27:31 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:54.472 05:27:31 -- common/autotest_common.sh@1331 -- # grep libasan 00:31:54.472 
05:27:31 -- target/dif.sh@72 -- # (( file <= files )) 00:31:54.472 05:27:31 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:31:54.472 05:27:31 -- nvmf/common.sh@545 -- # jq . 00:31:54.472 05:27:31 -- nvmf/common.sh@546 -- # IFS=, 00:31:54.472 05:27:31 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:31:54.472 "params": { 00:31:54.472 "name": "Nvme0", 00:31:54.472 "trtype": "tcp", 00:31:54.472 "traddr": "10.0.0.2", 00:31:54.472 "adrfam": "ipv4", 00:31:54.472 "trsvcid": "4420", 00:31:54.472 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:54.472 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:54.472 "hdgst": false, 00:31:54.472 "ddgst": false 00:31:54.472 }, 00:31:54.472 "method": "bdev_nvme_attach_controller" 00:31:54.472 }' 00:31:54.472 05:27:31 -- common/autotest_common.sh@1331 -- # asan_lib= 00:31:54.472 05:27:31 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:31:54.472 05:27:31 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:31:54.472 05:27:31 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:54.472 05:27:31 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:31:54.472 05:27:31 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:31:54.472 05:27:31 -- common/autotest_common.sh@1331 -- # asan_lib= 00:31:54.472 05:27:31 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:31:54.472 05:27:31 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:54.472 05:27:31 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:54.732 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:54.732 fio-3.35 00:31:54.732 Starting 1 thread 00:31:54.732 EAL: No free 2048 kB hugepages reported on node 1 00:32:06.930 00:32:06.930 filename0: (groupid=0, 
jobs=1): err= 0: pid=2031563: Wed Apr 24 05:27:42 2024 00:32:06.930 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10014msec) 00:32:06.930 slat (nsec): min=4269, max=79389, avg=8159.24, stdev=3521.53 00:32:06.930 clat (usec): min=40885, max=47818, avg=41014.97, stdev=447.37 00:32:06.930 lat (usec): min=40891, max=47847, avg=41023.13, stdev=447.77 00:32:06.930 clat percentiles (usec): 00:32:06.930 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:32:06.930 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:06.930 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:06.930 | 99.00th=[41681], 99.50th=[41681], 99.90th=[47973], 99.95th=[47973], 00:32:06.930 | 99.99th=[47973] 00:32:06.930 bw ( KiB/s): min= 384, max= 416, per=99.52%, avg=388.80, stdev=11.72, samples=20 00:32:06.930 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:32:06.930 lat (msec) : 50=100.00% 00:32:06.930 cpu : usr=89.24%, sys=10.48%, ctx=20, majf=0, minf=307 00:32:06.930 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:06.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.930 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.930 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:06.930 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:06.930 00:32:06.930 Run status group 0 (all jobs): 00:32:06.930 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10014-10014msec 00:32:06.930 05:27:42 -- target/dif.sh@88 -- # destroy_subsystems 0 00:32:06.930 05:27:42 -- target/dif.sh@43 -- # local sub 00:32:06.930 05:27:42 -- target/dif.sh@45 -- # for sub in "$@" 00:32:06.930 05:27:42 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:06.930 05:27:42 -- target/dif.sh@36 -- # local sub_id=0 00:32:06.930 05:27:42 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:32:06.930 05:27:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:06.930 05:27:42 -- common/autotest_common.sh@10 -- # set +x 00:32:06.930 05:27:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:06.930 05:27:42 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:06.930 05:27:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:06.930 05:27:42 -- common/autotest_common.sh@10 -- # set +x 00:32:06.930 05:27:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:06.930 00:32:06.930 real 0m11.037s 00:32:06.930 user 0m10.034s 00:32:06.930 sys 0m1.325s 00:32:06.930 05:27:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:06.930 05:27:42 -- common/autotest_common.sh@10 -- # set +x 00:32:06.930 ************************************ 00:32:06.930 END TEST fio_dif_1_default 00:32:06.930 ************************************ 00:32:06.930 05:27:42 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:32:06.930 05:27:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:06.930 05:27:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:06.930 05:27:42 -- common/autotest_common.sh@10 -- # set +x 00:32:06.930 ************************************ 00:32:06.930 START TEST fio_dif_1_multi_subsystems 00:32:06.930 ************************************ 00:32:06.930 05:27:42 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:32:06.930 05:27:42 -- target/dif.sh@92 -- # local files=1 00:32:06.931 05:27:42 -- target/dif.sh@94 -- # create_subsystems 0 1 00:32:06.931 05:27:42 -- target/dif.sh@28 -- # local sub 00:32:06.931 05:27:42 -- target/dif.sh@30 -- # for sub in "$@" 00:32:06.931 05:27:42 -- target/dif.sh@31 -- # create_subsystem 0 00:32:06.931 05:27:42 -- target/dif.sh@18 -- # local sub_id=0 00:32:06.931 05:27:42 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:06.931 05:27:42 
-- common/autotest_common.sh@549 -- # xtrace_disable 00:32:06.931 05:27:42 -- common/autotest_common.sh@10 -- # set +x 00:32:06.931 bdev_null0 00:32:06.931 05:27:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:06.931 05:27:42 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:06.931 05:27:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:06.931 05:27:42 -- common/autotest_common.sh@10 -- # set +x 00:32:06.931 05:27:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:06.931 05:27:42 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:06.931 05:27:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:06.931 05:27:42 -- common/autotest_common.sh@10 -- # set +x 00:32:06.931 05:27:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:06.931 05:27:42 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:06.931 05:27:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:06.931 05:27:42 -- common/autotest_common.sh@10 -- # set +x 00:32:06.931 [2024-04-24 05:27:42.671791] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:06.931 05:27:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:06.931 05:27:42 -- target/dif.sh@30 -- # for sub in "$@" 00:32:06.931 05:27:42 -- target/dif.sh@31 -- # create_subsystem 1 00:32:06.931 05:27:42 -- target/dif.sh@18 -- # local sub_id=1 00:32:06.931 05:27:42 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:06.931 05:27:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:06.931 05:27:42 -- common/autotest_common.sh@10 -- # set +x 00:32:06.931 bdev_null1 00:32:06.931 05:27:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:06.931 05:27:42 -- target/dif.sh@22 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:06.931 05:27:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:06.931 05:27:42 -- common/autotest_common.sh@10 -- # set +x 00:32:06.931 05:27:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:06.931 05:27:42 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:06.931 05:27:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:06.931 05:27:42 -- common/autotest_common.sh@10 -- # set +x 00:32:06.931 05:27:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:06.931 05:27:42 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:06.931 05:27:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:06.931 05:27:42 -- common/autotest_common.sh@10 -- # set +x 00:32:06.931 05:27:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:06.931 05:27:42 -- target/dif.sh@95 -- # fio /dev/fd/62 00:32:06.931 05:27:42 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:32:06.931 05:27:42 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:06.931 05:27:42 -- nvmf/common.sh@521 -- # config=() 00:32:06.931 05:27:42 -- nvmf/common.sh@521 -- # local subsystem config 00:32:06.931 05:27:42 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:32:06.931 05:27:42 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:06.931 05:27:42 -- target/dif.sh@82 -- # gen_fio_conf 00:32:06.931 05:27:42 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:32:06.931 { 00:32:06.931 "params": { 00:32:06.931 "name": "Nvme$subsystem", 00:32:06.931 "trtype": "$TEST_TRANSPORT", 00:32:06.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:06.931 "adrfam": "ipv4", 00:32:06.931 "trsvcid": "$NVMF_PORT", 00:32:06.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:06.931 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:32:06.931 "hdgst": ${hdgst:-false}, 00:32:06.931 "ddgst": ${ddgst:-false} 00:32:06.931 }, 00:32:06.931 "method": "bdev_nvme_attach_controller" 00:32:06.931 } 00:32:06.931 EOF 00:32:06.931 )") 00:32:06.931 05:27:42 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:06.931 05:27:42 -- target/dif.sh@54 -- # local file 00:32:06.931 05:27:42 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:32:06.931 05:27:42 -- target/dif.sh@56 -- # cat 00:32:06.931 05:27:42 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:06.931 05:27:42 -- common/autotest_common.sh@1325 -- # local sanitizers 00:32:06.931 05:27:42 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:06.931 05:27:42 -- common/autotest_common.sh@1327 -- # shift 00:32:06.931 05:27:42 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:32:06.931 05:27:42 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:32:06.931 05:27:42 -- nvmf/common.sh@543 -- # cat 00:32:06.931 05:27:42 -- target/dif.sh@72 -- # (( file = 1 )) 00:32:06.931 05:27:42 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:06.931 05:27:42 -- target/dif.sh@72 -- # (( file <= files )) 00:32:06.931 05:27:42 -- target/dif.sh@73 -- # cat 00:32:06.931 05:27:42 -- common/autotest_common.sh@1331 -- # grep libasan 00:32:06.931 05:27:42 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:32:06.931 05:27:42 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:32:06.931 05:27:42 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:32:06.931 { 00:32:06.931 "params": { 00:32:06.931 "name": "Nvme$subsystem", 00:32:06.931 "trtype": "$TEST_TRANSPORT", 00:32:06.931 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:32:06.931 "adrfam": "ipv4", 00:32:06.931 "trsvcid": "$NVMF_PORT", 00:32:06.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:06.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:06.931 "hdgst": ${hdgst:-false}, 00:32:06.931 "ddgst": ${ddgst:-false} 00:32:06.931 }, 00:32:06.931 "method": "bdev_nvme_attach_controller" 00:32:06.931 } 00:32:06.931 EOF 00:32:06.931 )") 00:32:06.931 05:27:42 -- target/dif.sh@72 -- # (( file++ )) 00:32:06.931 05:27:42 -- target/dif.sh@72 -- # (( file <= files )) 00:32:06.931 05:27:42 -- nvmf/common.sh@543 -- # cat 00:32:06.931 05:27:42 -- nvmf/common.sh@545 -- # jq . 00:32:06.931 05:27:42 -- nvmf/common.sh@546 -- # IFS=, 00:32:06.931 05:27:42 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:32:06.931 "params": { 00:32:06.931 "name": "Nvme0", 00:32:06.931 "trtype": "tcp", 00:32:06.931 "traddr": "10.0.0.2", 00:32:06.931 "adrfam": "ipv4", 00:32:06.931 "trsvcid": "4420", 00:32:06.931 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:06.931 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:06.931 "hdgst": false, 00:32:06.931 "ddgst": false 00:32:06.931 }, 00:32:06.931 "method": "bdev_nvme_attach_controller" 00:32:06.931 },{ 00:32:06.931 "params": { 00:32:06.931 "name": "Nvme1", 00:32:06.931 "trtype": "tcp", 00:32:06.931 "traddr": "10.0.0.2", 00:32:06.931 "adrfam": "ipv4", 00:32:06.931 "trsvcid": "4420", 00:32:06.931 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:06.931 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:06.931 "hdgst": false, 00:32:06.931 "ddgst": false 00:32:06.931 }, 00:32:06.931 "method": "bdev_nvme_attach_controller" 00:32:06.931 }' 00:32:06.931 05:27:42 -- common/autotest_common.sh@1331 -- # asan_lib= 00:32:06.931 05:27:42 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:32:06.931 05:27:42 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:32:06.931 05:27:42 -- common/autotest_common.sh@1331 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:06.931 05:27:42 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:32:06.931 05:27:42 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:32:06.931 05:27:42 -- common/autotest_common.sh@1331 -- # asan_lib= 00:32:06.931 05:27:42 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:32:06.931 05:27:42 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:06.931 05:27:42 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:06.931 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:06.931 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:06.931 fio-3.35 00:32:06.931 Starting 2 threads 00:32:06.931 EAL: No free 2048 kB hugepages reported on node 1 00:32:16.905 00:32:16.905 filename0: (groupid=0, jobs=1): err= 0: pid=2032969: Wed Apr 24 05:27:53 2024 00:32:16.905 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10011msec) 00:32:16.905 slat (nsec): min=6956, max=54818, avg=9592.40, stdev=2901.32 00:32:16.905 clat (usec): min=40862, max=45256, avg=40995.58, stdev=285.09 00:32:16.905 lat (usec): min=40870, max=45275, avg=41005.18, stdev=285.50 00:32:16.905 clat percentiles (usec): 00:32:16.905 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:32:16.905 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:16.905 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:16.905 | 99.00th=[41681], 99.50th=[41681], 99.90th=[45351], 99.95th=[45351], 00:32:16.905 | 99.99th=[45351] 00:32:16.905 bw ( KiB/s): min= 384, max= 416, per=49.75%, avg=388.80, stdev=11.72, samples=20 00:32:16.905 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 
00:32:16.905 lat (msec) : 50=100.00% 00:32:16.905 cpu : usr=94.35%, sys=5.31%, ctx=35, majf=0, minf=120 00:32:16.905 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:16.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.905 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.905 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.905 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:16.905 filename1: (groupid=0, jobs=1): err= 0: pid=2032970: Wed Apr 24 05:27:53 2024 00:32:16.905 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10010msec) 00:32:16.905 slat (nsec): min=7401, max=34819, avg=9492.38, stdev=2602.52 00:32:16.905 clat (usec): min=40883, max=45281, avg=40992.08, stdev=281.71 00:32:16.905 lat (usec): min=40891, max=45300, avg=41001.57, stdev=282.00 00:32:16.905 clat percentiles (usec): 00:32:16.905 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:32:16.905 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:16.905 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:16.905 | 99.00th=[41157], 99.50th=[41681], 99.90th=[45351], 99.95th=[45351], 00:32:16.905 | 99.99th=[45351] 00:32:16.905 bw ( KiB/s): min= 384, max= 416, per=49.75%, avg=388.80, stdev=11.72, samples=20 00:32:16.905 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:32:16.905 lat (msec) : 50=100.00% 00:32:16.906 cpu : usr=94.68%, sys=5.05%, ctx=10, majf=0, minf=135 00:32:16.906 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:16.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.906 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.906 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:16.906 00:32:16.906 Run 
status group 0 (all jobs): 00:32:16.906 READ: bw=780KiB/s (799kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=7808KiB (7995kB), run=10010-10011msec 00:32:16.906 05:27:54 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:32:16.906 05:27:54 -- target/dif.sh@43 -- # local sub 00:32:16.906 05:27:54 -- target/dif.sh@45 -- # for sub in "$@" 00:32:16.906 05:27:54 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:16.906 05:27:54 -- target/dif.sh@36 -- # local sub_id=0 00:32:16.906 05:27:54 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:16.906 05:27:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:16.906 05:27:54 -- common/autotest_common.sh@10 -- # set +x 00:32:16.906 05:27:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:16.906 05:27:54 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:16.906 05:27:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:16.906 05:27:54 -- common/autotest_common.sh@10 -- # set +x 00:32:16.906 05:27:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:16.906 05:27:54 -- target/dif.sh@45 -- # for sub in "$@" 00:32:16.906 05:27:54 -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:16.906 05:27:54 -- target/dif.sh@36 -- # local sub_id=1 00:32:16.906 05:27:54 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:16.906 05:27:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:16.906 05:27:54 -- common/autotest_common.sh@10 -- # set +x 00:32:16.906 05:27:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:16.906 05:27:54 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:16.906 05:27:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:16.906 05:27:54 -- common/autotest_common.sh@10 -- # set +x 00:32:16.906 05:27:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:16.906 00:32:16.906 real 0m11.399s 00:32:16.906 user 0m20.299s 00:32:16.906 sys 0m1.346s 00:32:16.906 
05:27:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:16.906 05:27:54 -- common/autotest_common.sh@10 -- # set +x 00:32:16.906 ************************************ 00:32:16.906 END TEST fio_dif_1_multi_subsystems 00:32:16.906 ************************************ 00:32:16.906 05:27:54 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:32:16.906 05:27:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:16.906 05:27:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:16.906 05:27:54 -- common/autotest_common.sh@10 -- # set +x 00:32:16.906 ************************************ 00:32:16.906 START TEST fio_dif_rand_params 00:32:16.906 ************************************ 00:32:16.906 05:27:54 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:32:16.906 05:27:54 -- target/dif.sh@100 -- # local NULL_DIF 00:32:16.906 05:27:54 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:32:16.906 05:27:54 -- target/dif.sh@103 -- # NULL_DIF=3 00:32:16.906 05:27:54 -- target/dif.sh@103 -- # bs=128k 00:32:16.906 05:27:54 -- target/dif.sh@103 -- # numjobs=3 00:32:16.906 05:27:54 -- target/dif.sh@103 -- # iodepth=3 00:32:16.906 05:27:54 -- target/dif.sh@103 -- # runtime=5 00:32:16.906 05:27:54 -- target/dif.sh@105 -- # create_subsystems 0 00:32:16.906 05:27:54 -- target/dif.sh@28 -- # local sub 00:32:16.906 05:27:54 -- target/dif.sh@30 -- # for sub in "$@" 00:32:16.906 05:27:54 -- target/dif.sh@31 -- # create_subsystem 0 00:32:16.906 05:27:54 -- target/dif.sh@18 -- # local sub_id=0 00:32:16.906 05:27:54 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:16.906 05:27:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:16.906 05:27:54 -- common/autotest_common.sh@10 -- # set +x 00:32:16.906 bdev_null0 00:32:16.906 05:27:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:16.906 05:27:54 -- target/dif.sh@22 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:16.906 05:27:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:16.906 05:27:54 -- common/autotest_common.sh@10 -- # set +x 00:32:16.906 05:27:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:16.906 05:27:54 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:16.906 05:27:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:16.906 05:27:54 -- common/autotest_common.sh@10 -- # set +x 00:32:17.165 05:27:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:17.165 05:27:54 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:17.165 05:27:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:17.165 05:27:54 -- common/autotest_common.sh@10 -- # set +x 00:32:17.165 [2024-04-24 05:27:54.185477] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:17.165 05:27:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:17.165 05:27:54 -- target/dif.sh@106 -- # fio /dev/fd/62 00:32:17.165 05:27:54 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:32:17.165 05:27:54 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:17.165 05:27:54 -- nvmf/common.sh@521 -- # config=() 00:32:17.165 05:27:54 -- nvmf/common.sh@521 -- # local subsystem config 00:32:17.165 05:27:54 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:32:17.165 05:27:54 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:17.165 05:27:54 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:32:17.165 { 00:32:17.165 "params": { 00:32:17.165 "name": "Nvme$subsystem", 00:32:17.165 "trtype": "$TEST_TRANSPORT", 00:32:17.165 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:17.165 "adrfam": "ipv4", 00:32:17.165 "trsvcid": "$NVMF_PORT", 00:32:17.165 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:32:17.165 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:17.165 "hdgst": ${hdgst:-false}, 00:32:17.165 "ddgst": ${ddgst:-false} 00:32:17.165 }, 00:32:17.165 "method": "bdev_nvme_attach_controller" 00:32:17.165 } 00:32:17.165 EOF 00:32:17.165 )") 00:32:17.165 05:27:54 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:17.165 05:27:54 -- target/dif.sh@82 -- # gen_fio_conf 00:32:17.165 05:27:54 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:32:17.165 05:27:54 -- target/dif.sh@54 -- # local file 00:32:17.165 05:27:54 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:17.165 05:27:54 -- common/autotest_common.sh@1325 -- # local sanitizers 00:32:17.165 05:27:54 -- target/dif.sh@56 -- # cat 00:32:17.165 05:27:54 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:17.165 05:27:54 -- common/autotest_common.sh@1327 -- # shift 00:32:17.165 05:27:54 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:32:17.165 05:27:54 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:32:17.165 05:27:54 -- nvmf/common.sh@543 -- # cat 00:32:17.165 05:27:54 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:17.165 05:27:54 -- target/dif.sh@72 -- # (( file = 1 )) 00:32:17.165 05:27:54 -- common/autotest_common.sh@1331 -- # grep libasan 00:32:17.165 05:27:54 -- target/dif.sh@72 -- # (( file <= files )) 00:32:17.165 05:27:54 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:32:17.165 05:27:54 -- nvmf/common.sh@545 -- # jq . 
00:32:17.165 05:27:54 -- nvmf/common.sh@546 -- # IFS=, 00:32:17.165 05:27:54 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:32:17.165 "params": { 00:32:17.165 "name": "Nvme0", 00:32:17.165 "trtype": "tcp", 00:32:17.165 "traddr": "10.0.0.2", 00:32:17.165 "adrfam": "ipv4", 00:32:17.165 "trsvcid": "4420", 00:32:17.165 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:17.165 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:17.165 "hdgst": false, 00:32:17.165 "ddgst": false 00:32:17.165 }, 00:32:17.165 "method": "bdev_nvme_attach_controller" 00:32:17.165 }' 00:32:17.165 05:27:54 -- common/autotest_common.sh@1331 -- # asan_lib= 00:32:17.165 05:27:54 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:32:17.165 05:27:54 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:32:17.165 05:27:54 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:17.165 05:27:54 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:32:17.165 05:27:54 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:32:17.165 05:27:54 -- common/autotest_common.sh@1331 -- # asan_lib= 00:32:17.165 05:27:54 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:32:17.165 05:27:54 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:17.165 05:27:54 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:17.423 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:17.423 ... 
00:32:17.423 fio-3.35 00:32:17.423 Starting 3 threads 00:32:17.423 EAL: No free 2048 kB hugepages reported on node 1 00:32:23.985 00:32:23.985 filename0: (groupid=0, jobs=1): err= 0: pid=2034384: Wed Apr 24 05:28:00 2024 00:32:23.985 read: IOPS=202, BW=25.3MiB/s (26.5MB/s)(127MiB/5006msec) 00:32:23.985 slat (nsec): min=6232, max=36223, avg=12038.45, stdev=3129.97 00:32:23.985 clat (usec): min=5610, max=57513, avg=14790.57, stdev=11868.97 00:32:23.985 lat (usec): min=5622, max=57521, avg=14802.61, stdev=11868.92 00:32:23.985 clat percentiles (usec): 00:32:23.985 | 1.00th=[ 6063], 5.00th=[ 6456], 10.00th=[ 7046], 20.00th=[ 8979], 00:32:23.985 | 30.00th=[ 9765], 40.00th=[10421], 50.00th=[11469], 60.00th=[12649], 00:32:23.985 | 70.00th=[13698], 80.00th=[14746], 90.00th=[17171], 95.00th=[51643], 00:32:23.985 | 99.00th=[55313], 99.50th=[55837], 99.90th=[56361], 99.95th=[57410], 00:32:23.985 | 99.99th=[57410] 00:32:23.985 bw ( KiB/s): min=18432, max=34560, per=34.06%, avg=25907.20, stdev=5043.77, samples=10 00:32:23.985 iops : min= 144, max= 270, avg=202.40, stdev=39.40, samples=10 00:32:23.985 lat (msec) : 10=32.84%, 20=58.58%, 50=1.18%, 100=7.40% 00:32:23.985 cpu : usr=90.57%, sys=9.03%, ctx=10, majf=0, minf=105 00:32:23.985 IO depths : 1=1.7%, 2=98.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:23.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:23.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:23.985 issued rwts: total=1014,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:23.985 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:23.985 filename0: (groupid=0, jobs=1): err= 0: pid=2034385: Wed Apr 24 05:28:00 2024 00:32:23.985 read: IOPS=189, BW=23.7MiB/s (24.8MB/s)(120MiB/5047msec) 00:32:23.985 slat (nsec): min=5654, max=33409, avg=12069.17, stdev=3543.87 00:32:23.985 clat (usec): min=5300, max=92079, avg=15776.08, stdev=13118.27 00:32:23.985 lat (usec): min=5312, max=92091, 
avg=15788.15, stdev=13118.19 00:32:23.985 clat percentiles (usec): 00:32:23.985 | 1.00th=[ 6063], 5.00th=[ 6456], 10.00th=[ 8094], 20.00th=[ 9110], 00:32:23.985 | 30.00th=[ 9765], 40.00th=[10814], 50.00th=[11994], 60.00th=[12911], 00:32:23.985 | 70.00th=[13698], 80.00th=[14877], 90.00th=[48497], 95.00th=[52167], 00:32:23.985 | 99.00th=[55837], 99.50th=[57934], 99.90th=[91751], 99.95th=[91751], 00:32:23.985 | 99.99th=[91751] 00:32:23.985 bw ( KiB/s): min=18944, max=31488, per=32.08%, avg=24403.00, stdev=4719.33, samples=10 00:32:23.985 iops : min= 148, max= 246, avg=190.60, stdev=36.79, samples=10 00:32:23.985 lat (msec) : 10=33.68%, 20=55.54%, 50=2.51%, 100=8.26% 00:32:23.985 cpu : usr=90.69%, sys=8.88%, ctx=9, majf=0, minf=76 00:32:23.985 IO depths : 1=1.4%, 2=98.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:23.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:23.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:23.985 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:23.985 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:23.985 filename0: (groupid=0, jobs=1): err= 0: pid=2034386: Wed Apr 24 05:28:00 2024 00:32:23.985 read: IOPS=205, BW=25.7MiB/s (26.9MB/s)(129MiB/5008msec) 00:32:23.985 slat (nsec): min=7315, max=62777, avg=12150.35, stdev=3883.26 00:32:23.985 clat (usec): min=5630, max=57533, avg=14580.13, stdev=11613.21 00:32:23.985 lat (usec): min=5641, max=57546, avg=14592.29, stdev=11613.24 00:32:23.986 clat percentiles (usec): 00:32:23.986 | 1.00th=[ 6128], 5.00th=[ 6783], 10.00th=[ 7832], 20.00th=[ 9110], 00:32:23.986 | 30.00th=[ 9634], 40.00th=[10290], 50.00th=[11338], 60.00th=[12387], 00:32:23.986 | 70.00th=[13304], 80.00th=[14353], 90.00th=[16909], 95.00th=[51119], 00:32:23.986 | 99.00th=[54264], 99.50th=[55313], 99.90th=[56886], 99.95th=[57410], 00:32:23.986 | 99.99th=[57410] 00:32:23.986 bw ( KiB/s): min=22528, max=29952, per=34.53%, avg=26265.60, 
stdev=2268.01, samples=10 00:32:23.986 iops : min= 176, max= 234, avg=205.20, stdev=17.72, samples=10 00:32:23.986 lat (msec) : 10=35.67%, 20=55.88%, 50=2.14%, 100=6.32% 00:32:23.986 cpu : usr=90.17%, sys=9.39%, ctx=9, majf=0, minf=107 00:32:23.986 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:23.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:23.986 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:23.986 issued rwts: total=1029,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:23.986 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:23.986 00:32:23.986 Run status group 0 (all jobs): 00:32:23.986 READ: bw=74.3MiB/s (77.9MB/s), 23.7MiB/s-25.7MiB/s (24.8MB/s-26.9MB/s), io=375MiB (393MB), run=5006-5047msec 00:32:23.986 05:28:00 -- target/dif.sh@107 -- # destroy_subsystems 0 00:32:23.986 05:28:00 -- target/dif.sh@43 -- # local sub 00:32:23.986 05:28:00 -- target/dif.sh@45 -- # for sub in "$@" 00:32:23.986 05:28:00 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:23.986 05:28:00 -- target/dif.sh@36 -- # local sub_id=0 00:32:23.986 05:28:00 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:23.986 05:28:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:23.986 05:28:00 -- common/autotest_common.sh@10 -- # set +x 00:32:23.986 05:28:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:23.986 05:28:00 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:23.986 05:28:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:23.986 05:28:00 -- common/autotest_common.sh@10 -- # set +x 00:32:23.986 05:28:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:23.986 05:28:00 -- target/dif.sh@109 -- # NULL_DIF=2 00:32:23.986 05:28:00 -- target/dif.sh@109 -- # bs=4k 00:32:23.986 05:28:00 -- target/dif.sh@109 -- # numjobs=8 00:32:23.986 05:28:00 -- target/dif.sh@109 -- # iodepth=16 00:32:23.986 
05:28:00 -- target/dif.sh@109 -- # runtime= 00:32:23.986 05:28:00 -- target/dif.sh@109 -- # files=2 00:32:23.986 05:28:00 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:32:23.986 05:28:00 -- target/dif.sh@28 -- # local sub 00:32:23.986 05:28:00 -- target/dif.sh@30 -- # for sub in "$@" 00:32:23.986 05:28:00 -- target/dif.sh@31 -- # create_subsystem 0 00:32:23.986 05:28:00 -- target/dif.sh@18 -- # local sub_id=0 00:32:23.986 05:28:00 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:32:23.986 05:28:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:23.986 05:28:00 -- common/autotest_common.sh@10 -- # set +x 00:32:23.986 bdev_null0 00:32:23.986 05:28:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:23.986 05:28:00 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:23.986 05:28:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:23.986 05:28:00 -- common/autotest_common.sh@10 -- # set +x 00:32:23.986 05:28:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:23.986 05:28:00 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:23.986 05:28:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:23.986 05:28:00 -- common/autotest_common.sh@10 -- # set +x 00:32:23.986 05:28:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:23.986 05:28:00 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:23.986 05:28:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:23.986 05:28:00 -- common/autotest_common.sh@10 -- # set +x 00:32:23.986 [2024-04-24 05:28:00.466303] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:23.986 05:28:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:23.986 05:28:00 -- target/dif.sh@30 -- # 
for sub in "$@" 00:32:23.986 05:28:00 -- target/dif.sh@31 -- # create_subsystem 1 00:32:23.986 05:28:00 -- target/dif.sh@18 -- # local sub_id=1 00:32:23.986 05:28:00 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:32:23.986 05:28:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:23.986 05:28:00 -- common/autotest_common.sh@10 -- # set +x 00:32:23.986 bdev_null1 00:32:23.986 05:28:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:23.986 05:28:00 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:23.986 05:28:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:23.986 05:28:00 -- common/autotest_common.sh@10 -- # set +x 00:32:23.986 05:28:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:23.986 05:28:00 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:23.986 05:28:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:23.986 05:28:00 -- common/autotest_common.sh@10 -- # set +x 00:32:23.986 05:28:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:23.986 05:28:00 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:23.986 05:28:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:23.986 05:28:00 -- common/autotest_common.sh@10 -- # set +x 00:32:23.986 05:28:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:23.986 05:28:00 -- target/dif.sh@30 -- # for sub in "$@" 00:32:23.986 05:28:00 -- target/dif.sh@31 -- # create_subsystem 2 00:32:23.986 05:28:00 -- target/dif.sh@18 -- # local sub_id=2 00:32:23.986 05:28:00 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:32:23.986 05:28:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:23.986 05:28:00 -- common/autotest_common.sh@10 -- # set +x 
00:32:23.986 bdev_null2 00:32:23.986 05:28:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:23.986 05:28:00 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:32:23.986 05:28:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:23.986 05:28:00 -- common/autotest_common.sh@10 -- # set +x 00:32:23.986 05:28:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:23.986 05:28:00 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:32:23.986 05:28:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:23.986 05:28:00 -- common/autotest_common.sh@10 -- # set +x 00:32:23.986 05:28:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:23.986 05:28:00 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:23.986 05:28:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:23.986 05:28:00 -- common/autotest_common.sh@10 -- # set +x 00:32:23.986 05:28:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:23.986 05:28:00 -- target/dif.sh@112 -- # fio /dev/fd/62 00:32:23.986 05:28:00 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:32:23.986 05:28:00 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:32:23.986 05:28:00 -- nvmf/common.sh@521 -- # config=() 00:32:23.986 05:28:00 -- nvmf/common.sh@521 -- # local subsystem config 00:32:23.986 05:28:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:32:23.986 05:28:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:32:23.986 { 00:32:23.986 "params": { 00:32:23.986 "name": "Nvme$subsystem", 00:32:23.986 "trtype": "$TEST_TRANSPORT", 00:32:23.986 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:23.986 "adrfam": "ipv4", 00:32:23.986 "trsvcid": "$NVMF_PORT", 00:32:23.986 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:23.986 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:32:23.986 "hdgst": ${hdgst:-false}, 00:32:23.986 "ddgst": ${ddgst:-false} 00:32:23.986 }, 00:32:23.986 "method": "bdev_nvme_attach_controller" 00:32:23.986 } 00:32:23.986 EOF 00:32:23.986 )") 00:32:23.986 05:28:00 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:23.986 05:28:00 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:23.986 05:28:00 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:32:23.986 05:28:00 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:23.986 05:28:00 -- target/dif.sh@82 -- # gen_fio_conf 00:32:23.986 05:28:00 -- common/autotest_common.sh@1325 -- # local sanitizers 00:32:23.986 05:28:00 -- target/dif.sh@54 -- # local file 00:32:23.986 05:28:00 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:23.986 05:28:00 -- common/autotest_common.sh@1327 -- # shift 00:32:23.986 05:28:00 -- target/dif.sh@56 -- # cat 00:32:23.986 05:28:00 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:32:23.986 05:28:00 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:32:23.986 05:28:00 -- nvmf/common.sh@543 -- # cat 00:32:23.986 05:28:00 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:23.986 05:28:00 -- common/autotest_common.sh@1331 -- # grep libasan 00:32:23.986 05:28:00 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:32:23.986 05:28:00 -- target/dif.sh@72 -- # (( file = 1 )) 00:32:23.986 05:28:00 -- target/dif.sh@72 -- # (( file <= files )) 00:32:23.986 05:28:00 -- target/dif.sh@73 -- # cat 00:32:23.986 05:28:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:32:23.986 05:28:00 -- nvmf/common.sh@543 -- # config+=("$(cat 
<<-EOF 00:32:23.986 { 00:32:23.986 "params": { 00:32:23.986 "name": "Nvme$subsystem", 00:32:23.986 "trtype": "$TEST_TRANSPORT", 00:32:23.986 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:23.986 "adrfam": "ipv4", 00:32:23.986 "trsvcid": "$NVMF_PORT", 00:32:23.986 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:23.986 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:23.986 "hdgst": ${hdgst:-false}, 00:32:23.986 "ddgst": ${ddgst:-false} 00:32:23.986 }, 00:32:23.986 "method": "bdev_nvme_attach_controller" 00:32:23.986 } 00:32:23.986 EOF 00:32:23.986 )") 00:32:23.986 05:28:00 -- nvmf/common.sh@543 -- # cat 00:32:23.986 05:28:00 -- target/dif.sh@72 -- # (( file++ )) 00:32:23.986 05:28:00 -- target/dif.sh@72 -- # (( file <= files )) 00:32:23.986 05:28:00 -- target/dif.sh@73 -- # cat 00:32:23.986 05:28:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:32:23.987 05:28:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:32:23.987 { 00:32:23.987 "params": { 00:32:23.987 "name": "Nvme$subsystem", 00:32:23.987 "trtype": "$TEST_TRANSPORT", 00:32:23.987 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:23.987 "adrfam": "ipv4", 00:32:23.987 "trsvcid": "$NVMF_PORT", 00:32:23.987 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:23.987 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:23.987 "hdgst": ${hdgst:-false}, 00:32:23.987 "ddgst": ${ddgst:-false} 00:32:23.987 }, 00:32:23.987 "method": "bdev_nvme_attach_controller" 00:32:23.987 } 00:32:23.987 EOF 00:32:23.987 )") 00:32:23.987 05:28:00 -- target/dif.sh@72 -- # (( file++ )) 00:32:23.987 05:28:00 -- target/dif.sh@72 -- # (( file <= files )) 00:32:23.987 05:28:00 -- nvmf/common.sh@543 -- # cat 00:32:23.987 05:28:00 -- nvmf/common.sh@545 -- # jq . 
00:32:23.987 05:28:00 -- nvmf/common.sh@546 -- # IFS=, 00:32:23.987 05:28:00 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:32:23.987 "params": { 00:32:23.987 "name": "Nvme0", 00:32:23.987 "trtype": "tcp", 00:32:23.987 "traddr": "10.0.0.2", 00:32:23.987 "adrfam": "ipv4", 00:32:23.987 "trsvcid": "4420", 00:32:23.987 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:23.987 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:23.987 "hdgst": false, 00:32:23.987 "ddgst": false 00:32:23.987 }, 00:32:23.987 "method": "bdev_nvme_attach_controller" 00:32:23.987 },{ 00:32:23.987 "params": { 00:32:23.987 "name": "Nvme1", 00:32:23.987 "trtype": "tcp", 00:32:23.987 "traddr": "10.0.0.2", 00:32:23.987 "adrfam": "ipv4", 00:32:23.987 "trsvcid": "4420", 00:32:23.987 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:23.987 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:23.987 "hdgst": false, 00:32:23.987 "ddgst": false 00:32:23.987 }, 00:32:23.987 "method": "bdev_nvme_attach_controller" 00:32:23.987 },{ 00:32:23.987 "params": { 00:32:23.987 "name": "Nvme2", 00:32:23.987 "trtype": "tcp", 00:32:23.987 "traddr": "10.0.0.2", 00:32:23.987 "adrfam": "ipv4", 00:32:23.987 "trsvcid": "4420", 00:32:23.987 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:23.987 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:32:23.987 "hdgst": false, 00:32:23.987 "ddgst": false 00:32:23.987 }, 00:32:23.987 "method": "bdev_nvme_attach_controller" 00:32:23.987 }' 00:32:23.987 05:28:00 -- common/autotest_common.sh@1331 -- # asan_lib= 00:32:23.987 05:28:00 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:32:23.987 05:28:00 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:32:23.987 05:28:00 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:23.987 05:28:00 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:32:23.987 05:28:00 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:32:23.987 05:28:00 -- 
common/autotest_common.sh@1331 -- # asan_lib= 00:32:23.987 05:28:00 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:32:23.987 05:28:00 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:23.987 05:28:00 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:23.987 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:23.987 ... 00:32:23.987 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:23.987 ... 00:32:23.987 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:23.987 ... 00:32:23.987 fio-3.35 00:32:23.987 Starting 24 threads 00:32:23.987 EAL: No free 2048 kB hugepages reported on node 1 00:32:36.190 00:32:36.190 filename0: (groupid=0, jobs=1): err= 0: pid=2035243: Wed Apr 24 05:28:11 2024 00:32:36.190 read: IOPS=431, BW=1727KiB/s (1768kB/s)(16.9MiB/10006msec) 00:32:36.190 slat (usec): min=8, max=116, avg=35.37, stdev=19.03 00:32:36.190 clat (msec): min=23, max=223, avg=36.79, stdev=23.31 00:32:36.190 lat (msec): min=23, max=223, avg=36.82, stdev=23.31 00:32:36.190 clat percentiles (msec): 00:32:36.190 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:32:36.190 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:36.190 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:32:36.190 | 99.00th=[ 199], 99.50th=[ 201], 99.90th=[ 224], 99.95th=[ 224], 00:32:36.190 | 99.99th=[ 224] 00:32:36.190 bw ( KiB/s): min= 256, max= 2032, per=4.15%, avg=1717.89, stdev=521.20, samples=19 00:32:36.190 iops : min= 64, max= 508, avg=429.47, stdev=130.30, samples=19 00:32:36.190 lat (msec) : 50=97.78%, 250=2.22% 00:32:36.190 cpu : usr=92.71%, sys=3.73%, ctx=278, majf=0, minf=33 00:32:36.190 
IO depths : 1=3.4%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.1%, 32=0.0%, >=64=0.0% 00:32:36.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.190 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.190 issued rwts: total=4320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:36.190 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:36.190 filename0: (groupid=0, jobs=1): err= 0: pid=2035244: Wed Apr 24 05:28:11 2024 00:32:36.190 read: IOPS=434, BW=1736KiB/s (1778kB/s)(17.0MiB/10025msec) 00:32:36.190 slat (usec): min=3, max=130, avg=34.38, stdev=28.27 00:32:36.190 clat (msec): min=28, max=208, avg=36.54, stdev=22.42 00:32:36.190 lat (msec): min=28, max=208, avg=36.57, stdev=22.43 00:32:36.190 clat percentiles (msec): 00:32:36.190 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:32:36.190 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:36.190 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:32:36.190 | 99.00th=[ 199], 99.50th=[ 201], 99.90th=[ 209], 99.95th=[ 209], 00:32:36.190 | 99.99th=[ 209] 00:32:36.190 bw ( KiB/s): min= 384, max= 1920, per=4.19%, avg=1734.40, stdev=470.71, samples=20 00:32:36.190 iops : min= 96, max= 480, avg=433.60, stdev=117.68, samples=20 00:32:36.190 lat (msec) : 50=97.43%, 100=0.74%, 250=1.84% 00:32:36.190 cpu : usr=97.53%, sys=1.71%, ctx=21, majf=0, minf=31 00:32:36.190 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:36.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.190 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.190 issued rwts: total=4352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:36.190 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:36.190 filename0: (groupid=0, jobs=1): err= 0: pid=2035245: Wed Apr 24 05:28:11 2024 00:32:36.190 read: IOPS=431, BW=1727KiB/s (1769kB/s)(16.9MiB/10005msec) 00:32:36.190 slat 
(usec): min=3, max=127, avg=32.98, stdev=14.48 00:32:36.190 clat (msec): min=31, max=269, avg=36.77, stdev=22.95 00:32:36.190 lat (msec): min=32, max=269, avg=36.81, stdev=22.95 00:32:36.190 clat percentiles (msec): 00:32:36.190 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:32:36.190 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:36.190 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:32:36.190 | 99.00th=[ 201], 99.50th=[ 203], 99.90th=[ 203], 99.95th=[ 232], 00:32:36.190 | 99.99th=[ 271] 00:32:36.190 bw ( KiB/s): min= 256, max= 1920, per=4.15%, avg=1717.89, stdev=501.79, samples=19 00:32:36.190 iops : min= 64, max= 480, avg=429.47, stdev=125.45, samples=19 00:32:36.190 lat (msec) : 50=97.78%, 250=2.18%, 500=0.05% 00:32:36.190 cpu : usr=97.49%, sys=1.67%, ctx=43, majf=0, minf=15 00:32:36.190 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:36.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.190 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.190 issued rwts: total=4320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:36.190 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:36.190 filename0: (groupid=0, jobs=1): err= 0: pid=2035246: Wed Apr 24 05:28:11 2024 00:32:36.190 read: IOPS=431, BW=1728KiB/s (1769kB/s)(16.9MiB/10002msec) 00:32:36.190 slat (nsec): min=3569, max=62075, avg=29136.05, stdev=6628.88 00:32:36.190 clat (msec): min=23, max=284, avg=36.79, stdev=24.27 00:32:36.190 lat (msec): min=23, max=284, avg=36.81, stdev=24.27 00:32:36.190 clat percentiles (msec): 00:32:36.190 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:32:36.190 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:36.190 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:32:36.190 | 99.00th=[ 199], 99.50th=[ 201], 99.90th=[ 284], 99.95th=[ 284], 00:32:36.190 | 99.99th=[ 284] 00:32:36.190 
bw ( KiB/s): min= 256, max= 1920, per=4.15%, avg=1717.89, stdev=519.62, samples=19 00:32:36.190 iops : min= 64, max= 480, avg=429.47, stdev=129.90, samples=19 00:32:36.190 lat (msec) : 50=97.78%, 100=0.05%, 250=1.81%, 500=0.37% 00:32:36.190 cpu : usr=97.98%, sys=1.64%, ctx=21, majf=0, minf=20 00:32:36.190 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:36.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.190 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.190 issued rwts: total=4320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:36.190 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:36.190 filename0: (groupid=0, jobs=1): err= 0: pid=2035247: Wed Apr 24 05:28:11 2024 00:32:36.190 read: IOPS=431, BW=1726KiB/s (1768kB/s)(16.9MiB/10011msec) 00:32:36.190 slat (usec): min=7, max=187, avg=38.88, stdev=25.50 00:32:36.190 clat (msec): min=23, max=223, avg=36.78, stdev=23.30 00:32:36.190 lat (msec): min=23, max=223, avg=36.82, stdev=23.30 00:32:36.190 clat percentiles (msec): 00:32:36.190 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:32:36.190 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:36.190 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:32:36.190 | 99.00th=[ 199], 99.50th=[ 201], 99.90th=[ 224], 99.95th=[ 224], 00:32:36.190 | 99.99th=[ 224] 00:32:36.190 bw ( KiB/s): min= 256, max= 1920, per=4.17%, avg=1727.20, stdev=507.52, samples=20 00:32:36.190 iops : min= 64, max= 480, avg=431.80, stdev=126.88, samples=20 00:32:36.190 lat (msec) : 50=97.78%, 250=2.22% 00:32:36.190 cpu : usr=94.71%, sys=2.81%, ctx=157, majf=0, minf=23 00:32:36.190 IO depths : 1=0.3%, 2=6.5%, 4=25.0%, 8=56.0%, 16=12.2%, 32=0.0%, >=64=0.0% 00:32:36.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.190 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.190 issued rwts: 
total=4320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:36.190 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:36.190 filename0: (groupid=0, jobs=1): err= 0: pid=2035248: Wed Apr 24 05:28:11 2024 00:32:36.190 read: IOPS=433, BW=1733KiB/s (1774kB/s)(16.9MiB/10010msec) 00:32:36.190 slat (nsec): min=8099, max=83867, avg=28215.33, stdev=11668.92 00:32:36.190 clat (msec): min=10, max=273, avg=36.71, stdev=24.62 00:32:36.190 lat (msec): min=10, max=273, avg=36.74, stdev=24.62 00:32:36.190 clat percentiles (msec): 00:32:36.190 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:32:36.190 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:36.190 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:32:36.190 | 99.00th=[ 199], 99.50th=[ 201], 99.90th=[ 275], 99.95th=[ 275], 00:32:36.190 | 99.99th=[ 275] 00:32:36.190 bw ( KiB/s): min= 256, max= 1936, per=4.17%, avg=1728.00, stdev=507.85, samples=20 00:32:36.190 iops : min= 64, max= 484, avg=432.00, stdev=126.96, samples=20 00:32:36.190 lat (msec) : 20=0.42%, 50=97.37%, 100=0.37%, 250=1.48%, 500=0.37% 00:32:36.190 cpu : usr=98.06%, sys=1.38%, ctx=62, majf=0, minf=20 00:32:36.190 IO depths : 1=0.2%, 2=6.4%, 4=25.0%, 8=56.1%, 16=12.3%, 32=0.0%, >=64=0.0% 00:32:36.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.190 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.190 issued rwts: total=4336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:36.190 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:36.190 filename0: (groupid=0, jobs=1): err= 0: pid=2035249: Wed Apr 24 05:28:11 2024 00:32:36.190 read: IOPS=433, BW=1733KiB/s (1774kB/s)(16.9MiB/10009msec) 00:32:36.190 slat (usec): min=9, max=123, avg=29.15, stdev=12.14 00:32:36.190 clat (msec): min=12, max=331, avg=36.66, stdev=24.70 00:32:36.190 lat (msec): min=12, max=331, avg=36.69, stdev=24.70 00:32:36.190 clat percentiles (msec): 00:32:36.190 | 
1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:32:36.190 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:36.190 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:32:36.190 | 99.00th=[ 199], 99.50th=[ 201], 99.90th=[ 271], 99.95th=[ 275], 00:32:36.190 | 99.99th=[ 334] 00:32:36.190 bw ( KiB/s): min= 256, max= 1920, per=4.17%, avg=1728.00, stdev=507.77, samples=20 00:32:36.190 iops : min= 64, max= 480, avg=432.00, stdev=126.94, samples=20 00:32:36.191 lat (msec) : 20=0.37%, 50=97.42%, 100=0.37%, 250=1.48%, 500=0.37% 00:32:36.191 cpu : usr=98.02%, sys=1.38%, ctx=37, majf=0, minf=25 00:32:36.191 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:36.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.191 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.191 issued rwts: total=4336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:36.191 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:36.191 filename0: (groupid=0, jobs=1): err= 0: pid=2035250: Wed Apr 24 05:28:11 2024 00:32:36.191 read: IOPS=431, BW=1727KiB/s (1768kB/s)(16.9MiB/10006msec) 00:32:36.191 slat (usec): min=8, max=131, avg=42.50, stdev=19.90 00:32:36.191 clat (msec): min=23, max=224, avg=36.68, stdev=23.31 00:32:36.191 lat (msec): min=23, max=224, avg=36.72, stdev=23.31 00:32:36.191 clat percentiles (msec): 00:32:36.191 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:32:36.191 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:36.191 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:32:36.191 | 99.00th=[ 197], 99.50th=[ 201], 99.90th=[ 224], 99.95th=[ 224], 00:32:36.191 | 99.99th=[ 224] 00:32:36.191 bw ( KiB/s): min= 256, max= 1920, per=4.15%, avg=1717.89, stdev=519.62, samples=19 00:32:36.191 iops : min= 64, max= 480, avg=429.47, stdev=129.90, samples=19 00:32:36.191 lat (msec) : 50=97.78%, 250=2.22% 00:32:36.191 
cpu : usr=91.58%, sys=4.21%, ctx=81, majf=0, minf=20 00:32:36.191 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:36.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.191 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.191 issued rwts: total=4320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:36.191 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:36.191 filename1: (groupid=0, jobs=1): err= 0: pid=2035251: Wed Apr 24 05:28:11 2024 00:32:36.191 read: IOPS=432, BW=1732KiB/s (1773kB/s)(16.9MiB/10015msec) 00:32:36.191 slat (nsec): min=8145, max=64011, avg=28381.13, stdev=8101.36 00:32:36.191 clat (msec): min=18, max=223, avg=36.68, stdev=22.91 00:32:36.191 lat (msec): min=18, max=223, avg=36.71, stdev=22.91 00:32:36.191 clat percentiles (msec): 00:32:36.191 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:32:36.191 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:36.191 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:32:36.191 | 99.00th=[ 201], 99.50th=[ 203], 99.90th=[ 224], 99.95th=[ 224], 00:32:36.191 | 99.99th=[ 224] 00:32:36.191 bw ( KiB/s): min= 256, max= 1920, per=4.17%, avg=1728.00, stdev=490.50, samples=20 00:32:36.191 iops : min= 64, max= 480, avg=432.00, stdev=122.62, samples=20 00:32:36.191 lat (msec) : 20=0.37%, 50=97.42%, 250=2.21% 00:32:36.191 cpu : usr=94.22%, sys=3.11%, ctx=107, majf=0, minf=20 00:32:36.191 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:36.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.191 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.191 issued rwts: total=4336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:36.191 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:36.191 filename1: (groupid=0, jobs=1): err= 0: pid=2035252: Wed Apr 24 05:28:11 2024 00:32:36.191 
read: IOPS=431, BW=1728KiB/s (1769kB/s)(16.9MiB/10001msec) 00:32:36.191 slat (nsec): min=9330, max=89387, avg=31415.03, stdev=12995.91 00:32:36.191 clat (msec): min=21, max=278, avg=36.75, stdev=24.83 00:32:36.191 lat (msec): min=21, max=278, avg=36.78, stdev=24.83 00:32:36.191 clat percentiles (msec): 00:32:36.191 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:32:36.191 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:36.191 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:32:36.191 | 99.00th=[ 199], 99.50th=[ 201], 99.90th=[ 279], 99.95th=[ 279], 00:32:36.191 | 99.99th=[ 279] 00:32:36.191 bw ( KiB/s): min= 256, max= 1920, per=4.15%, avg=1717.89, stdev=519.62, samples=19 00:32:36.191 iops : min= 64, max= 480, avg=429.47, stdev=129.90, samples=19 00:32:36.191 lat (msec) : 50=97.78%, 100=0.37%, 250=1.44%, 500=0.42% 00:32:36.191 cpu : usr=97.08%, sys=1.99%, ctx=30, majf=0, minf=17 00:32:36.191 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:36.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.191 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.191 issued rwts: total=4320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:36.191 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:36.191 filename1: (groupid=0, jobs=1): err= 0: pid=2035253: Wed Apr 24 05:28:11 2024 00:32:36.191 read: IOPS=434, BW=1737KiB/s (1779kB/s)(17.0MiB/10019msec) 00:32:36.191 slat (nsec): min=7991, max=96839, avg=23394.14, stdev=12583.78 00:32:36.191 clat (msec): min=32, max=208, avg=36.61, stdev=21.42 00:32:36.191 lat (msec): min=32, max=208, avg=36.64, stdev=21.42 00:32:36.191 clat percentiles (msec): 00:32:36.191 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:32:36.191 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:36.191 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:32:36.191 | 
99.00th=[ 197], 99.50th=[ 201], 99.90th=[ 209], 99.95th=[ 209], 00:32:36.191 | 99.99th=[ 209] 00:32:36.191 bw ( KiB/s): min= 256, max= 1920, per=4.19%, avg=1734.40, stdev=472.54, samples=20 00:32:36.191 iops : min= 64, max= 480, avg=433.60, stdev=118.14, samples=20 00:32:36.191 lat (msec) : 50=97.43%, 100=0.37%, 250=2.21% 00:32:36.191 cpu : usr=98.04%, sys=1.46%, ctx=21, majf=0, minf=25 00:32:36.191 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:36.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.191 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.191 issued rwts: total=4352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:36.191 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:36.191 filename1: (groupid=0, jobs=1): err= 0: pid=2035254: Wed Apr 24 05:28:11 2024 00:32:36.191 read: IOPS=432, BW=1729KiB/s (1771kB/s)(16.9MiB/10013msec) 00:32:36.191 slat (usec): min=8, max=191, avg=29.91, stdev=13.05 00:32:36.191 clat (msec): min=12, max=336, avg=36.72, stdev=25.04 00:32:36.191 lat (msec): min=12, max=336, avg=36.75, stdev=25.04 00:32:36.191 clat percentiles (msec): 00:32:36.191 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:32:36.191 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:36.191 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:32:36.191 | 99.00th=[ 199], 99.50th=[ 201], 99.90th=[ 279], 99.95th=[ 279], 00:32:36.191 | 99.99th=[ 338] 00:32:36.191 bw ( KiB/s): min= 256, max= 1920, per=4.17%, avg=1728.00, stdev=507.77, samples=20 00:32:36.191 iops : min= 64, max= 480, avg=432.00, stdev=126.94, samples=20 00:32:36.191 lat (msec) : 20=0.21%, 50=97.62%, 100=0.32%, 250=1.43%, 500=0.42% 00:32:36.191 cpu : usr=95.52%, sys=2.69%, ctx=102, majf=0, minf=25 00:32:36.191 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:36.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:32:36.191 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.191 issued rwts: total=4329,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:36.191 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:36.191 filename1: (groupid=0, jobs=1): err= 0: pid=2035255: Wed Apr 24 05:28:11 2024 00:32:36.191 read: IOPS=431, BW=1727KiB/s (1768kB/s)(16.9MiB/10006msec) 00:32:36.191 slat (usec): min=7, max=113, avg=33.34, stdev=13.23 00:32:36.191 clat (msec): min=23, max=288, avg=36.76, stdev=23.58 00:32:36.191 lat (msec): min=23, max=288, avg=36.79, stdev=23.58 00:32:36.191 clat percentiles (msec): 00:32:36.191 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:32:36.191 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:36.191 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:32:36.191 | 99.00th=[ 199], 99.50th=[ 201], 99.90th=[ 224], 99.95th=[ 224], 00:32:36.191 | 99.99th=[ 288] 00:32:36.191 bw ( KiB/s): min= 256, max= 2048, per=4.15%, avg=1717.89, stdev=521.36, samples=19 00:32:36.191 iops : min= 64, max= 512, avg=429.47, stdev=130.34, samples=19 00:32:36.191 lat (msec) : 50=97.78%, 100=0.05%, 250=2.13%, 500=0.05% 00:32:36.191 cpu : usr=98.13%, sys=1.47%, ctx=14, majf=0, minf=29 00:32:36.191 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:32:36.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.191 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.191 issued rwts: total=4320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:36.191 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:36.191 filename1: (groupid=0, jobs=1): err= 0: pid=2035256: Wed Apr 24 05:28:11 2024 00:32:36.191 read: IOPS=436, BW=1745KiB/s (1787kB/s)(17.1MiB/10009msec) 00:32:36.191 slat (usec): min=7, max=672, avg=26.18, stdev=19.52 00:32:36.191 clat (msec): min=10, max=404, avg=36.49, stdev=26.60 00:32:36.191 
lat (msec): min=10, max=404, avg=36.52, stdev=26.60 00:32:36.191 clat percentiles (msec): 00:32:36.191 | 1.00th=[ 22], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:32:36.191 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:36.191 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:32:36.191 | 99.00th=[ 201], 99.50th=[ 203], 99.90th=[ 334], 99.95th=[ 334], 00:32:36.191 | 99.99th=[ 405] 00:32:36.191 bw ( KiB/s): min= 128, max= 2048, per=4.20%, avg=1740.00, stdev=518.43, samples=20 00:32:36.191 iops : min= 32, max= 512, avg=435.00, stdev=129.61, samples=20 00:32:36.191 lat (msec) : 20=0.73%, 50=97.43%, 250=1.47%, 500=0.37% 00:32:36.192 cpu : usr=96.86%, sys=1.86%, ctx=51, majf=0, minf=28 00:32:36.192 IO depths : 1=3.1%, 2=6.4%, 4=13.4%, 8=65.0%, 16=12.1%, 32=0.0%, >=64=0.0% 00:32:36.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.192 complete : 0=0.0%, 4=91.8%, 8=5.1%, 16=3.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.192 issued rwts: total=4366,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:36.192 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:36.192 filename1: (groupid=0, jobs=1): err= 0: pid=2035257: Wed Apr 24 05:28:11 2024 00:32:36.192 read: IOPS=431, BW=1727KiB/s (1768kB/s)(16.9MiB/10007msec) 00:32:36.192 slat (usec): min=4, max=118, avg=40.88, stdev=22.30 00:32:36.192 clat (msec): min=31, max=203, avg=36.71, stdev=22.85 00:32:36.192 lat (msec): min=31, max=203, avg=36.75, stdev=22.86 00:32:36.192 clat percentiles (msec): 00:32:36.192 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:32:36.192 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:36.192 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:32:36.192 | 99.00th=[ 201], 99.50th=[ 203], 99.90th=[ 203], 99.95th=[ 203], 00:32:36.192 | 99.99th=[ 203] 00:32:36.192 bw ( KiB/s): min= 256, max= 1920, per=4.16%, avg=1721.60, stdev=488.69, samples=20 00:32:36.192 iops : min= 64, max= 480, 
avg=430.40, stdev=122.17, samples=20 00:32:36.192 lat (msec) : 50=97.78%, 250=2.22% 00:32:36.192 cpu : usr=97.38%, sys=1.75%, ctx=125, majf=0, minf=18 00:32:36.192 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:36.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.192 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.192 issued rwts: total=4320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:36.192 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:36.192 filename1: (groupid=0, jobs=1): err= 0: pid=2035258: Wed Apr 24 05:28:11 2024 00:32:36.192 read: IOPS=431, BW=1727KiB/s (1768kB/s)(16.9MiB/10006msec) 00:32:36.192 slat (nsec): min=7269, max=99895, avg=28552.65, stdev=13321.71 00:32:36.192 clat (msec): min=23, max=264, avg=36.82, stdev=23.48 00:32:36.192 lat (msec): min=23, max=264, avg=36.85, stdev=23.49 00:32:36.192 clat percentiles (msec): 00:32:36.192 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:32:36.192 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:36.192 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:32:36.192 | 99.00th=[ 197], 99.50th=[ 201], 99.90th=[ 224], 99.95th=[ 264], 00:32:36.192 | 99.99th=[ 266] 00:32:36.192 bw ( KiB/s): min= 256, max= 1920, per=4.15%, avg=1717.89, stdev=519.62, samples=19 00:32:36.192 iops : min= 64, max= 480, avg=429.47, stdev=129.90, samples=19 00:32:36.192 lat (msec) : 50=97.78%, 250=2.13%, 500=0.09% 00:32:36.192 cpu : usr=98.21%, sys=1.33%, ctx=39, majf=0, minf=24 00:32:36.192 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:36.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.192 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.192 issued rwts: total=4320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:36.192 latency : target=0, window=0, percentile=100.00%, depth=16 
00:32:36.192 filename2: (groupid=0, jobs=1): err= 0: pid=2035259: Wed Apr 24 05:28:11 2024 00:32:36.192 read: IOPS=431, BW=1726KiB/s (1768kB/s)(16.9MiB/10009msec) 00:32:36.192 slat (usec): min=4, max=113, avg=38.45, stdev=21.43 00:32:36.192 clat (msec): min=23, max=274, avg=36.73, stdev=23.12 00:32:36.192 lat (msec): min=23, max=274, avg=36.77, stdev=23.12 00:32:36.192 clat percentiles (msec): 00:32:36.192 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:32:36.192 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:36.192 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:32:36.192 | 99.00th=[ 201], 99.50th=[ 203], 99.90th=[ 203], 99.95th=[ 271], 00:32:36.192 | 99.99th=[ 275] 00:32:36.192 bw ( KiB/s): min= 256, max= 1920, per=4.16%, avg=1721.60, stdev=493.96, samples=20 00:32:36.192 iops : min= 64, max= 480, avg=430.40, stdev=123.49, samples=20 00:32:36.192 lat (msec) : 50=97.78%, 250=2.13%, 500=0.09% 00:32:36.192 cpu : usr=93.06%, sys=3.73%, ctx=178, majf=0, minf=19 00:32:36.192 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:36.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.192 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.192 issued rwts: total=4320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:36.192 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:36.192 filename2: (groupid=0, jobs=1): err= 0: pid=2035260: Wed Apr 24 05:28:11 2024 00:32:36.192 read: IOPS=432, BW=1732KiB/s (1773kB/s)(16.9MiB/10015msec) 00:32:36.192 slat (usec): min=8, max=124, avg=28.47, stdev= 8.14 00:32:36.192 clat (msec): min=18, max=265, avg=36.69, stdev=22.96 00:32:36.192 lat (msec): min=18, max=265, avg=36.72, stdev=22.96 00:32:36.192 clat percentiles (msec): 00:32:36.192 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:32:36.192 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:36.192 | 70.00th=[ 
34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:32:36.192 | 99.00th=[ 201], 99.50th=[ 203], 99.90th=[ 224], 99.95th=[ 224], 00:32:36.192 | 99.99th=[ 266] 00:32:36.192 bw ( KiB/s): min= 256, max= 1920, per=4.17%, avg=1728.00, stdev=490.50, samples=20 00:32:36.192 iops : min= 64, max= 480, avg=432.00, stdev=122.62, samples=20 00:32:36.192 lat (msec) : 20=0.37%, 50=97.42%, 250=2.17%, 500=0.05% 00:32:36.192 cpu : usr=91.03%, sys=4.54%, ctx=229, majf=0, minf=18 00:32:36.192 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:36.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.192 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.192 issued rwts: total=4336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:36.192 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:36.192 filename2: (groupid=0, jobs=1): err= 0: pid=2035261: Wed Apr 24 05:28:11 2024 00:32:36.192 read: IOPS=437, BW=1751KiB/s (1793kB/s)(17.1MiB/10013msec) 00:32:36.192 slat (usec): min=3, max=119, avg=25.47, stdev=20.87 00:32:36.192 clat (msec): min=2, max=267, avg=36.32, stdev=21.64 00:32:36.192 lat (msec): min=2, max=267, avg=36.35, stdev=21.64 00:32:36.192 clat percentiles (msec): 00:32:36.192 | 1.00th=[ 19], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:32:36.192 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:36.192 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:32:36.192 | 99.00th=[ 199], 99.50th=[ 201], 99.90th=[ 203], 99.95th=[ 211], 00:32:36.192 | 99.99th=[ 268] 00:32:36.192 bw ( KiB/s): min= 368, max= 1920, per=4.22%, avg=1747.20, stdev=424.10, samples=20 00:32:36.192 iops : min= 92, max= 480, avg=436.80, stdev=106.03, samples=20 00:32:36.192 lat (msec) : 4=0.73%, 20=0.36%, 50=96.72%, 100=0.05%, 250=2.10% 00:32:36.192 lat (msec) : 500=0.05% 00:32:36.192 cpu : usr=97.39%, sys=1.85%, ctx=162, majf=0, minf=37 00:32:36.192 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 
8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:32:36.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.192 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.192 issued rwts: total=4384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:36.192 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:36.192 filename2: (groupid=0, jobs=1): err= 0: pid=2035262: Wed Apr 24 05:28:11 2024 00:32:36.192 read: IOPS=431, BW=1727KiB/s (1768kB/s)(16.9MiB/10049msec) 00:32:36.192 slat (usec): min=7, max=139, avg=35.74, stdev=26.74 00:32:36.192 clat (msec): min=15, max=332, avg=36.82, stdev=26.43 00:32:36.192 lat (msec): min=15, max=332, avg=36.86, stdev=26.43 00:32:36.192 clat percentiles (msec): 00:32:36.192 | 1.00th=[ 27], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:32:36.192 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:36.192 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:32:36.192 | 99.00th=[ 201], 99.50th=[ 203], 99.90th=[ 334], 99.95th=[ 334], 00:32:36.192 | 99.99th=[ 334] 00:32:36.192 bw ( KiB/s): min= 128, max= 2000, per=4.18%, avg=1730.40, stdev=516.15, samples=20 00:32:36.192 iops : min= 32, max= 500, avg=432.60, stdev=129.04, samples=20 00:32:36.192 lat (msec) : 20=0.23%, 50=97.65%, 100=0.28%, 250=1.43%, 500=0.41% 00:32:36.192 cpu : usr=91.87%, sys=4.14%, ctx=235, majf=0, minf=26 00:32:36.192 IO depths : 1=0.1%, 2=1.8%, 4=6.9%, 8=74.4%, 16=16.9%, 32=0.0%, >=64=0.0% 00:32:36.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.192 complete : 0=0.0%, 4=90.8%, 8=7.7%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.192 issued rwts: total=4338,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:36.192 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:36.192 filename2: (groupid=0, jobs=1): err= 0: pid=2035263: Wed Apr 24 05:28:11 2024 00:32:36.192 read: IOPS=432, BW=1732KiB/s (1773kB/s)(16.9MiB/10010msec) 00:32:36.192 slat (nsec): 
min=5590, max=94822, avg=28507.70, stdev=11205.42 00:32:36.192 clat (msec): min=12, max=333, avg=36.73, stdev=24.84 00:32:36.192 lat (msec): min=12, max=333, avg=36.76, stdev=24.84 00:32:36.192 clat percentiles (msec): 00:32:36.192 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:32:36.192 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:36.192 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:32:36.192 | 99.00th=[ 199], 99.50th=[ 201], 99.90th=[ 275], 99.95th=[ 275], 00:32:36.192 | 99.99th=[ 334] 00:32:36.192 bw ( KiB/s): min= 240, max= 1936, per=4.17%, avg=1728.00, stdev=507.88, samples=20 00:32:36.192 iops : min= 60, max= 484, avg=432.00, stdev=126.97, samples=20 00:32:36.192 lat (msec) : 20=0.37%, 50=97.46%, 100=0.32%, 250=1.48%, 500=0.37% 00:32:36.192 cpu : usr=96.85%, sys=2.00%, ctx=24, majf=0, minf=24 00:32:36.192 IO depths : 1=0.1%, 2=6.3%, 4=25.0%, 8=56.2%, 16=12.5%, 32=0.0%, >=64=0.0% 00:32:36.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.192 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.192 issued rwts: total=4334,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:36.192 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:36.192 filename2: (groupid=0, jobs=1): err= 0: pid=2035264: Wed Apr 24 05:28:11 2024 00:32:36.192 read: IOPS=431, BW=1727KiB/s (1768kB/s)(16.9MiB/10006msec) 00:32:36.193 slat (usec): min=10, max=116, avg=33.98, stdev=18.28 00:32:36.193 clat (msec): min=23, max=262, avg=36.77, stdev=23.39 00:32:36.193 lat (msec): min=23, max=262, avg=36.80, stdev=23.39 00:32:36.193 clat percentiles (msec): 00:32:36.193 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:32:36.193 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:36.193 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:32:36.193 | 99.00th=[ 197], 99.50th=[ 201], 99.90th=[ 224], 99.95th=[ 224], 00:32:36.193 | 99.99th=[ 
262] 00:32:36.193 bw ( KiB/s): min= 256, max= 2048, per=4.15%, avg=1717.89, stdev=521.36, samples=19 00:32:36.193 iops : min= 64, max= 512, avg=429.47, stdev=130.34, samples=19 00:32:36.193 lat (msec) : 50=97.78%, 250=2.18%, 500=0.05% 00:32:36.193 cpu : usr=97.08%, sys=2.09%, ctx=207, majf=0, minf=28 00:32:36.193 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:36.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.193 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.193 issued rwts: total=4320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:36.193 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:36.193 filename2: (groupid=0, jobs=1): err= 0: pid=2035265: Wed Apr 24 05:28:11 2024 00:32:36.193 read: IOPS=433, BW=1732KiB/s (1774kB/s)(16.9MiB/10013msec) 00:32:36.193 slat (nsec): min=7924, max=91864, avg=25646.12, stdev=11875.27 00:32:36.193 clat (msec): min=12, max=277, avg=36.73, stdev=24.79 00:32:36.193 lat (msec): min=12, max=277, avg=36.75, stdev=24.79 00:32:36.193 clat percentiles (msec): 00:32:36.193 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:32:36.193 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:36.193 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:32:36.193 | 99.00th=[ 199], 99.50th=[ 201], 99.90th=[ 279], 99.95th=[ 279], 00:32:36.193 | 99.99th=[ 279] 00:32:36.193 bw ( KiB/s): min= 256, max= 1920, per=4.17%, avg=1728.00, stdev=507.77, samples=20 00:32:36.193 iops : min= 64, max= 480, avg=432.00, stdev=126.94, samples=20 00:32:36.193 lat (msec) : 20=0.37%, 50=97.42%, 100=0.37%, 250=1.43%, 500=0.42% 00:32:36.193 cpu : usr=97.51%, sys=1.69%, ctx=140, majf=0, minf=21 00:32:36.193 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:36.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.193 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.193 issued rwts: total=4336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:36.193 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:36.193 filename2: (groupid=0, jobs=1): err= 0: pid=2035266: Wed Apr 24 05:28:11 2024 00:32:36.193 read: IOPS=433, BW=1733KiB/s (1774kB/s)(16.9MiB/10009msec) 00:32:36.193 slat (usec): min=7, max=107, avg=31.41, stdev=11.65 00:32:36.193 clat (msec): min=14, max=272, avg=36.63, stdev=24.61 00:32:36.193 lat (msec): min=14, max=272, avg=36.66, stdev=24.60 00:32:36.193 clat percentiles (msec): 00:32:36.193 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:32:36.193 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:36.193 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:32:36.193 | 99.00th=[ 199], 99.50th=[ 201], 99.90th=[ 271], 99.95th=[ 271], 00:32:36.193 | 99.99th=[ 271] 00:32:36.193 bw ( KiB/s): min= 256, max= 1976, per=4.18%, avg=1730.80, stdev=509.04, samples=20 00:32:36.193 iops : min= 64, max= 494, avg=432.70, stdev=127.26, samples=20 00:32:36.193 lat (msec) : 20=0.37%, 50=97.79%, 250=1.43%, 500=0.42% 00:32:36.193 cpu : usr=97.87%, sys=1.54%, ctx=98, majf=0, minf=22 00:32:36.193 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:36.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.193 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.193 issued rwts: total=4336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:36.193 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:36.193 00:32:36.193 Run status group 0 (all jobs): 00:32:36.193 READ: bw=40.4MiB/s (42.4MB/s), 1726KiB/s-1751KiB/s (1768kB/s-1793kB/s), io=406MiB (426MB), run=10001-10049msec 00:32:36.193 05:28:12 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:32:36.193 05:28:12 -- target/dif.sh@43 -- # local sub 00:32:36.193 05:28:12 -- target/dif.sh@45 -- # for sub in "$@" 00:32:36.193 
05:28:12 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:36.193 05:28:12 -- target/dif.sh@36 -- # local sub_id=0 00:32:36.193 05:28:12 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:36.193 05:28:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:36.193 05:28:12 -- common/autotest_common.sh@10 -- # set +x 00:32:36.193 05:28:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:36.193 05:28:12 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:36.193 05:28:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:36.193 05:28:12 -- common/autotest_common.sh@10 -- # set +x 00:32:36.193 05:28:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:36.193 05:28:12 -- target/dif.sh@45 -- # for sub in "$@" 00:32:36.193 05:28:12 -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:36.193 05:28:12 -- target/dif.sh@36 -- # local sub_id=1 00:32:36.193 05:28:12 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:36.193 05:28:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:36.193 05:28:12 -- common/autotest_common.sh@10 -- # set +x 00:32:36.193 05:28:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:36.193 05:28:12 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:36.193 05:28:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:36.193 05:28:12 -- common/autotest_common.sh@10 -- # set +x 00:32:36.193 05:28:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:36.193 05:28:12 -- target/dif.sh@45 -- # for sub in "$@" 00:32:36.193 05:28:12 -- target/dif.sh@46 -- # destroy_subsystem 2 00:32:36.193 05:28:12 -- target/dif.sh@36 -- # local sub_id=2 00:32:36.193 05:28:12 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:36.193 05:28:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:36.193 05:28:12 -- common/autotest_common.sh@10 -- # set +x 00:32:36.193 05:28:12 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:36.193 05:28:12 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:32:36.193 05:28:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:36.193 05:28:12 -- common/autotest_common.sh@10 -- # set +x 00:32:36.193 05:28:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:36.193 05:28:12 -- target/dif.sh@115 -- # NULL_DIF=1 00:32:36.193 05:28:12 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:32:36.193 05:28:12 -- target/dif.sh@115 -- # numjobs=2 00:32:36.193 05:28:12 -- target/dif.sh@115 -- # iodepth=8 00:32:36.193 05:28:12 -- target/dif.sh@115 -- # runtime=5 00:32:36.193 05:28:12 -- target/dif.sh@115 -- # files=1 00:32:36.193 05:28:12 -- target/dif.sh@117 -- # create_subsystems 0 1 00:32:36.193 05:28:12 -- target/dif.sh@28 -- # local sub 00:32:36.193 05:28:12 -- target/dif.sh@30 -- # for sub in "$@" 00:32:36.193 05:28:12 -- target/dif.sh@31 -- # create_subsystem 0 00:32:36.193 05:28:12 -- target/dif.sh@18 -- # local sub_id=0 00:32:36.193 05:28:12 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:36.193 05:28:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:36.193 05:28:12 -- common/autotest_common.sh@10 -- # set +x 00:32:36.193 bdev_null0 00:32:36.193 05:28:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:36.193 05:28:12 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:36.193 05:28:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:36.193 05:28:12 -- common/autotest_common.sh@10 -- # set +x 00:32:36.193 05:28:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:36.193 05:28:12 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:36.193 05:28:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:36.193 05:28:12 -- common/autotest_common.sh@10 -- # set +x 00:32:36.193 
05:28:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:36.193 05:28:12 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:36.193 05:28:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:36.193 05:28:12 -- common/autotest_common.sh@10 -- # set +x 00:32:36.193 [2024-04-24 05:28:12.119785] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:36.193 05:28:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:36.193 05:28:12 -- target/dif.sh@30 -- # for sub in "$@" 00:32:36.193 05:28:12 -- target/dif.sh@31 -- # create_subsystem 1 00:32:36.193 05:28:12 -- target/dif.sh@18 -- # local sub_id=1 00:32:36.193 05:28:12 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:36.193 05:28:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:36.193 05:28:12 -- common/autotest_common.sh@10 -- # set +x 00:32:36.193 bdev_null1 00:32:36.193 05:28:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:36.193 05:28:12 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:36.193 05:28:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:36.193 05:28:12 -- common/autotest_common.sh@10 -- # set +x 00:32:36.193 05:28:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:36.193 05:28:12 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:36.193 05:28:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:36.193 05:28:12 -- common/autotest_common.sh@10 -- # set +x 00:32:36.193 05:28:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:36.193 05:28:12 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:36.193 05:28:12 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:32:36.193 05:28:12 -- common/autotest_common.sh@10 -- # set +x 00:32:36.193 05:28:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:36.193 05:28:12 -- target/dif.sh@118 -- # fio /dev/fd/62 00:32:36.193 05:28:12 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:32:36.193 05:28:12 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:36.193 05:28:12 -- nvmf/common.sh@521 -- # config=() 00:32:36.193 05:28:12 -- nvmf/common.sh@521 -- # local subsystem config 00:32:36.193 05:28:12 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:32:36.193 05:28:12 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:32:36.193 { 00:32:36.193 "params": { 00:32:36.193 "name": "Nvme$subsystem", 00:32:36.194 "trtype": "$TEST_TRANSPORT", 00:32:36.194 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:36.194 "adrfam": "ipv4", 00:32:36.194 "trsvcid": "$NVMF_PORT", 00:32:36.194 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:36.194 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:36.194 "hdgst": ${hdgst:-false}, 00:32:36.194 "ddgst": ${ddgst:-false} 00:32:36.194 }, 00:32:36.194 "method": "bdev_nvme_attach_controller" 00:32:36.194 } 00:32:36.194 EOF 00:32:36.194 )") 00:32:36.194 05:28:12 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:36.194 05:28:12 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:36.194 05:28:12 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:32:36.194 05:28:12 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:36.194 05:28:12 -- target/dif.sh@82 -- # gen_fio_conf 00:32:36.194 05:28:12 -- common/autotest_common.sh@1325 -- # local sanitizers 00:32:36.194 05:28:12 -- target/dif.sh@54 -- # local file 00:32:36.194 05:28:12 -- common/autotest_common.sh@1326 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:36.194 05:28:12 -- common/autotest_common.sh@1327 -- # shift 00:32:36.194 05:28:12 -- target/dif.sh@56 -- # cat 00:32:36.194 05:28:12 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:32:36.194 05:28:12 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:32:36.194 05:28:12 -- nvmf/common.sh@543 -- # cat 00:32:36.194 05:28:12 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:36.194 05:28:12 -- common/autotest_common.sh@1331 -- # grep libasan 00:32:36.194 05:28:12 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:32:36.194 05:28:12 -- target/dif.sh@72 -- # (( file = 1 )) 00:32:36.194 05:28:12 -- target/dif.sh@72 -- # (( file <= files )) 00:32:36.194 05:28:12 -- target/dif.sh@73 -- # cat 00:32:36.194 05:28:12 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:32:36.194 05:28:12 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:32:36.194 { 00:32:36.194 "params": { 00:32:36.194 "name": "Nvme$subsystem", 00:32:36.194 "trtype": "$TEST_TRANSPORT", 00:32:36.194 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:36.194 "adrfam": "ipv4", 00:32:36.194 "trsvcid": "$NVMF_PORT", 00:32:36.194 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:36.194 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:36.194 "hdgst": ${hdgst:-false}, 00:32:36.194 "ddgst": ${ddgst:-false} 00:32:36.194 }, 00:32:36.194 "method": "bdev_nvme_attach_controller" 00:32:36.194 } 00:32:36.194 EOF 00:32:36.194 )") 00:32:36.194 05:28:12 -- nvmf/common.sh@543 -- # cat 00:32:36.194 05:28:12 -- target/dif.sh@72 -- # (( file++ )) 00:32:36.194 05:28:12 -- target/dif.sh@72 -- # (( file <= files )) 00:32:36.194 05:28:12 -- nvmf/common.sh@545 -- # jq . 
00:32:36.194 05:28:12 -- nvmf/common.sh@546 -- # IFS=, 00:32:36.194 05:28:12 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:32:36.194 "params": { 00:32:36.194 "name": "Nvme0", 00:32:36.194 "trtype": "tcp", 00:32:36.194 "traddr": "10.0.0.2", 00:32:36.194 "adrfam": "ipv4", 00:32:36.194 "trsvcid": "4420", 00:32:36.194 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:36.194 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:36.194 "hdgst": false, 00:32:36.194 "ddgst": false 00:32:36.194 }, 00:32:36.194 "method": "bdev_nvme_attach_controller" 00:32:36.194 },{ 00:32:36.194 "params": { 00:32:36.194 "name": "Nvme1", 00:32:36.194 "trtype": "tcp", 00:32:36.194 "traddr": "10.0.0.2", 00:32:36.194 "adrfam": "ipv4", 00:32:36.194 "trsvcid": "4420", 00:32:36.194 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:36.194 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:36.194 "hdgst": false, 00:32:36.194 "ddgst": false 00:32:36.194 }, 00:32:36.194 "method": "bdev_nvme_attach_controller" 00:32:36.194 }' 00:32:36.194 05:28:12 -- common/autotest_common.sh@1331 -- # asan_lib= 00:32:36.194 05:28:12 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:32:36.194 05:28:12 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:32:36.194 05:28:12 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:36.194 05:28:12 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:32:36.194 05:28:12 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:32:36.194 05:28:12 -- common/autotest_common.sh@1331 -- # asan_lib= 00:32:36.194 05:28:12 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:32:36.194 05:28:12 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:36.194 05:28:12 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:36.194 filename0: (g=0): rw=randread, bs=(R) 
8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:36.194 ... 00:32:36.194 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:36.194 ... 00:32:36.194 fio-3.35 00:32:36.194 Starting 4 threads 00:32:36.194 EAL: No free 2048 kB hugepages reported on node 1 00:32:41.491 00:32:41.491 filename0: (groupid=0, jobs=1): err= 0: pid=2036650: Wed Apr 24 05:28:18 2024 00:32:41.491 read: IOPS=1804, BW=14.1MiB/s (14.8MB/s)(70.5MiB/5001msec) 00:32:41.491 slat (nsec): min=3977, max=64010, avg=16101.34, stdev=9275.99 00:32:41.491 clat (usec): min=884, max=8294, avg=4381.41, stdev=683.46 00:32:41.491 lat (usec): min=901, max=8308, avg=4397.51, stdev=683.06 00:32:41.491 clat percentiles (usec): 00:32:41.491 | 1.00th=[ 2933], 5.00th=[ 3523], 10.00th=[ 3752], 20.00th=[ 3982], 00:32:41.491 | 30.00th=[ 4113], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4424], 00:32:41.491 | 70.00th=[ 4490], 80.00th=[ 4621], 90.00th=[ 5080], 95.00th=[ 5800], 00:32:41.491 | 99.00th=[ 6849], 99.50th=[ 7111], 99.90th=[ 7898], 99.95th=[ 7963], 00:32:41.491 | 99.99th=[ 8291] 00:32:41.491 bw ( KiB/s): min=13456, max=15120, per=24.79%, avg=14465.78, stdev=589.26, samples=9 00:32:41.491 iops : min= 1682, max= 1890, avg=1808.22, stdev=73.66, samples=9 00:32:41.491 lat (usec) : 1000=0.04% 00:32:41.491 lat (msec) : 2=0.17%, 4=20.76%, 10=79.02% 00:32:41.491 cpu : usr=94.38%, sys=5.14%, ctx=11, majf=0, minf=77 00:32:41.491 IO depths : 1=0.1%, 2=6.8%, 4=65.5%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:41.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:41.491 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:41.491 issued rwts: total=9025,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:41.491 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:41.491 filename0: (groupid=0, jobs=1): err= 0: pid=2036651: Wed Apr 24 05:28:18 2024 00:32:41.492 
read: IOPS=1828, BW=14.3MiB/s (15.0MB/s)(71.5MiB/5002msec) 00:32:41.492 slat (nsec): min=3917, max=60108, avg=18284.19, stdev=8259.67 00:32:41.492 clat (usec): min=857, max=9571, avg=4314.89, stdev=690.47 00:32:41.492 lat (usec): min=887, max=9583, avg=4333.17, stdev=690.13 00:32:41.492 clat percentiles (usec): 00:32:41.492 | 1.00th=[ 2868], 5.00th=[ 3359], 10.00th=[ 3687], 20.00th=[ 3916], 00:32:41.492 | 30.00th=[ 4080], 40.00th=[ 4146], 50.00th=[ 4293], 60.00th=[ 4359], 00:32:41.492 | 70.00th=[ 4490], 80.00th=[ 4555], 90.00th=[ 4817], 95.00th=[ 5604], 00:32:41.492 | 99.00th=[ 6915], 99.50th=[ 7308], 99.90th=[ 8291], 99.95th=[ 9241], 00:32:41.492 | 99.99th=[ 9634] 00:32:41.492 bw ( KiB/s): min=13354, max=15456, per=25.06%, avg=14623.40, stdev=667.36, samples=10 00:32:41.492 iops : min= 1669, max= 1932, avg=1827.90, stdev=83.47, samples=10 00:32:41.492 lat (usec) : 1000=0.02% 00:32:41.492 lat (msec) : 2=0.20%, 4=24.11%, 10=75.67% 00:32:41.492 cpu : usr=94.26%, sys=5.14%, ctx=17, majf=0, minf=38 00:32:41.492 IO depths : 1=0.3%, 2=8.4%, 4=64.5%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:41.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:41.492 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:41.492 issued rwts: total=9146,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:41.492 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:41.492 filename1: (groupid=0, jobs=1): err= 0: pid=2036652: Wed Apr 24 05:28:18 2024 00:32:41.492 read: IOPS=1822, BW=14.2MiB/s (14.9MB/s)(71.2MiB/5001msec) 00:32:41.492 slat (nsec): min=3793, max=62557, avg=15364.86, stdev=9224.43 00:32:41.492 clat (usec): min=720, max=9837, avg=4337.98, stdev=619.81 00:32:41.492 lat (usec): min=737, max=9849, avg=4353.35, stdev=619.89 00:32:41.492 clat percentiles (usec): 00:32:41.492 | 1.00th=[ 2868], 5.00th=[ 3490], 10.00th=[ 3720], 20.00th=[ 3982], 00:32:41.492 | 30.00th=[ 4113], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4424], 
00:32:41.492 | 70.00th=[ 4490], 80.00th=[ 4621], 90.00th=[ 4883], 95.00th=[ 5473], 00:32:41.492 | 99.00th=[ 6718], 99.50th=[ 7046], 99.90th=[ 7439], 99.95th=[ 7701], 00:32:41.492 | 99.99th=[ 9896] 00:32:41.492 bw ( KiB/s): min=13632, max=15072, per=25.04%, avg=14614.67, stdev=486.11, samples=9 00:32:41.492 iops : min= 1704, max= 1884, avg=1826.78, stdev=60.83, samples=9 00:32:41.492 lat (usec) : 750=0.01%, 1000=0.01% 00:32:41.492 lat (msec) : 2=0.24%, 4=20.91%, 10=78.83% 00:32:41.492 cpu : usr=94.28%, sys=5.26%, ctx=12, majf=0, minf=42 00:32:41.492 IO depths : 1=0.3%, 2=9.4%, 4=62.4%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:41.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:41.492 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:41.492 issued rwts: total=9115,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:41.492 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:41.492 filename1: (groupid=0, jobs=1): err= 0: pid=2036653: Wed Apr 24 05:28:18 2024 00:32:41.492 read: IOPS=1840, BW=14.4MiB/s (15.1MB/s)(71.9MiB/5003msec) 00:32:41.492 slat (nsec): min=3847, max=64362, avg=14247.89, stdev=8489.29 00:32:41.492 clat (usec): min=979, max=10164, avg=4300.43, stdev=749.94 00:32:41.492 lat (usec): min=997, max=10187, avg=4314.68, stdev=750.21 00:32:41.492 clat percentiles (usec): 00:32:41.492 | 1.00th=[ 2704], 5.00th=[ 3228], 10.00th=[ 3523], 20.00th=[ 3851], 00:32:41.492 | 30.00th=[ 4047], 40.00th=[ 4146], 50.00th=[ 4293], 60.00th=[ 4359], 00:32:41.492 | 70.00th=[ 4490], 80.00th=[ 4555], 90.00th=[ 4948], 95.00th=[ 5800], 00:32:41.492 | 99.00th=[ 6783], 99.50th=[ 7111], 99.90th=[ 8455], 99.95th=[10159], 00:32:41.492 | 99.99th=[10159] 00:32:41.492 bw ( KiB/s): min=13744, max=15664, per=25.22%, avg=14716.80, stdev=603.67, samples=10 00:32:41.492 iops : min= 1718, max= 1958, avg=1839.60, stdev=75.46, samples=10 00:32:41.492 lat (usec) : 1000=0.01% 00:32:41.492 lat (msec) : 2=0.32%, 4=27.16%, 10=72.43%, 
20=0.09% 00:32:41.492 cpu : usr=94.34%, sys=5.18%, ctx=8, majf=0, minf=15 00:32:41.492 IO depths : 1=0.1%, 2=8.8%, 4=63.8%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:41.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:41.492 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:41.492 issued rwts: total=9206,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:41.492 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:41.492 00:32:41.492 Run status group 0 (all jobs): 00:32:41.492 READ: bw=57.0MiB/s (59.8MB/s), 14.1MiB/s-14.4MiB/s (14.8MB/s-15.1MB/s), io=285MiB (299MB), run=5001-5003msec 00:32:41.492 05:28:18 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:32:41.492 05:28:18 -- target/dif.sh@43 -- # local sub 00:32:41.492 05:28:18 -- target/dif.sh@45 -- # for sub in "$@" 00:32:41.492 05:28:18 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:41.492 05:28:18 -- target/dif.sh@36 -- # local sub_id=0 00:32:41.492 05:28:18 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:41.492 05:28:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:41.492 05:28:18 -- common/autotest_common.sh@10 -- # set +x 00:32:41.492 05:28:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:41.492 05:28:18 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:41.492 05:28:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:41.492 05:28:18 -- common/autotest_common.sh@10 -- # set +x 00:32:41.492 05:28:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:41.492 05:28:18 -- target/dif.sh@45 -- # for sub in "$@" 00:32:41.492 05:28:18 -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:41.492 05:28:18 -- target/dif.sh@36 -- # local sub_id=1 00:32:41.492 05:28:18 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:41.492 05:28:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:41.492 05:28:18 -- 
common/autotest_common.sh@10 -- # set +x 00:32:41.492 05:28:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:41.492 05:28:18 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:41.492 05:28:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:41.492 05:28:18 -- common/autotest_common.sh@10 -- # set +x 00:32:41.492 05:28:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:41.492 00:32:41.492 real 0m24.472s 00:32:41.492 user 4m28.058s 00:32:41.492 sys 0m8.926s 00:32:41.492 05:28:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:41.492 05:28:18 -- common/autotest_common.sh@10 -- # set +x 00:32:41.492 ************************************ 00:32:41.492 END TEST fio_dif_rand_params 00:32:41.492 ************************************ 00:32:41.492 05:28:18 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:32:41.492 05:28:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:41.492 05:28:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:41.492 05:28:18 -- common/autotest_common.sh@10 -- # set +x 00:32:41.753 ************************************ 00:32:41.753 START TEST fio_dif_digest 00:32:41.753 ************************************ 00:32:41.753 05:28:18 -- common/autotest_common.sh@1111 -- # fio_dif_digest 00:32:41.753 05:28:18 -- target/dif.sh@123 -- # local NULL_DIF 00:32:41.753 05:28:18 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:32:41.753 05:28:18 -- target/dif.sh@125 -- # local hdgst ddgst 00:32:41.753 05:28:18 -- target/dif.sh@127 -- # NULL_DIF=3 00:32:41.753 05:28:18 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:32:41.753 05:28:18 -- target/dif.sh@127 -- # numjobs=3 00:32:41.753 05:28:18 -- target/dif.sh@127 -- # iodepth=3 00:32:41.753 05:28:18 -- target/dif.sh@127 -- # runtime=10 00:32:41.753 05:28:18 -- target/dif.sh@128 -- # hdgst=true 00:32:41.753 05:28:18 -- target/dif.sh@128 -- # ddgst=true 00:32:41.753 05:28:18 -- target/dif.sh@130 -- # 
create_subsystems 0 00:32:41.753 05:28:18 -- target/dif.sh@28 -- # local sub 00:32:41.753 05:28:18 -- target/dif.sh@30 -- # for sub in "$@" 00:32:41.753 05:28:18 -- target/dif.sh@31 -- # create_subsystem 0 00:32:41.753 05:28:18 -- target/dif.sh@18 -- # local sub_id=0 00:32:41.753 05:28:18 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:41.753 05:28:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:41.753 05:28:18 -- common/autotest_common.sh@10 -- # set +x 00:32:41.753 bdev_null0 00:32:41.753 05:28:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:41.753 05:28:18 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:41.753 05:28:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:41.753 05:28:18 -- common/autotest_common.sh@10 -- # set +x 00:32:41.753 05:28:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:41.753 05:28:18 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:41.753 05:28:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:41.753 05:28:18 -- common/autotest_common.sh@10 -- # set +x 00:32:41.753 05:28:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:41.753 05:28:18 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:41.753 05:28:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:41.753 05:28:18 -- common/autotest_common.sh@10 -- # set +x 00:32:41.753 [2024-04-24 05:28:18.780800] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:41.753 05:28:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:41.753 05:28:18 -- target/dif.sh@131 -- # fio /dev/fd/62 00:32:41.753 05:28:18 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:32:41.753 05:28:18 -- target/dif.sh@51 -- # 
gen_nvmf_target_json 0 00:32:41.753 05:28:18 -- nvmf/common.sh@521 -- # config=() 00:32:41.753 05:28:18 -- nvmf/common.sh@521 -- # local subsystem config 00:32:41.753 05:28:18 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:32:41.753 05:28:18 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:32:41.753 { 00:32:41.753 "params": { 00:32:41.753 "name": "Nvme$subsystem", 00:32:41.753 "trtype": "$TEST_TRANSPORT", 00:32:41.753 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:41.753 "adrfam": "ipv4", 00:32:41.753 "trsvcid": "$NVMF_PORT", 00:32:41.753 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:41.753 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:41.753 "hdgst": ${hdgst:-false}, 00:32:41.753 "ddgst": ${ddgst:-false} 00:32:41.753 }, 00:32:41.753 "method": "bdev_nvme_attach_controller" 00:32:41.753 } 00:32:41.753 EOF 00:32:41.753 )") 00:32:41.754 05:28:18 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:41.754 05:28:18 -- target/dif.sh@82 -- # gen_fio_conf 00:32:41.754 05:28:18 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:41.754 05:28:18 -- target/dif.sh@54 -- # local file 00:32:41.754 05:28:18 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:32:41.754 05:28:18 -- target/dif.sh@56 -- # cat 00:32:41.754 05:28:18 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:41.754 05:28:18 -- common/autotest_common.sh@1325 -- # local sanitizers 00:32:41.754 05:28:18 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:41.754 05:28:18 -- common/autotest_common.sh@1327 -- # shift 00:32:41.754 05:28:18 -- nvmf/common.sh@543 -- # cat 00:32:41.754 05:28:18 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:32:41.754 05:28:18 -- 
common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:32:41.754 05:28:18 -- target/dif.sh@72 -- # (( file = 1 )) 00:32:41.754 05:28:18 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:41.754 05:28:18 -- target/dif.sh@72 -- # (( file <= files )) 00:32:41.754 05:28:18 -- common/autotest_common.sh@1331 -- # grep libasan 00:32:41.754 05:28:18 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:32:41.754 05:28:18 -- nvmf/common.sh@545 -- # jq . 00:32:41.754 05:28:18 -- nvmf/common.sh@546 -- # IFS=, 00:32:41.754 05:28:18 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:32:41.754 "params": { 00:32:41.754 "name": "Nvme0", 00:32:41.754 "trtype": "tcp", 00:32:41.754 "traddr": "10.0.0.2", 00:32:41.754 "adrfam": "ipv4", 00:32:41.754 "trsvcid": "4420", 00:32:41.754 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:41.754 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:41.754 "hdgst": true, 00:32:41.754 "ddgst": true 00:32:41.754 }, 00:32:41.754 "method": "bdev_nvme_attach_controller" 00:32:41.754 }' 00:32:41.754 05:28:18 -- common/autotest_common.sh@1331 -- # asan_lib= 00:32:41.754 05:28:18 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:32:41.754 05:28:18 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:32:41.754 05:28:18 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:41.754 05:28:18 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:32:41.754 05:28:18 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:32:41.754 05:28:18 -- common/autotest_common.sh@1331 -- # asan_lib= 00:32:41.754 05:28:18 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:32:41.754 05:28:18 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:41.754 05:28:18 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:42.011 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:42.011 ... 00:32:42.011 fio-3.35 00:32:42.011 Starting 3 threads 00:32:42.011 EAL: No free 2048 kB hugepages reported on node 1 00:32:54.294 00:32:54.294 filename0: (groupid=0, jobs=1): err= 0: pid=2037414: Wed Apr 24 05:28:29 2024 00:32:54.294 read: IOPS=203, BW=25.4MiB/s (26.7MB/s)(255MiB/10045msec) 00:32:54.294 slat (nsec): min=4177, max=45465, avg=15834.75, stdev=5199.75 00:32:54.294 clat (usec): min=8998, max=56147, avg=14709.58, stdev=2784.34 00:32:54.294 lat (usec): min=9012, max=56159, avg=14725.42, stdev=2784.91 00:32:54.294 clat percentiles (usec): 00:32:54.294 | 1.00th=[ 9765], 5.00th=[10683], 10.00th=[11207], 20.00th=[12911], 00:32:54.294 | 30.00th=[13960], 40.00th=[14484], 50.00th=[15008], 60.00th=[15401], 00:32:54.294 | 70.00th=[15926], 80.00th=[16319], 90.00th=[16909], 95.00th=[17433], 00:32:54.294 | 99.00th=[18482], 99.50th=[19530], 99.90th=[55313], 99.95th=[56361], 00:32:54.294 | 99.99th=[56361] 00:32:54.294 bw ( KiB/s): min=23808, max=29184, per=35.95%, avg=26124.80, stdev=1368.49, samples=20 00:32:54.294 iops : min= 186, max= 228, avg=204.10, stdev=10.69, samples=20 00:32:54.294 lat (msec) : 10=1.32%, 20=98.43%, 50=0.05%, 100=0.20% 00:32:54.294 cpu : usr=93.20%, sys=6.33%, ctx=23, majf=0, minf=161 00:32:54.294 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:54.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.294 issued rwts: total=2043,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:54.294 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:54.294 filename0: (groupid=0, jobs=1): err= 0: pid=2037415: Wed Apr 24 05:28:29 2024 00:32:54.294 read: IOPS=197, BW=24.7MiB/s (25.9MB/s)(248MiB/10006msec) 
00:32:54.294 slat (nsec): min=5032, max=71845, avg=15555.51, stdev=5203.66 00:32:54.294 clat (usec): min=5967, max=60906, avg=15140.45, stdev=3715.53 00:32:54.294 lat (usec): min=5979, max=60919, avg=15156.01, stdev=3715.92 00:32:54.294 clat percentiles (usec): 00:32:54.294 | 1.00th=[ 9372], 5.00th=[10290], 10.00th=[11076], 20.00th=[13304], 00:32:54.294 | 30.00th=[14353], 40.00th=[14877], 50.00th=[15401], 60.00th=[15795], 00:32:54.294 | 70.00th=[16319], 80.00th=[16909], 90.00th=[17433], 95.00th=[17957], 00:32:54.294 | 99.00th=[19268], 99.50th=[20317], 99.90th=[60031], 99.95th=[61080], 00:32:54.294 | 99.99th=[61080] 00:32:54.294 bw ( KiB/s): min=22016, max=28416, per=34.82%, avg=25307.95, stdev=1582.53, samples=20 00:32:54.294 iops : min= 172, max= 222, avg=197.70, stdev=12.38, samples=20 00:32:54.294 lat (msec) : 10=3.13%, 20=96.21%, 50=0.20%, 100=0.45% 00:32:54.294 cpu : usr=93.20%, sys=6.35%, ctx=20, majf=0, minf=132 00:32:54.294 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:54.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.294 issued rwts: total=1980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:54.294 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:54.294 filename0: (groupid=0, jobs=1): err= 0: pid=2037416: Wed Apr 24 05:28:29 2024 00:32:54.294 read: IOPS=167, BW=20.9MiB/s (21.9MB/s)(210MiB/10041msec) 00:32:54.294 slat (nsec): min=4561, max=55576, avg=20540.04, stdev=5258.96 00:32:54.294 clat (usec): min=9525, max=60229, avg=17902.61, stdev=9413.22 00:32:54.294 lat (usec): min=9544, max=60252, avg=17923.15, stdev=9413.19 00:32:54.294 clat percentiles (usec): 00:32:54.294 | 1.00th=[10683], 5.00th=[13304], 10.00th=[13960], 20.00th=[14615], 00:32:54.294 | 30.00th=[15008], 40.00th=[15401], 50.00th=[15795], 60.00th=[16188], 00:32:54.294 | 70.00th=[16581], 80.00th=[17171], 90.00th=[17957], 
95.00th=[54264], 00:32:54.294 | 99.00th=[57934], 99.50th=[58459], 99.90th=[59507], 99.95th=[60031], 00:32:54.294 | 99.99th=[60031] 00:32:54.294 bw ( KiB/s): min=18176, max=25088, per=29.54%, avg=21465.60, stdev=1997.31, samples=20 00:32:54.294 iops : min= 142, max= 196, avg=167.70, stdev=15.60, samples=20 00:32:54.294 lat (msec) : 10=0.36%, 20=93.87%, 50=0.36%, 100=5.42% 00:32:54.294 cpu : usr=94.52%, sys=5.00%, ctx=23, majf=0, minf=130 00:32:54.294 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:54.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.294 issued rwts: total=1680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:54.294 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:54.294 00:32:54.294 Run status group 0 (all jobs): 00:32:54.294 READ: bw=71.0MiB/s (74.4MB/s), 20.9MiB/s-25.4MiB/s (21.9MB/s-26.7MB/s), io=713MiB (748MB), run=10006-10045msec 00:32:54.294 05:28:29 -- target/dif.sh@132 -- # destroy_subsystems 0 00:32:54.294 05:28:29 -- target/dif.sh@43 -- # local sub 00:32:54.294 05:28:29 -- target/dif.sh@45 -- # for sub in "$@" 00:32:54.294 05:28:29 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:54.294 05:28:29 -- target/dif.sh@36 -- # local sub_id=0 00:32:54.294 05:28:29 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:54.294 05:28:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:54.294 05:28:29 -- common/autotest_common.sh@10 -- # set +x 00:32:54.294 05:28:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:54.294 05:28:29 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:54.294 05:28:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:54.294 05:28:29 -- common/autotest_common.sh@10 -- # set +x 00:32:54.294 05:28:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:54.294 00:32:54.294 real 0m11.098s 
00:32:54.294 user 0m29.302s 00:32:54.294 sys 0m2.027s 00:32:54.294 05:28:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:54.294 05:28:29 -- common/autotest_common.sh@10 -- # set +x 00:32:54.294 ************************************ 00:32:54.294 END TEST fio_dif_digest 00:32:54.294 ************************************ 00:32:54.294 05:28:29 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:32:54.294 05:28:29 -- target/dif.sh@147 -- # nvmftestfini 00:32:54.294 05:28:29 -- nvmf/common.sh@477 -- # nvmfcleanup 00:32:54.294 05:28:29 -- nvmf/common.sh@117 -- # sync 00:32:54.294 05:28:29 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:54.294 05:28:29 -- nvmf/common.sh@120 -- # set +e 00:32:54.294 05:28:29 -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:54.294 05:28:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:54.294 rmmod nvme_tcp 00:32:54.294 rmmod nvme_fabrics 00:32:54.294 rmmod nvme_keyring 00:32:54.294 05:28:29 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:54.294 05:28:29 -- nvmf/common.sh@124 -- # set -e 00:32:54.294 05:28:29 -- nvmf/common.sh@125 -- # return 0 00:32:54.294 05:28:29 -- nvmf/common.sh@478 -- # '[' -n 2031329 ']' 00:32:54.294 05:28:29 -- nvmf/common.sh@479 -- # killprocess 2031329 00:32:54.294 05:28:29 -- common/autotest_common.sh@936 -- # '[' -z 2031329 ']' 00:32:54.294 05:28:29 -- common/autotest_common.sh@940 -- # kill -0 2031329 00:32:54.294 05:28:29 -- common/autotest_common.sh@941 -- # uname 00:32:54.294 05:28:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:54.294 05:28:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2031329 00:32:54.294 05:28:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:32:54.294 05:28:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:32:54.295 05:28:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2031329' 00:32:54.295 killing process with pid 2031329 00:32:54.295 05:28:29 
-- common/autotest_common.sh@955 -- # kill 2031329 00:32:54.295 05:28:29 -- common/autotest_common.sh@960 -- # wait 2031329 00:32:54.295 05:28:30 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:32:54.295 05:28:30 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:54.295 Waiting for block devices as requested 00:32:54.295 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:54.295 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:54.295 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:54.552 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:54.552 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:54.552 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:54.552 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:54.810 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:54.810 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:54.810 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:54.810 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:55.069 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:55.069 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:55.069 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:55.069 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:55.069 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:55.337 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:55.337 05:28:32 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:32:55.337 05:28:32 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:32:55.337 05:28:32 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:55.337 05:28:32 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:55.337 05:28:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:55.337 05:28:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:55.338 05:28:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:57.873 05:28:34 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
00:32:57.873 00:32:57.873 real 1m6.905s 00:32:57.873 user 6m25.079s 00:32:57.873 sys 0m20.378s 00:32:57.873 05:28:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:57.873 05:28:34 -- common/autotest_common.sh@10 -- # set +x 00:32:57.873 ************************************ 00:32:57.873 END TEST nvmf_dif 00:32:57.873 ************************************ 00:32:57.873 05:28:34 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:57.873 05:28:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:57.873 05:28:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:57.873 05:28:34 -- common/autotest_common.sh@10 -- # set +x 00:32:57.873 ************************************ 00:32:57.873 START TEST nvmf_abort_qd_sizes 00:32:57.873 ************************************ 00:32:57.873 05:28:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:57.873 * Looking for test storage... 
00:32:57.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:57.873 05:28:34 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:57.873 05:28:34 -- nvmf/common.sh@7 -- # uname -s 00:32:57.873 05:28:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:57.873 05:28:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:57.873 05:28:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:57.873 05:28:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:57.873 05:28:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:57.873 05:28:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:57.873 05:28:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:57.873 05:28:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:57.873 05:28:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:57.873 05:28:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:57.873 05:28:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:57.873 05:28:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:57.873 05:28:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:57.873 05:28:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:57.873 05:28:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:57.873 05:28:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:57.873 05:28:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:57.873 05:28:34 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:57.873 05:28:34 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:57.873 05:28:34 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:57.873 05:28:34 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.873 05:28:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.873 05:28:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.873 05:28:34 -- paths/export.sh@5 -- # export PATH 00:32:57.873 05:28:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.873 05:28:34 -- nvmf/common.sh@47 -- # : 0 00:32:57.873 05:28:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:57.873 05:28:34 -- nvmf/common.sh@49 -- # 
build_nvmf_app_args 00:32:57.873 05:28:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:57.873 05:28:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:57.873 05:28:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:57.873 05:28:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:57.873 05:28:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:57.873 05:28:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:57.873 05:28:34 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:32:57.873 05:28:34 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:32:57.873 05:28:34 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:57.873 05:28:34 -- nvmf/common.sh@437 -- # prepare_net_devs 00:32:57.873 05:28:34 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:32:57.873 05:28:34 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:32:57.873 05:28:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:57.873 05:28:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:57.873 05:28:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:57.873 05:28:34 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:32:57.873 05:28:34 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:32:57.873 05:28:34 -- nvmf/common.sh@285 -- # xtrace_disable 00:32:57.873 05:28:34 -- common/autotest_common.sh@10 -- # set +x 00:32:59.776 05:28:36 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:59.776 05:28:36 -- nvmf/common.sh@291 -- # pci_devs=() 00:32:59.776 05:28:36 -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:59.776 05:28:36 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:59.776 05:28:36 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:59.776 05:28:36 -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:59.776 05:28:36 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:59.776 05:28:36 -- nvmf/common.sh@295 -- # net_devs=() 00:32:59.776 05:28:36 -- nvmf/common.sh@295 -- # local 
-ga net_devs 00:32:59.776 05:28:36 -- nvmf/common.sh@296 -- # e810=() 00:32:59.776 05:28:36 -- nvmf/common.sh@296 -- # local -ga e810 00:32:59.776 05:28:36 -- nvmf/common.sh@297 -- # x722=() 00:32:59.776 05:28:36 -- nvmf/common.sh@297 -- # local -ga x722 00:32:59.776 05:28:36 -- nvmf/common.sh@298 -- # mlx=() 00:32:59.776 05:28:36 -- nvmf/common.sh@298 -- # local -ga mlx 00:32:59.776 05:28:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:59.776 05:28:36 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:59.776 05:28:36 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:59.776 05:28:36 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:59.776 05:28:36 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:59.776 05:28:36 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:59.776 05:28:36 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:59.776 05:28:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:59.776 05:28:36 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:59.776 05:28:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:59.776 05:28:36 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:59.776 05:28:36 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:59.776 05:28:36 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:59.776 05:28:36 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:59.776 05:28:36 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:59.776 05:28:36 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:59.776 05:28:36 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:59.776 05:28:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:59.776 05:28:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:59.776 Found 
0000:0a:00.0 (0x8086 - 0x159b) 00:32:59.776 05:28:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:59.776 05:28:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:59.776 05:28:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:59.776 05:28:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:59.777 05:28:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:59.777 05:28:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:59.777 05:28:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:59.777 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:59.777 05:28:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:59.777 05:28:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:59.777 05:28:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:59.777 05:28:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:59.777 05:28:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:59.777 05:28:36 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:59.777 05:28:36 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:59.777 05:28:36 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:59.777 05:28:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:59.777 05:28:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:59.777 05:28:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:32:59.777 05:28:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:59.777 05:28:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:59.777 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:59.777 05:28:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:32:59.777 05:28:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:59.777 05:28:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:59.777 05:28:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:32:59.777 
05:28:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:59.777 05:28:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:59.777 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:59.777 05:28:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:32:59.777 05:28:36 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:32:59.777 05:28:36 -- nvmf/common.sh@403 -- # is_hw=yes 00:32:59.777 05:28:36 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:32:59.777 05:28:36 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:32:59.777 05:28:36 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:32:59.777 05:28:36 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:59.777 05:28:36 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:59.777 05:28:36 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:59.777 05:28:36 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:59.777 05:28:36 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:59.777 05:28:36 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:59.777 05:28:36 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:59.777 05:28:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:59.777 05:28:36 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:59.777 05:28:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:59.777 05:28:36 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:59.777 05:28:36 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:59.777 05:28:36 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:59.777 05:28:36 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:59.777 05:28:36 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:59.777 05:28:36 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:59.777 05:28:36 -- nvmf/common.sh@260 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:59.777 05:28:36 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:59.777 05:28:36 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:59.777 05:28:36 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:59.777 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:59.777 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:32:59.777 00:32:59.777 --- 10.0.0.2 ping statistics --- 00:32:59.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:59.777 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:32:59.777 05:28:36 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:59.777 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:59.777 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:32:59.777 00:32:59.777 --- 10.0.0.1 ping statistics --- 00:32:59.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:59.777 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:32:59.777 05:28:36 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:59.777 05:28:36 -- nvmf/common.sh@411 -- # return 0 00:32:59.777 05:28:36 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:32:59.777 05:28:36 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:00.713 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:00.713 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:00.713 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:00.713 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:00.713 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:00.713 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:00.713 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:00.713 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:00.713 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:00.713 0000:80:04.6 (8086 0e26): 
ioatdma -> vfio-pci 00:33:00.713 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:00.973 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:00.973 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:00.973 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:00.973 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:00.973 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:01.912 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:33:01.912 05:28:39 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:01.912 05:28:39 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:33:01.912 05:28:39 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:33:01.912 05:28:39 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:01.912 05:28:39 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:33:01.912 05:28:39 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:33:01.912 05:28:39 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:33:01.912 05:28:39 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:33:01.912 05:28:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:33:01.912 05:28:39 -- common/autotest_common.sh@10 -- # set +x 00:33:01.912 05:28:39 -- nvmf/common.sh@470 -- # nvmfpid=2042319 00:33:01.912 05:28:39 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:33:01.912 05:28:39 -- nvmf/common.sh@471 -- # waitforlisten 2042319 00:33:01.912 05:28:39 -- common/autotest_common.sh@817 -- # '[' -z 2042319 ']' 00:33:01.912 05:28:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:01.912 05:28:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:01.912 05:28:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:01.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:01.912 05:28:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:01.912 05:28:39 -- common/autotest_common.sh@10 -- # set +x 00:33:01.912 [2024-04-24 05:28:39.130518] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:33:01.912 [2024-04-24 05:28:39.130598] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:01.912 EAL: No free 2048 kB hugepages reported on node 1 00:33:01.912 [2024-04-24 05:28:39.168825] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:02.170 [2024-04-24 05:28:39.196529] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:02.170 [2024-04-24 05:28:39.284626] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:02.170 [2024-04-24 05:28:39.284707] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:02.170 [2024-04-24 05:28:39.284722] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:02.170 [2024-04-24 05:28:39.284734] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:02.170 [2024-04-24 05:28:39.284745] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:02.170 [2024-04-24 05:28:39.284801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:02.170 [2024-04-24 05:28:39.284829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:02.170 [2024-04-24 05:28:39.284888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:02.171 [2024-04-24 05:28:39.284890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:02.171 05:28:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:02.171 05:28:39 -- common/autotest_common.sh@850 -- # return 0 00:33:02.171 05:28:39 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:33:02.171 05:28:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:33:02.171 05:28:39 -- common/autotest_common.sh@10 -- # set +x 00:33:02.428 05:28:39 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:02.429 05:28:39 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:33:02.429 05:28:39 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:33:02.429 05:28:39 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:33:02.429 05:28:39 -- scripts/common.sh@309 -- # local bdf bdfs 00:33:02.429 05:28:39 -- scripts/common.sh@310 -- # local nvmes 00:33:02.429 05:28:39 -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:33:02.429 05:28:39 -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:33:02.429 05:28:39 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:33:02.429 05:28:39 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:33:02.429 05:28:39 -- scripts/common.sh@320 -- # uname -s 00:33:02.429 05:28:39 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:33:02.429 05:28:39 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:33:02.429 05:28:39 -- scripts/common.sh@325 -- # (( 1 )) 00:33:02.429 05:28:39 -- 
scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:33:02.429 05:28:39 -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:33:02.429 05:28:39 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:33:02.429 05:28:39 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:33:02.429 05:28:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:02.429 05:28:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:02.429 05:28:39 -- common/autotest_common.sh@10 -- # set +x 00:33:02.429 ************************************ 00:33:02.429 START TEST spdk_target_abort 00:33:02.429 ************************************ 00:33:02.429 05:28:39 -- common/autotest_common.sh@1111 -- # spdk_target 00:33:02.429 05:28:39 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:33:02.429 05:28:39 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:33:02.429 05:28:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:02.429 05:28:39 -- common/autotest_common.sh@10 -- # set +x 00:33:05.720 spdk_targetn1 00:33:05.720 05:28:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:05.720 05:28:42 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:05.720 05:28:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:05.720 05:28:42 -- common/autotest_common.sh@10 -- # set +x 00:33:05.720 [2024-04-24 05:28:42.395581] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:05.720 05:28:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:05.720 05:28:42 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:33:05.720 05:28:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:05.720 05:28:42 -- common/autotest_common.sh@10 -- # set +x 00:33:05.720 05:28:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:05.720 05:28:42 -- 
target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:33:05.720 05:28:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:05.720 05:28:42 -- common/autotest_common.sh@10 -- # set +x 00:33:05.720 05:28:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:05.720 05:28:42 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:33:05.720 05:28:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:05.720 05:28:42 -- common/autotest_common.sh@10 -- # set +x 00:33:05.720 [2024-04-24 05:28:42.427877] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:05.720 05:28:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:05.720 05:28:42 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:33:05.720 05:28:42 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:05.720 05:28:42 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:05.720 05:28:42 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:33:05.720 05:28:42 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:05.720 05:28:42 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:05.720 05:28:42 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:05.720 05:28:42 -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:05.720 05:28:42 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:05.720 05:28:42 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:05.720 05:28:42 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:05.720 05:28:42 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:05.720 05:28:42 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:05.720 05:28:42 -- target/abort_qd_sizes.sh@28 -- # for r in 
trtype adrfam traddr trsvcid subnqn 00:33:05.720 05:28:42 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:33:05.720 05:28:42 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:05.720 05:28:42 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:05.720 05:28:42 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:05.720 05:28:42 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:05.720 05:28:42 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:05.720 05:28:42 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:05.720 EAL: No free 2048 kB hugepages reported on node 1 00:33:09.003 Initializing NVMe Controllers 00:33:09.003 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:09.003 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:09.003 Initialization complete. Launching workers. 
00:33:09.003 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11111, failed: 0 00:33:09.003 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1285, failed to submit 9826 00:33:09.003 success 741, unsuccess 544, failed 0 00:33:09.003 05:28:45 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:09.003 05:28:45 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:09.003 EAL: No free 2048 kB hugepages reported on node 1 00:33:12.286 [2024-04-24 05:28:48.832657] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8fcdb0 is same with the state(5) to be set 00:33:12.286 [2024-04-24 05:28:48.832710] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8fcdb0 is same with the state(5) to be set 00:33:12.286 Initializing NVMe Controllers 00:33:12.286 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:12.286 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:12.286 Initialization complete. Launching workers. 
00:33:12.286 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8652, failed: 0 00:33:12.286 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1210, failed to submit 7442 00:33:12.286 success 366, unsuccess 844, failed 0 00:33:12.286 05:28:48 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:12.286 05:28:48 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:12.286 EAL: No free 2048 kB hugepages reported on node 1 00:33:14.815 Initializing NVMe Controllers 00:33:14.815 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:14.815 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:14.815 Initialization complete. Launching workers. 00:33:14.815 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31465, failed: 0 00:33:14.815 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2749, failed to submit 28716 00:33:14.815 success 535, unsuccess 2214, failed 0 00:33:14.815 05:28:52 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:33:14.815 05:28:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:14.815 05:28:52 -- common/autotest_common.sh@10 -- # set +x 00:33:15.072 05:28:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:15.072 05:28:52 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:33:15.072 05:28:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:15.072 05:28:52 -- common/autotest_common.sh@10 -- # set +x 00:33:16.446 05:28:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:16.446 05:28:53 -- target/abort_qd_sizes.sh@61 -- # killprocess 2042319 00:33:16.446 
05:28:53 -- common/autotest_common.sh@936 -- # '[' -z 2042319 ']' 00:33:16.446 05:28:53 -- common/autotest_common.sh@940 -- # kill -0 2042319 00:33:16.446 05:28:53 -- common/autotest_common.sh@941 -- # uname 00:33:16.446 05:28:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:16.446 05:28:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2042319 00:33:16.446 05:28:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:33:16.446 05:28:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:33:16.446 05:28:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2042319' 00:33:16.446 killing process with pid 2042319 00:33:16.446 05:28:53 -- common/autotest_common.sh@955 -- # kill 2042319 00:33:16.446 05:28:53 -- common/autotest_common.sh@960 -- # wait 2042319 00:33:16.446 00:33:16.446 real 0m14.153s 00:33:16.446 user 0m53.900s 00:33:16.446 sys 0m2.699s 00:33:16.446 05:28:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:16.446 05:28:53 -- common/autotest_common.sh@10 -- # set +x 00:33:16.446 ************************************ 00:33:16.447 END TEST spdk_target_abort 00:33:16.447 ************************************ 00:33:16.706 05:28:53 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:33:16.706 05:28:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:16.706 05:28:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:16.706 05:28:53 -- common/autotest_common.sh@10 -- # set +x 00:33:16.706 ************************************ 00:33:16.706 START TEST kernel_target_abort 00:33:16.706 ************************************ 00:33:16.706 05:28:53 -- common/autotest_common.sh@1111 -- # kernel_target 00:33:16.706 05:28:53 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:33:16.706 05:28:53 -- nvmf/common.sh@717 -- # local ip 00:33:16.706 05:28:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:33:16.706 05:28:53 -- nvmf/common.sh@718 
-- # local -A ip_candidates 00:33:16.706 05:28:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:16.706 05:28:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:16.706 05:28:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:33:16.706 05:28:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:16.706 05:28:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:33:16.706 05:28:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:33:16.706 05:28:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:33:16.706 05:28:53 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:16.706 05:28:53 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:16.706 05:28:53 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:33:16.707 05:28:53 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:16.707 05:28:53 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:16.707 05:28:53 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:16.707 05:28:53 -- nvmf/common.sh@628 -- # local block nvme 00:33:16.707 05:28:53 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:33:16.707 05:28:53 -- nvmf/common.sh@631 -- # modprobe nvmet 00:33:16.707 05:28:53 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:16.707 05:28:53 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:17.645 Waiting for block devices as requested 00:33:17.645 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:33:17.905 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:17.905 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:18.164 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:18.164 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:18.164 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:18.164 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:18.164 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:18.424 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:18.424 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:18.424 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:18.682 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:18.682 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:18.682 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:18.682 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:18.939 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:18.939 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:18.939 05:28:56 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:33:18.939 05:28:56 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:18.939 05:28:56 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:33:18.939 05:28:56 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:33:18.939 05:28:56 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:18.939 05:28:56 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:33:18.939 05:28:56 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:33:18.939 05:28:56 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:33:18.939 
05:28:56 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:18.939 No valid GPT data, bailing 00:33:18.939 05:28:56 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:18.939 05:28:56 -- scripts/common.sh@391 -- # pt= 00:33:18.939 05:28:56 -- scripts/common.sh@392 -- # return 1 00:33:18.939 05:28:56 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:33:18.939 05:28:56 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:33:18.939 05:28:56 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:18.939 05:28:56 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:19.197 05:28:56 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:19.197 05:28:56 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:19.197 05:28:56 -- nvmf/common.sh@656 -- # echo 1 00:33:19.197 05:28:56 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:33:19.197 05:28:56 -- nvmf/common.sh@658 -- # echo 1 00:33:19.197 05:28:56 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:33:19.197 05:28:56 -- nvmf/common.sh@661 -- # echo tcp 00:33:19.197 05:28:56 -- nvmf/common.sh@662 -- # echo 4420 00:33:19.197 05:28:56 -- nvmf/common.sh@663 -- # echo ipv4 00:33:19.197 05:28:56 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:19.197 05:28:56 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:33:19.197 00:33:19.197 Discovery Log Number of Records 2, Generation counter 2 00:33:19.197 =====Discovery Log Entry 0====== 00:33:19.197 trtype: tcp 00:33:19.197 adrfam: ipv4 00:33:19.197 subtype: current discovery subsystem 00:33:19.197 treq: not specified, sq flow 
control disable supported 00:33:19.197 portid: 1 00:33:19.197 trsvcid: 4420 00:33:19.197 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:19.197 traddr: 10.0.0.1 00:33:19.197 eflags: none 00:33:19.197 sectype: none 00:33:19.197 =====Discovery Log Entry 1====== 00:33:19.197 trtype: tcp 00:33:19.197 adrfam: ipv4 00:33:19.197 subtype: nvme subsystem 00:33:19.197 treq: not specified, sq flow control disable supported 00:33:19.197 portid: 1 00:33:19.197 trsvcid: 4420 00:33:19.197 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:19.197 traddr: 10.0.0.1 00:33:19.197 eflags: none 00:33:19.197 sectype: none 00:33:19.197 05:28:56 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:33:19.197 05:28:56 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:19.197 05:28:56 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:19.197 05:28:56 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:33:19.197 05:28:56 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:19.197 05:28:56 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:19.197 05:28:56 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:19.197 05:28:56 -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:19.197 05:28:56 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:19.197 05:28:56 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:19.197 05:28:56 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:19.197 05:28:56 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:19.197 05:28:56 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:19.197 05:28:56 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:19.197 05:28:56 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:33:19.197 05:28:56 -- target/abort_qd_sizes.sh@28 -- # for r in trtype 
adrfam traddr trsvcid subnqn 00:33:19.197 05:28:56 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:33:19.197 05:28:56 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:19.197 05:28:56 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:19.197 05:28:56 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:19.197 05:28:56 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:19.197 EAL: No free 2048 kB hugepages reported on node 1 00:33:22.489 Initializing NVMe Controllers 00:33:22.489 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:22.489 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:22.489 Initialization complete. Launching workers. 
00:33:22.489 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 34583, failed: 0 00:33:22.489 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34583, failed to submit 0 00:33:22.489 success 0, unsuccess 34583, failed 0 00:33:22.489 05:28:59 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:22.489 05:28:59 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:22.489 EAL: No free 2048 kB hugepages reported on node 1 00:33:25.779 Initializing NVMe Controllers 00:33:25.779 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:25.779 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:25.779 Initialization complete. Launching workers. 00:33:25.779 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 64577, failed: 0 00:33:25.779 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16286, failed to submit 48291 00:33:25.779 success 0, unsuccess 16286, failed 0 00:33:25.779 05:29:02 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:25.779 05:29:02 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:25.779 EAL: No free 2048 kB hugepages reported on node 1 00:33:29.065 Initializing NVMe Controllers 00:33:29.065 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:29.065 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:29.065 Initialization complete. Launching workers. 
00:33:29.065 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 62798, failed: 0 00:33:29.065 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15690, failed to submit 47108 00:33:29.065 success 0, unsuccess 15690, failed 0 00:33:29.065 05:29:05 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:33:29.065 05:29:05 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:29.065 05:29:05 -- nvmf/common.sh@675 -- # echo 0 00:33:29.065 05:29:05 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:29.065 05:29:05 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:29.065 05:29:05 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:29.065 05:29:05 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:29.065 05:29:05 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:33:29.065 05:29:05 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:33:29.065 05:29:05 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:29.631 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:29.631 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:29.631 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:29.631 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:29.631 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:29.631 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:29.631 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:29.631 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:29.631 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:29.631 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:29.631 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:29.631 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 
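The `clean_kernel_target` steps traced above tear down the kernel nvmet configfs tree in a strict order: disable the namespace (`echo 0`; the redirect target is not visible in the xtrace, writing to the namespace's `enable` attribute is an assumption), unlink the port-to-subsystem symlink, remove the namespace directory, then the port, then the subsystem, and finally unload the modules. A sketch that just returns that ordered plan rather than touching configfs, so the ordering can be checked without a live nvmet target:

```python
def kernel_target_teardown_plan(root="/sys/kernel/config/nvmet",
                                nqn="nqn.2016-06.io.spdk:testnqn",
                                port="1", nsid="1"):
    """Ordered teardown steps mirroring clean_kernel_target in the log above.

    Order matters: the kernel refuses to rmdir a subsystem that is still
    linked under a port, or one whose namespaces still exist.
    """
    return [
        # Assumption: the traced `echo 0` writes to the namespace enable attribute.
        ("write",       f"{root}/subsystems/{nqn}/namespaces/{nsid}/enable", "0"),
        ("unlink",      f"{root}/ports/{port}/subsystems/{nqn}"),
        ("rmdir",       f"{root}/subsystems/{nqn}/namespaces/{nsid}"),
        ("rmdir",       f"{root}/ports/{port}"),
        ("rmdir",       f"{root}/subsystems/{nqn}"),
        ("modprobe -r", "nvmet_tcp nvmet"),
    ]
```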
00:33:29.631 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:29.631 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:29.889 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:29.889 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:30.828 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:33:30.828 00:33:30.828 real 0m14.178s 00:33:30.828 user 0m5.097s 00:33:30.828 sys 0m3.310s 00:33:30.828 05:29:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:30.828 05:29:08 -- common/autotest_common.sh@10 -- # set +x 00:33:30.828 ************************************ 00:33:30.828 END TEST kernel_target_abort 00:33:30.828 ************************************ 00:33:30.828 05:29:08 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:30.828 05:29:08 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:33:30.828 05:29:08 -- nvmf/common.sh@477 -- # nvmfcleanup 00:33:30.828 05:29:08 -- nvmf/common.sh@117 -- # sync 00:33:30.828 05:29:08 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:30.828 05:29:08 -- nvmf/common.sh@120 -- # set +e 00:33:30.828 05:29:08 -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:30.828 05:29:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:30.828 rmmod nvme_tcp 00:33:30.828 rmmod nvme_fabrics 00:33:30.828 rmmod nvme_keyring 00:33:30.828 05:29:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:30.828 05:29:08 -- nvmf/common.sh@124 -- # set -e 00:33:30.828 05:29:08 -- nvmf/common.sh@125 -- # return 0 00:33:30.828 05:29:08 -- nvmf/common.sh@478 -- # '[' -n 2042319 ']' 00:33:30.828 05:29:08 -- nvmf/common.sh@479 -- # killprocess 2042319 00:33:30.828 05:29:08 -- common/autotest_common.sh@936 -- # '[' -z 2042319 ']' 00:33:30.828 05:29:08 -- common/autotest_common.sh@940 -- # kill -0 2042319 00:33:30.828 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2042319) - No such process 00:33:30.828 05:29:08 -- common/autotest_common.sh@963 -- # echo 'Process with 
pid 2042319 is not found' 00:33:30.828 Process with pid 2042319 is not found 00:33:30.828 05:29:08 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:33:30.828 05:29:08 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:32.204 Waiting for block devices as requested 00:33:32.204 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:33:32.204 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:32.204 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:32.464 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:32.464 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:32.464 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:32.464 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:32.724 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:32.724 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:32.724 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:32.724 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:32.983 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:32.983 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:32.983 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:32.983 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:33.242 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:33.242 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:33.242 05:29:10 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:33:33.242 05:29:10 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:33:33.242 05:29:10 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:33.242 05:29:10 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:33.242 05:29:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:33.242 05:29:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:33.242 05:29:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:35.779 05:29:12 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:35.779 00:33:35.779 real 0m37.847s 00:33:35.779 
user 1m1.122s 00:33:35.779 sys 0m9.408s 00:33:35.779 05:29:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:35.779 05:29:12 -- common/autotest_common.sh@10 -- # set +x 00:33:35.779 ************************************ 00:33:35.779 END TEST nvmf_abort_qd_sizes 00:33:35.779 ************************************ 00:33:35.779 05:29:12 -- spdk/autotest.sh@293 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:35.779 05:29:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:35.779 05:29:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:35.779 05:29:12 -- common/autotest_common.sh@10 -- # set +x 00:33:35.779 ************************************ 00:33:35.779 START TEST keyring_file 00:33:35.779 ************************************ 00:33:35.779 05:29:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:35.779 * Looking for test storage... 00:33:35.779 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:35.779 05:29:12 -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:35.780 05:29:12 -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:35.780 05:29:12 -- nvmf/common.sh@7 -- # uname -s 00:33:35.780 05:29:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:35.780 05:29:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:35.780 05:29:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:35.780 05:29:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:35.780 05:29:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:35.780 05:29:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:35.780 05:29:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:35.780 05:29:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:35.780 05:29:12 -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:35.780 05:29:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:35.780 05:29:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:35.780 05:29:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:35.780 05:29:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:35.780 05:29:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:35.780 05:29:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:35.780 05:29:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:35.780 05:29:12 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:35.780 05:29:12 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:35.780 05:29:12 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:35.780 05:29:12 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:35.780 05:29:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.780 05:29:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.780 05:29:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.780 05:29:12 -- paths/export.sh@5 -- # export PATH 00:33:35.780 05:29:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.780 05:29:12 -- nvmf/common.sh@47 -- # : 0 00:33:35.780 05:29:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:35.780 05:29:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:35.780 05:29:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:35.780 05:29:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:35.780 05:29:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:35.780 05:29:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:35.780 05:29:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:35.780 05:29:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:35.780 05:29:12 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:35.780 05:29:12 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:35.780 05:29:12 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:35.780 05:29:12 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:33:35.780 05:29:12 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:33:35.780 05:29:12 -- 
keyring/file.sh@24 -- # trap cleanup EXIT 00:33:35.780 05:29:12 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:35.780 05:29:12 -- keyring/common.sh@15 -- # local name key digest path 00:33:35.780 05:29:12 -- keyring/common.sh@17 -- # name=key0 00:33:35.780 05:29:12 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:35.780 05:29:12 -- keyring/common.sh@17 -- # digest=0 00:33:35.780 05:29:12 -- keyring/common.sh@18 -- # mktemp 00:33:35.780 05:29:12 -- keyring/common.sh@18 -- # path=/tmp/tmp.DaVPW4haHA 00:33:35.780 05:29:12 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:35.780 05:29:12 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:35.780 05:29:12 -- nvmf/common.sh@691 -- # local prefix key digest 00:33:35.780 05:29:12 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:33:35.780 05:29:12 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:33:35.780 05:29:12 -- nvmf/common.sh@693 -- # digest=0 00:33:35.780 05:29:12 -- nvmf/common.sh@694 -- # python - 00:33:35.780 05:29:12 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.DaVPW4haHA 00:33:35.780 05:29:12 -- keyring/common.sh@23 -- # echo /tmp/tmp.DaVPW4haHA 00:33:35.780 05:29:12 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.DaVPW4haHA 00:33:35.780 05:29:12 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:33:35.780 05:29:12 -- keyring/common.sh@15 -- # local name key digest path 00:33:35.780 05:29:12 -- keyring/common.sh@17 -- # name=key1 00:33:35.780 05:29:12 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:35.780 05:29:12 -- keyring/common.sh@17 -- # digest=0 00:33:35.780 05:29:12 -- keyring/common.sh@18 -- # mktemp 00:33:35.780 05:29:12 -- keyring/common.sh@18 -- # path=/tmp/tmp.8Ate7L6put 00:33:35.780 05:29:12 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:35.780 
05:29:12 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:35.780 05:29:12 -- nvmf/common.sh@691 -- # local prefix key digest 00:33:35.780 05:29:12 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:33:35.780 05:29:12 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:33:35.780 05:29:12 -- nvmf/common.sh@693 -- # digest=0 00:33:35.780 05:29:12 -- nvmf/common.sh@694 -- # python - 00:33:35.780 05:29:12 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.8Ate7L6put 00:33:35.780 05:29:12 -- keyring/common.sh@23 -- # echo /tmp/tmp.8Ate7L6put 00:33:35.780 05:29:12 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.8Ate7L6put 00:33:35.780 05:29:12 -- keyring/file.sh@30 -- # tgtpid=2048103 00:33:35.780 05:29:12 -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:35.780 05:29:12 -- keyring/file.sh@32 -- # waitforlisten 2048103 00:33:35.780 05:29:12 -- common/autotest_common.sh@817 -- # '[' -z 2048103 ']' 00:33:35.780 05:29:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:35.780 05:29:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:35.780 05:29:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:35.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:35.780 05:29:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:35.780 05:29:12 -- common/autotest_common.sh@10 -- # set +x 00:33:35.780 [2024-04-24 05:29:12.836170] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 
00:33:35.780 [2024-04-24 05:29:12.836247] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2048103 ] 00:33:35.780 EAL: No free 2048 kB hugepages reported on node 1 00:33:35.780 [2024-04-24 05:29:12.868425] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:35.780 [2024-04-24 05:29:12.894007] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:35.780 [2024-04-24 05:29:12.980172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:36.039 05:29:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:36.039 05:29:13 -- common/autotest_common.sh@850 -- # return 0 00:33:36.039 05:29:13 -- keyring/file.sh@33 -- # rpc_cmd 00:33:36.039 05:29:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:36.039 05:29:13 -- common/autotest_common.sh@10 -- # set +x 00:33:36.039 [2024-04-24 05:29:13.226317] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:36.039 null0 00:33:36.039 [2024-04-24 05:29:13.258372] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:36.039 [2024-04-24 05:29:13.258851] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:36.039 [2024-04-24 05:29:13.266391] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:33:36.039 05:29:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:36.039 05:29:13 -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:36.039 05:29:13 -- common/autotest_common.sh@638 -- # local es=0 00:33:36.039 05:29:13 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd 
nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:36.039 05:29:13 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:33:36.039 05:29:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:36.039 05:29:13 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:33:36.039 05:29:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:36.039 05:29:13 -- common/autotest_common.sh@641 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:36.039 05:29:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:36.039 05:29:13 -- common/autotest_common.sh@10 -- # set +x 00:33:36.039 [2024-04-24 05:29:13.274401] nvmf_rpc.c: 769:nvmf_rpc_listen_paused: *ERROR*: A listener already exists with different secure channel option.request: 00:33:36.039 { 00:33:36.039 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:33:36.039 "secure_channel": false, 00:33:36.039 "listen_address": { 00:33:36.039 "trtype": "tcp", 00:33:36.039 "traddr": "127.0.0.1", 00:33:36.039 "trsvcid": "4420" 00:33:36.039 }, 00:33:36.039 "method": "nvmf_subsystem_add_listener", 00:33:36.039 "req_id": 1 00:33:36.039 } 00:33:36.039 Got JSON-RPC error response 00:33:36.039 response: 00:33:36.039 { 00:33:36.039 "code": -32602, 00:33:36.039 "message": "Invalid parameters" 00:33:36.039 } 00:33:36.039 05:29:13 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:33:36.039 05:29:13 -- common/autotest_common.sh@641 -- # es=1 00:33:36.039 05:29:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:33:36.039 05:29:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:33:36.039 05:29:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:33:36.039 05:29:13 -- keyring/file.sh@46 -- # bperfpid=2048107 00:33:36.039 05:29:13 -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 
00:33:36.039 05:29:13 -- keyring/file.sh@48 -- # waitforlisten 2048107 /var/tmp/bperf.sock 00:33:36.039 05:29:13 -- common/autotest_common.sh@817 -- # '[' -z 2048107 ']' 00:33:36.039 05:29:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:36.039 05:29:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:36.039 05:29:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:36.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:36.039 05:29:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:36.039 05:29:13 -- common/autotest_common.sh@10 -- # set +x 00:33:36.298 [2024-04-24 05:29:13.320889] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:33:36.298 [2024-04-24 05:29:13.320966] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2048107 ] 00:33:36.298 EAL: No free 2048 kB hugepages reported on node 1 00:33:36.298 [2024-04-24 05:29:13.351258] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
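Earlier in this test, `prep_key` turned each raw hex key into a TLS PSK file via `format_interchange_psk` (the inline `python -` step in the xtrace). A sketch of what that formatting plausibly produces, based on the NVMe/TCP PSK interchange layout: base64 of the key bytes followed by a little-endian CRC32, wrapped as `NVMeTLSkey-1:<hash id>:<base64>:`. The exact field encoding here is an assumption; SPDK's `nvmf/common.sh` is the authoritative version:

```python
import base64
import zlib

def format_interchange_psk(hex_key, hmac_id=0):
    # Assumed layout (modeled on the inline `python -` step in the log):
    # base64( key_bytes || CRC32(key_bytes) as 4 little-endian bytes ),
    # wrapped as "NVMeTLSkey-1:<two-hex-digit hash id>:<base64>:".
    key = bytes.fromhex(hex_key)
    crc = zlib.crc32(key).to_bytes(4, "little")
    return "NVMeTLSkey-1:{:02x}:{}:".format(hmac_id, base64.b64encode(key + crc).decode())
```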
00:33:36.298 [2024-04-24 05:29:13.376493] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:36.298 [2024-04-24 05:29:13.464488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:36.556 05:29:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:36.556 05:29:13 -- common/autotest_common.sh@850 -- # return 0 00:33:36.556 05:29:13 -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DaVPW4haHA 00:33:36.556 05:29:13 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DaVPW4haHA 00:33:36.556 05:29:13 -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.8Ate7L6put 00:33:36.556 05:29:13 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.8Ate7L6put 00:33:36.814 05:29:14 -- keyring/file.sh@51 -- # get_key key0 00:33:36.814 05:29:14 -- keyring/file.sh@51 -- # jq -r .path 00:33:36.814 05:29:14 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:36.814 05:29:14 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:36.814 05:29:14 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:37.071 05:29:14 -- keyring/file.sh@51 -- # [[ /tmp/tmp.DaVPW4haHA == \/\t\m\p\/\t\m\p\.\D\a\V\P\W\4\h\a\H\A ]] 00:33:37.071 05:29:14 -- keyring/file.sh@52 -- # get_key key1 00:33:37.071 05:29:14 -- keyring/file.sh@52 -- # jq -r .path 00:33:37.071 05:29:14 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:37.071 05:29:14 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:37.071 05:29:14 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:37.328 05:29:14 -- keyring/file.sh@52 -- # [[ 
/tmp/tmp.8Ate7L6put == \/\t\m\p\/\t\m\p\.\8\A\t\e\7\L\6\p\u\t ]] 00:33:37.328 05:29:14 -- keyring/file.sh@53 -- # get_refcnt key0 00:33:37.328 05:29:14 -- keyring/common.sh@12 -- # get_key key0 00:33:37.328 05:29:14 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:37.328 05:29:14 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:37.328 05:29:14 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:37.328 05:29:14 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:37.585 05:29:14 -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:33:37.585 05:29:14 -- keyring/file.sh@54 -- # get_refcnt key1 00:33:37.585 05:29:14 -- keyring/common.sh@12 -- # get_key key1 00:33:37.585 05:29:14 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:37.585 05:29:14 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:37.585 05:29:14 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:37.585 05:29:14 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:37.843 05:29:15 -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:33:37.843 05:29:15 -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:37.843 05:29:15 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:38.101 [2024-04-24 05:29:15.257568] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:38.101 nvme0n1 00:33:38.101 05:29:15 -- keyring/file.sh@59 -- # get_refcnt key0 00:33:38.101 05:29:15 -- keyring/common.sh@12 -- # get_key key0 
00:33:38.101 05:29:15 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:38.101 05:29:15 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:38.101 05:29:15 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:38.101 05:29:15 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:38.359 05:29:15 -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:33:38.359 05:29:15 -- keyring/file.sh@60 -- # get_refcnt key1 00:33:38.359 05:29:15 -- keyring/common.sh@12 -- # get_key key1 00:33:38.359 05:29:15 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:38.359 05:29:15 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:38.359 05:29:15 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:38.359 05:29:15 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:38.618 05:29:15 -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:33:38.618 05:29:15 -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:38.877 Running I/O for 1 seconds... 
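The bdevperf summary that follows reports both IOPS and MiB/s for the same 1-second run; the MiB/s column is simply IOPS times the IO size (`-o 4096` from the command line) scaled to MiB. A one-line sanity check of that relationship:

```python
def iops_to_mibps(iops, io_size_bytes=4096):
    # MiB/s = IOPS * IO size in bytes / 2^20
    return iops * io_size_bytes / (1 << 20)
```

Plugging in the run's 4692.80 IOPS at 4096-byte IOs reproduces the reported 18.33 MiB/s.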
00:33:39.816 00:33:39.816 Latency(us) 00:33:39.816 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:39.816 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:33:39.816 nvme0n1 : 1.02 4692.80 18.33 0.00 0.00 26990.84 4223.43 29903.83 00:33:39.816 =================================================================================================================== 00:33:39.816 Total : 4692.80 18.33 0.00 0.00 26990.84 4223.43 29903.83 00:33:39.816 0 00:33:39.816 05:29:16 -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:39.816 05:29:16 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:40.074 05:29:17 -- keyring/file.sh@65 -- # get_refcnt key0 00:33:40.074 05:29:17 -- keyring/common.sh@12 -- # get_key key0 00:33:40.074 05:29:17 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:40.074 05:29:17 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:40.074 05:29:17 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:40.074 05:29:17 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:40.332 05:29:17 -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:33:40.332 05:29:17 -- keyring/file.sh@66 -- # get_refcnt key1 00:33:40.332 05:29:17 -- keyring/common.sh@12 -- # get_key key1 00:33:40.332 05:29:17 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:40.332 05:29:17 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:40.332 05:29:17 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:40.332 05:29:17 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:40.590 05:29:17 -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:33:40.590 05:29:17 -- keyring/file.sh@69 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:40.590 05:29:17 -- common/autotest_common.sh@638 -- # local es=0 00:33:40.590 05:29:17 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:40.590 05:29:17 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:33:40.590 05:29:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:40.590 05:29:17 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:33:40.590 05:29:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:40.590 05:29:17 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:40.590 05:29:17 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:40.848 [2024-04-24 05:29:17.934589] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:40.848 [2024-04-24 05:29:17.935099] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b08d0 (107): Transport endpoint is not connected 00:33:40.848 [2024-04-24 05:29:17.936088] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b08d0 (9): Bad file descriptor 00:33:40.848 [2024-04-24 05:29:17.937086] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 
00:33:40.848 [2024-04-24 05:29:17.937110] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:40.848 [2024-04-24 05:29:17.937143] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:40.848 request: 00:33:40.848 { 00:33:40.848 "name": "nvme0", 00:33:40.848 "trtype": "tcp", 00:33:40.848 "traddr": "127.0.0.1", 00:33:40.848 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:40.848 "adrfam": "ipv4", 00:33:40.848 "trsvcid": "4420", 00:33:40.848 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:40.848 "psk": "key1", 00:33:40.848 "method": "bdev_nvme_attach_controller", 00:33:40.848 "req_id": 1 00:33:40.848 } 00:33:40.848 Got JSON-RPC error response 00:33:40.848 response: 00:33:40.848 { 00:33:40.848 "code": -32602, 00:33:40.848 "message": "Invalid parameters" 00:33:40.848 } 00:33:40.848 05:29:17 -- common/autotest_common.sh@641 -- # es=1 00:33:40.848 05:29:17 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:33:40.848 05:29:17 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:33:40.848 05:29:17 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:33:40.848 05:29:17 -- keyring/file.sh@71 -- # get_refcnt key0 00:33:40.848 05:29:17 -- keyring/common.sh@12 -- # get_key key0 00:33:40.848 05:29:17 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:40.848 05:29:17 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:40.848 05:29:17 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:40.848 05:29:17 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:41.106 05:29:18 -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:33:41.106 05:29:18 -- keyring/file.sh@72 -- # get_refcnt key1 00:33:41.106 05:29:18 -- keyring/common.sh@12 -- # get_key key1 00:33:41.106 05:29:18 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:41.106 05:29:18 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:41.106 
05:29:18 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:41.106 05:29:18 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:41.364 05:29:18 -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:33:41.364 05:29:18 -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:33:41.364 05:29:18 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:41.622 05:29:18 -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:33:41.622 05:29:18 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:33:41.880 05:29:18 -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:33:41.880 05:29:18 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:41.880 05:29:18 -- keyring/file.sh@77 -- # jq length 00:33:42.137 05:29:19 -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:33:42.137 05:29:19 -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.DaVPW4haHA 00:33:42.137 05:29:19 -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.DaVPW4haHA 00:33:42.137 05:29:19 -- common/autotest_common.sh@638 -- # local es=0 00:33:42.137 05:29:19 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.DaVPW4haHA 00:33:42.137 05:29:19 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:33:42.137 05:29:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:42.137 05:29:19 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:33:42.137 05:29:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:42.137 05:29:19 -- common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DaVPW4haHA 
00:33:42.137 05:29:19 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DaVPW4haHA 00:33:42.137 [2024-04-24 05:29:19.398874] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.DaVPW4haHA': 0100660 00:33:42.137 [2024-04-24 05:29:19.398940] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:33:42.137 request: 00:33:42.137 { 00:33:42.137 "name": "key0", 00:33:42.137 "path": "/tmp/tmp.DaVPW4haHA", 00:33:42.137 "method": "keyring_file_add_key", 00:33:42.137 "req_id": 1 00:33:42.137 } 00:33:42.137 Got JSON-RPC error response 00:33:42.137 response: 00:33:42.137 { 00:33:42.137 "code": -1, 00:33:42.137 "message": "Operation not permitted" 00:33:42.137 } 00:33:42.395 05:29:19 -- common/autotest_common.sh@641 -- # es=1 00:33:42.395 05:29:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:33:42.395 05:29:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:33:42.395 05:29:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:33:42.395 05:29:19 -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.DaVPW4haHA 00:33:42.395 05:29:19 -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DaVPW4haHA 00:33:42.395 05:29:19 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DaVPW4haHA 00:33:42.395 05:29:19 -- keyring/file.sh@86 -- # rm -f /tmp/tmp.DaVPW4haHA 00:33:42.395 05:29:19 -- keyring/file.sh@88 -- # get_refcnt key0 00:33:42.395 05:29:19 -- keyring/common.sh@12 -- # get_key key0 00:33:42.395 05:29:19 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:42.395 05:29:19 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:42.395 05:29:19 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:33:42.395 05:29:19 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:42.652 05:29:19 -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:33:42.652 05:29:19 -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:42.652 05:29:19 -- common/autotest_common.sh@638 -- # local es=0 00:33:42.652 05:29:19 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:42.652 05:29:19 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:33:42.652 05:29:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:42.652 05:29:19 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:33:42.652 05:29:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:42.652 05:29:19 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:42.652 05:29:19 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:42.921 [2024-04-24 05:29:20.132866] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.DaVPW4haHA': No such file or directory 00:33:42.922 [2024-04-24 05:29:20.132916] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:33:42.922 [2024-04-24 05:29:20.132971] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:33:42.922 [2024-04-24 05:29:20.132986] nvme.c: 
821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:42.922 [2024-04-24 05:29:20.132999] bdev_nvme.c:6204:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:33:42.922 request: 00:33:42.922 { 00:33:42.922 "name": "nvme0", 00:33:42.922 "trtype": "tcp", 00:33:42.922 "traddr": "127.0.0.1", 00:33:42.922 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:42.922 "adrfam": "ipv4", 00:33:42.922 "trsvcid": "4420", 00:33:42.922 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:42.922 "psk": "key0", 00:33:42.922 "method": "bdev_nvme_attach_controller", 00:33:42.922 "req_id": 1 00:33:42.922 } 00:33:42.922 Got JSON-RPC error response 00:33:42.922 response: 00:33:42.922 { 00:33:42.922 "code": -19, 00:33:42.922 "message": "No such device" 00:33:42.922 } 00:33:42.922 05:29:20 -- common/autotest_common.sh@641 -- # es=1 00:33:42.922 05:29:20 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:33:42.922 05:29:20 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:33:42.922 05:29:20 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:33:42.922 05:29:20 -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:33:42.922 05:29:20 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:43.181 05:29:20 -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:43.181 05:29:20 -- keyring/common.sh@15 -- # local name key digest path 00:33:43.181 05:29:20 -- keyring/common.sh@17 -- # name=key0 00:33:43.181 05:29:20 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:43.181 05:29:20 -- keyring/common.sh@17 -- # digest=0 00:33:43.181 05:29:20 -- keyring/common.sh@18 -- # mktemp 00:33:43.181 05:29:20 -- keyring/common.sh@18 -- # path=/tmp/tmp.YefoY9KZ1D 00:33:43.181 05:29:20 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:43.181 05:29:20 -- 
nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:43.181 05:29:20 -- nvmf/common.sh@691 -- # local prefix key digest 00:33:43.181 05:29:20 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:33:43.181 05:29:20 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:33:43.181 05:29:20 -- nvmf/common.sh@693 -- # digest=0 00:33:43.181 05:29:20 -- nvmf/common.sh@694 -- # python - 00:33:43.181 05:29:20 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.YefoY9KZ1D 00:33:43.181 05:29:20 -- keyring/common.sh@23 -- # echo /tmp/tmp.YefoY9KZ1D 00:33:43.181 05:29:20 -- keyring/file.sh@95 -- # key0path=/tmp/tmp.YefoY9KZ1D 00:33:43.181 05:29:20 -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YefoY9KZ1D 00:33:43.181 05:29:20 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YefoY9KZ1D 00:33:43.439 05:29:20 -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:43.439 05:29:20 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:44.006 nvme0n1 00:33:44.006 05:29:21 -- keyring/file.sh@99 -- # get_refcnt key0 00:33:44.006 05:29:21 -- keyring/common.sh@12 -- # get_key key0 00:33:44.006 05:29:21 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:44.006 05:29:21 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:44.006 05:29:21 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:44.006 05:29:21 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:44.006 05:29:21 -- 
keyring/file.sh@99 -- # (( 2 == 2 )) 00:33:44.006 05:29:21 -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:33:44.006 05:29:21 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:44.265 05:29:21 -- keyring/file.sh@101 -- # get_key key0 00:33:44.265 05:29:21 -- keyring/file.sh@101 -- # jq -r .removed 00:33:44.265 05:29:21 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:44.265 05:29:21 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:44.265 05:29:21 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:44.523 05:29:21 -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:33:44.523 05:29:21 -- keyring/file.sh@102 -- # get_refcnt key0 00:33:44.523 05:29:21 -- keyring/common.sh@12 -- # get_key key0 00:33:44.523 05:29:21 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:44.523 05:29:21 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:44.523 05:29:21 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:44.523 05:29:21 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:44.781 05:29:21 -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:33:44.781 05:29:21 -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:44.781 05:29:21 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:45.039 05:29:22 -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:33:45.039 05:29:22 -- keyring/file.sh@104 -- # jq length 00:33:45.039 05:29:22 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:45.298 05:29:22 -- keyring/file.sh@104 -- # 
(( 0 == 0 )) 00:33:45.298 05:29:22 -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YefoY9KZ1D 00:33:45.298 05:29:22 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YefoY9KZ1D 00:33:45.556 05:29:22 -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.8Ate7L6put 00:33:45.556 05:29:22 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.8Ate7L6put 00:33:45.814 05:29:22 -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:45.814 05:29:22 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:46.071 nvme0n1 00:33:46.071 05:29:23 -- keyring/file.sh@112 -- # bperf_cmd save_config 00:33:46.071 05:29:23 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:33:46.330 05:29:23 -- keyring/file.sh@112 -- # config='{ 00:33:46.330 "subsystems": [ 00:33:46.330 { 00:33:46.330 "subsystem": "keyring", 00:33:46.330 "config": [ 00:33:46.330 { 00:33:46.330 "method": "keyring_file_add_key", 00:33:46.330 "params": { 00:33:46.330 "name": "key0", 00:33:46.330 "path": "/tmp/tmp.YefoY9KZ1D" 00:33:46.330 } 00:33:46.330 }, 00:33:46.330 { 00:33:46.330 "method": "keyring_file_add_key", 00:33:46.330 "params": { 00:33:46.330 "name": "key1", 00:33:46.330 "path": "/tmp/tmp.8Ate7L6put" 00:33:46.330 } 00:33:46.330 } 00:33:46.330 ] 00:33:46.330 }, 00:33:46.330 { 00:33:46.330 "subsystem": "iobuf", 00:33:46.330 "config": [ 00:33:46.330 { 00:33:46.330 
"method": "iobuf_set_options", 00:33:46.330 "params": { 00:33:46.330 "small_pool_count": 8192, 00:33:46.330 "large_pool_count": 1024, 00:33:46.330 "small_bufsize": 8192, 00:33:46.330 "large_bufsize": 135168 00:33:46.330 } 00:33:46.330 } 00:33:46.330 ] 00:33:46.330 }, 00:33:46.330 { 00:33:46.330 "subsystem": "sock", 00:33:46.330 "config": [ 00:33:46.330 { 00:33:46.330 "method": "sock_impl_set_options", 00:33:46.330 "params": { 00:33:46.330 "impl_name": "posix", 00:33:46.330 "recv_buf_size": 2097152, 00:33:46.330 "send_buf_size": 2097152, 00:33:46.330 "enable_recv_pipe": true, 00:33:46.330 "enable_quickack": false, 00:33:46.330 "enable_placement_id": 0, 00:33:46.330 "enable_zerocopy_send_server": true, 00:33:46.330 "enable_zerocopy_send_client": false, 00:33:46.330 "zerocopy_threshold": 0, 00:33:46.330 "tls_version": 0, 00:33:46.330 "enable_ktls": false 00:33:46.330 } 00:33:46.330 }, 00:33:46.330 { 00:33:46.330 "method": "sock_impl_set_options", 00:33:46.330 "params": { 00:33:46.330 "impl_name": "ssl", 00:33:46.330 "recv_buf_size": 4096, 00:33:46.330 "send_buf_size": 4096, 00:33:46.330 "enable_recv_pipe": true, 00:33:46.330 "enable_quickack": false, 00:33:46.330 "enable_placement_id": 0, 00:33:46.330 "enable_zerocopy_send_server": true, 00:33:46.330 "enable_zerocopy_send_client": false, 00:33:46.330 "zerocopy_threshold": 0, 00:33:46.330 "tls_version": 0, 00:33:46.330 "enable_ktls": false 00:33:46.330 } 00:33:46.330 } 00:33:46.330 ] 00:33:46.330 }, 00:33:46.330 { 00:33:46.330 "subsystem": "vmd", 00:33:46.330 "config": [] 00:33:46.330 }, 00:33:46.330 { 00:33:46.330 "subsystem": "accel", 00:33:46.330 "config": [ 00:33:46.330 { 00:33:46.330 "method": "accel_set_options", 00:33:46.330 "params": { 00:33:46.330 "small_cache_size": 128, 00:33:46.330 "large_cache_size": 16, 00:33:46.330 "task_count": 2048, 00:33:46.330 "sequence_count": 2048, 00:33:46.330 "buf_count": 2048 00:33:46.330 } 00:33:46.330 } 00:33:46.330 ] 00:33:46.330 }, 00:33:46.330 { 00:33:46.330 "subsystem": 
"bdev", 00:33:46.330 "config": [ 00:33:46.330 { 00:33:46.330 "method": "bdev_set_options", 00:33:46.330 "params": { 00:33:46.330 "bdev_io_pool_size": 65535, 00:33:46.330 "bdev_io_cache_size": 256, 00:33:46.330 "bdev_auto_examine": true, 00:33:46.330 "iobuf_small_cache_size": 128, 00:33:46.330 "iobuf_large_cache_size": 16 00:33:46.330 } 00:33:46.330 }, 00:33:46.330 { 00:33:46.330 "method": "bdev_raid_set_options", 00:33:46.330 "params": { 00:33:46.330 "process_window_size_kb": 1024 00:33:46.330 } 00:33:46.330 }, 00:33:46.330 { 00:33:46.330 "method": "bdev_iscsi_set_options", 00:33:46.330 "params": { 00:33:46.330 "timeout_sec": 30 00:33:46.330 } 00:33:46.330 }, 00:33:46.330 { 00:33:46.330 "method": "bdev_nvme_set_options", 00:33:46.330 "params": { 00:33:46.330 "action_on_timeout": "none", 00:33:46.330 "timeout_us": 0, 00:33:46.330 "timeout_admin_us": 0, 00:33:46.330 "keep_alive_timeout_ms": 10000, 00:33:46.330 "arbitration_burst": 0, 00:33:46.330 "low_priority_weight": 0, 00:33:46.330 "medium_priority_weight": 0, 00:33:46.330 "high_priority_weight": 0, 00:33:46.330 "nvme_adminq_poll_period_us": 10000, 00:33:46.330 "nvme_ioq_poll_period_us": 0, 00:33:46.330 "io_queue_requests": 512, 00:33:46.330 "delay_cmd_submit": true, 00:33:46.330 "transport_retry_count": 4, 00:33:46.330 "bdev_retry_count": 3, 00:33:46.330 "transport_ack_timeout": 0, 00:33:46.330 "ctrlr_loss_timeout_sec": 0, 00:33:46.330 "reconnect_delay_sec": 0, 00:33:46.330 "fast_io_fail_timeout_sec": 0, 00:33:46.330 "disable_auto_failback": false, 00:33:46.330 "generate_uuids": false, 00:33:46.330 "transport_tos": 0, 00:33:46.330 "nvme_error_stat": false, 00:33:46.330 "rdma_srq_size": 0, 00:33:46.330 "io_path_stat": false, 00:33:46.330 "allow_accel_sequence": false, 00:33:46.330 "rdma_max_cq_size": 0, 00:33:46.330 "rdma_cm_event_timeout_ms": 0, 00:33:46.330 "dhchap_digests": [ 00:33:46.330 "sha256", 00:33:46.330 "sha384", 00:33:46.330 "sha512" 00:33:46.330 ], 00:33:46.330 "dhchap_dhgroups": [ 00:33:46.330 
"null", 00:33:46.330 "ffdhe2048", 00:33:46.330 "ffdhe3072", 00:33:46.330 "ffdhe4096", 00:33:46.330 "ffdhe6144", 00:33:46.330 "ffdhe8192" 00:33:46.330 ] 00:33:46.330 } 00:33:46.330 }, 00:33:46.330 { 00:33:46.330 "method": "bdev_nvme_attach_controller", 00:33:46.330 "params": { 00:33:46.330 "name": "nvme0", 00:33:46.330 "trtype": "TCP", 00:33:46.330 "adrfam": "IPv4", 00:33:46.330 "traddr": "127.0.0.1", 00:33:46.330 "trsvcid": "4420", 00:33:46.330 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:46.331 "prchk_reftag": false, 00:33:46.331 "prchk_guard": false, 00:33:46.331 "ctrlr_loss_timeout_sec": 0, 00:33:46.331 "reconnect_delay_sec": 0, 00:33:46.331 "fast_io_fail_timeout_sec": 0, 00:33:46.331 "psk": "key0", 00:33:46.331 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:46.331 "hdgst": false, 00:33:46.331 "ddgst": false 00:33:46.331 } 00:33:46.331 }, 00:33:46.331 { 00:33:46.331 "method": "bdev_nvme_set_hotplug", 00:33:46.331 "params": { 00:33:46.331 "period_us": 100000, 00:33:46.331 "enable": false 00:33:46.331 } 00:33:46.331 }, 00:33:46.331 { 00:33:46.331 "method": "bdev_wait_for_examine" 00:33:46.331 } 00:33:46.331 ] 00:33:46.331 }, 00:33:46.331 { 00:33:46.331 "subsystem": "nbd", 00:33:46.331 "config": [] 00:33:46.331 } 00:33:46.331 ] 00:33:46.331 }' 00:33:46.331 05:29:23 -- keyring/file.sh@114 -- # killprocess 2048107 00:33:46.331 05:29:23 -- common/autotest_common.sh@936 -- # '[' -z 2048107 ']' 00:33:46.331 05:29:23 -- common/autotest_common.sh@940 -- # kill -0 2048107 00:33:46.331 05:29:23 -- common/autotest_common.sh@941 -- # uname 00:33:46.331 05:29:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:46.331 05:29:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2048107 00:33:46.331 05:29:23 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:33:46.331 05:29:23 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:33:46.331 05:29:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2048107' 
00:33:46.331 killing process with pid 2048107 00:33:46.331 05:29:23 -- common/autotest_common.sh@955 -- # kill 2048107 00:33:46.331 Received shutdown signal, test time was about 1.000000 seconds 00:33:46.331 00:33:46.331 Latency(us) 00:33:46.331 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:46.331 =================================================================================================================== 00:33:46.331 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:46.331 05:29:23 -- common/autotest_common.sh@960 -- # wait 2048107 00:33:46.589 05:29:23 -- keyring/file.sh@117 -- # bperfpid=2049519 00:33:46.589 05:29:23 -- keyring/file.sh@119 -- # waitforlisten 2049519 /var/tmp/bperf.sock 00:33:46.589 05:29:23 -- common/autotest_common.sh@817 -- # '[' -z 2049519 ']' 00:33:46.589 05:29:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:46.589 05:29:23 -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:33:46.589 05:29:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:46.589 05:29:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:46.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:33:46.589 05:29:23 -- keyring/file.sh@115 -- # echo '{ 00:33:46.589 "subsystems": [ 00:33:46.589 { 00:33:46.589 "subsystem": "keyring", 00:33:46.589 "config": [ 00:33:46.589 { 00:33:46.589 "method": "keyring_file_add_key", 00:33:46.589 "params": { 00:33:46.589 "name": "key0", 00:33:46.589 "path": "/tmp/tmp.YefoY9KZ1D" 00:33:46.589 } 00:33:46.589 }, 00:33:46.589 { 00:33:46.589 "method": "keyring_file_add_key", 00:33:46.589 "params": { 00:33:46.589 "name": "key1", 00:33:46.589 "path": "/tmp/tmp.8Ate7L6put" 00:33:46.589 } 00:33:46.589 } 00:33:46.589 ] 00:33:46.589 }, 00:33:46.589 { 00:33:46.589 "subsystem": "iobuf", 00:33:46.589 "config": [ 00:33:46.590 { 00:33:46.590 "method": "iobuf_set_options", 00:33:46.590 "params": { 00:33:46.590 "small_pool_count": 8192, 00:33:46.590 "large_pool_count": 1024, 00:33:46.590 "small_bufsize": 8192, 00:33:46.590 "large_bufsize": 135168 00:33:46.590 } 00:33:46.590 } 00:33:46.590 ] 00:33:46.590 }, 00:33:46.590 { 00:33:46.590 "subsystem": "sock", 00:33:46.590 "config": [ 00:33:46.590 { 00:33:46.590 "method": "sock_impl_set_options", 00:33:46.590 "params": { 00:33:46.590 "impl_name": "posix", 00:33:46.590 "recv_buf_size": 2097152, 00:33:46.590 "send_buf_size": 2097152, 00:33:46.590 "enable_recv_pipe": true, 00:33:46.590 "enable_quickack": false, 00:33:46.590 "enable_placement_id": 0, 00:33:46.590 "enable_zerocopy_send_server": true, 00:33:46.590 "enable_zerocopy_send_client": false, 00:33:46.590 "zerocopy_threshold": 0, 00:33:46.590 "tls_version": 0, 00:33:46.590 "enable_ktls": false 00:33:46.590 } 00:33:46.590 }, 00:33:46.590 { 00:33:46.590 "method": "sock_impl_set_options", 00:33:46.590 "params": { 00:33:46.590 "impl_name": "ssl", 00:33:46.590 "recv_buf_size": 4096, 00:33:46.590 "send_buf_size": 4096, 00:33:46.590 "enable_recv_pipe": true, 00:33:46.590 "enable_quickack": false, 00:33:46.590 "enable_placement_id": 0, 00:33:46.590 "enable_zerocopy_send_server": true, 00:33:46.590 "enable_zerocopy_send_client": false, 00:33:46.590 
"zerocopy_threshold": 0, 00:33:46.590 "tls_version": 0, 00:33:46.590 "enable_ktls": false 00:33:46.590 } 00:33:46.590 } 00:33:46.590 ] 00:33:46.590 }, 00:33:46.590 { 00:33:46.590 "subsystem": "vmd", 00:33:46.590 "config": [] 00:33:46.590 }, 00:33:46.590 { 00:33:46.590 "subsystem": "accel", 00:33:46.590 "config": [ 00:33:46.590 { 00:33:46.590 "method": "accel_set_options", 00:33:46.590 "params": { 00:33:46.590 "small_cache_size": 128, 00:33:46.590 "large_cache_size": 16, 00:33:46.590 "task_count": 2048, 00:33:46.590 "sequence_count": 2048, 00:33:46.590 "buf_count": 2048 00:33:46.590 } 00:33:46.590 } 00:33:46.590 ] 00:33:46.590 }, 00:33:46.590 { 00:33:46.590 "subsystem": "bdev", 00:33:46.590 "config": [ 00:33:46.590 { 00:33:46.590 "method": "bdev_set_options", 00:33:46.590 "params": { 00:33:46.590 "bdev_io_pool_size": 65535, 00:33:46.590 "bdev_io_cache_size": 256, 00:33:46.590 "bdev_auto_examine": true, 00:33:46.590 "iobuf_small_cache_size": 128, 00:33:46.590 "iobuf_large_cache_size": 16 00:33:46.590 } 00:33:46.590 }, 00:33:46.590 { 00:33:46.590 "method": "bdev_raid_set_options", 00:33:46.590 "params": { 00:33:46.590 "process_window_size_kb": 1024 00:33:46.590 } 00:33:46.590 }, 00:33:46.590 { 00:33:46.590 "method": "bdev_iscsi_set_options", 00:33:46.590 "params": { 00:33:46.590 "timeout_sec": 30 00:33:46.590 } 00:33:46.590 }, 00:33:46.590 { 00:33:46.590 "method": "bdev_nvme_set_options", 00:33:46.590 "params": { 00:33:46.590 "action_on_timeout": "none", 00:33:46.590 "timeout_us": 0, 00:33:46.590 "timeout_admin_us": 0, 00:33:46.590 "keep_alive_timeout_ms": 10000, 00:33:46.590 "arbitration_burst": 0, 00:33:46.590 "low_priority_weight": 0, 00:33:46.590 "medium_priority_weight": 0, 00:33:46.590 "high_priority_weight": 0, 00:33:46.590 "nvme_adminq_poll_period_us": 10000, 00:33:46.590 "nvme_ioq_poll_period_us": 0, 00:33:46.590 "io_queue_requests": 512, 00:33:46.590 "delay_cmd_submit": true, 00:33:46.590 "transport_retry_count": 4, 00:33:46.590 "bdev_retry_count": 3, 
00:33:46.590 "transport_ack_timeout": 0, 00:33:46.590 "ctrlr_loss_timeout_sec": 0, 00:33:46.590 "reconnect_delay_sec": 0, 00:33:46.590 "fast_io_fail_timeout_sec": 0, 00:33:46.590 "disable_auto_failback": false, 00:33:46.590 "generate_uuids": false, 00:33:46.590 "transport_tos": 0, 00:33:46.590 "nvme_error_stat": false, 00:33:46.590 "rdma_srq_size": 0, 00:33:46.590 "io_path_stat": false, 00:33:46.590 "allow_accel_sequence": false, 00:33:46.590 "rdma_max_cq_size": 0, 00:33:46.590 "rdma_cm_event_timeout_ms": 0, 00:33:46.590 "dhchap_digests": [ 00:33:46.590 "sha256", 00:33:46.590 "sha384", 00:33:46.590 "sha512" 00:33:46.590 ], 00:33:46.590 "dhchap_dhgroups": [ 00:33:46.590 "null", 00:33:46.590 "ffdhe2048", 00:33:46.590 "ffdhe3072", 00:33:46.590 "ffdhe4096", 00:33:46.590 "ffdhe6144", 00:33:46.590 "ffdhe8192" 00:33:46.590 ] 00:33:46.590 } 00:33:46.590 }, 00:33:46.590 { 00:33:46.590 "method": "bdev_nvme_attach_controller", 00:33:46.590 "params": { 00:33:46.590 "name": "nvme0", 00:33:46.590 "trtype": "TCP", 00:33:46.590 "adrfam": "IPv4", 00:33:46.590 "traddr": "127.0.0.1", 00:33:46.590 "trsvcid": "4420", 00:33:46.590 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:46.590 "prchk_reftag": false, 00:33:46.590 "prchk_guard": false, 00:33:46.590 "ctrlr_loss_timeout_sec": 0, 00:33:46.590 "reconnect_delay_sec": 0, 00:33:46.590 "fast_io_fail_timeout_sec": 0, 00:33:46.590 "psk": "key0", 00:33:46.590 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:46.590 "hdgst": false, 00:33:46.590 "ddgst": false 00:33:46.590 } 00:33:46.590 }, 00:33:46.590 { 00:33:46.590 "method": "bdev_nvme_set_hotplug", 00:33:46.590 "params": { 00:33:46.590 "period_us": 100000, 00:33:46.590 "enable": false 00:33:46.590 } 00:33:46.590 }, 00:33:46.590 { 00:33:46.590 "method": "bdev_wait_for_examine" 00:33:46.590 } 00:33:46.590 ] 00:33:46.590 }, 00:33:46.590 { 00:33:46.590 "subsystem": "nbd", 00:33:46.590 "config": [] 00:33:46.590 } 00:33:46.590 ] 00:33:46.590 }' 00:33:46.590 05:29:23 -- common/autotest_common.sh@826 -- 
# xtrace_disable 00:33:46.590 05:29:23 -- common/autotest_common.sh@10 -- # set +x 00:33:46.590 [2024-04-24 05:29:23.859360] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 24.07.0-rc0 initialization... 00:33:46.590 [2024-04-24 05:29:23.859452] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2049519 ] 00:33:46.850 EAL: No free 2048 kB hugepages reported on node 1 00:33:46.850 [2024-04-24 05:29:23.892542] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:46.850 [2024-04-24 05:29:23.922783] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:46.850 [2024-04-24 05:29:24.006152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:47.110 [2024-04-24 05:29:24.185747] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:47.676 05:29:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:47.676 05:29:24 -- common/autotest_common.sh@850 -- # return 0 00:33:47.676 05:29:24 -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:33:47.676 05:29:24 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:47.676 05:29:24 -- keyring/file.sh@120 -- # jq length 00:33:47.934 05:29:25 -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:33:47.934 05:29:25 -- keyring/file.sh@121 -- # get_refcnt key0 00:33:47.934 05:29:25 -- keyring/common.sh@12 -- # get_key key0 00:33:47.934 05:29:25 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:47.934 05:29:25 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:47.934 05:29:25 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:33:47.934 05:29:25 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:48.192 05:29:25 -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:33:48.192 05:29:25 -- keyring/file.sh@122 -- # get_refcnt key1 00:33:48.192 05:29:25 -- keyring/common.sh@12 -- # get_key key1 00:33:48.192 05:29:25 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:48.192 05:29:25 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:48.192 05:29:25 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:48.192 05:29:25 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:48.450 05:29:25 -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:33:48.450 05:29:25 -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:33:48.450 05:29:25 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:33:48.450 05:29:25 -- keyring/file.sh@123 -- # jq -r '.[].name' 00:33:48.708 05:29:25 -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:33:48.708 05:29:25 -- keyring/file.sh@1 -- # cleanup 00:33:48.708 05:29:25 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.YefoY9KZ1D /tmp/tmp.8Ate7L6put 00:33:48.708 05:29:25 -- keyring/file.sh@20 -- # killprocess 2049519 00:33:48.708 05:29:25 -- common/autotest_common.sh@936 -- # '[' -z 2049519 ']' 00:33:48.708 05:29:25 -- common/autotest_common.sh@940 -- # kill -0 2049519 00:33:48.708 05:29:25 -- common/autotest_common.sh@941 -- # uname 00:33:48.708 05:29:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:48.708 05:29:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2049519 00:33:48.708 05:29:25 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:33:48.708 05:29:25 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:33:48.708 05:29:25 -- common/autotest_common.sh@954 -- # echo 'killing process 
with pid 2049519' 00:33:48.708 killing process with pid 2049519 00:33:48.708 05:29:25 -- common/autotest_common.sh@955 -- # kill 2049519 00:33:48.708 Received shutdown signal, test time was about 1.000000 seconds 00:33:48.708 00:33:48.708 Latency(us) 00:33:48.708 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:48.708 =================================================================================================================== 00:33:48.708 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:48.708 05:29:25 -- common/autotest_common.sh@960 -- # wait 2049519 00:33:48.968 05:29:26 -- keyring/file.sh@21 -- # killprocess 2048103 00:33:48.968 05:29:26 -- common/autotest_common.sh@936 -- # '[' -z 2048103 ']' 00:33:48.968 05:29:26 -- common/autotest_common.sh@940 -- # kill -0 2048103 00:33:48.968 05:29:26 -- common/autotest_common.sh@941 -- # uname 00:33:48.968 05:29:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:48.968 05:29:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2048103 00:33:48.968 05:29:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:33:48.968 05:29:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:33:48.968 05:29:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2048103' 00:33:48.968 killing process with pid 2048103 00:33:48.968 05:29:26 -- common/autotest_common.sh@955 -- # kill 2048103 00:33:48.968 [2024-04-24 05:29:26.035431] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:33:48.968 05:29:26 -- common/autotest_common.sh@960 -- # wait 2048103 00:33:49.226 00:33:49.226 real 0m13.797s 00:33:49.226 user 0m34.130s 00:33:49.226 sys 0m3.230s 00:33:49.226 05:29:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:49.226 05:29:26 -- common/autotest_common.sh@10 -- # set +x 00:33:49.226 ************************************ 
00:33:49.226 END TEST keyring_file 00:33:49.226 ************************************ 00:33:49.226 05:29:26 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:33:49.226 05:29:26 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:33:49.226 05:29:26 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:33:49.227 05:29:26 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:33:49.227 05:29:26 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:33:49.227 05:29:26 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:33:49.227 05:29:26 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:33:49.227 05:29:26 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:33:49.227 05:29:26 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:33:49.227 05:29:26 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:33:49.227 05:29:26 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:33:49.227 05:29:26 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:33:49.227 05:29:26 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:33:49.227 05:29:26 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:33:49.227 05:29:26 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:33:49.227 05:29:26 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:33:49.227 05:29:26 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:33:49.227 05:29:26 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:33:49.227 05:29:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:33:49.227 05:29:26 -- common/autotest_common.sh@10 -- # set +x 00:33:49.227 05:29:26 -- spdk/autotest.sh@381 -- # autotest_cleanup 00:33:49.227 05:29:26 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:33:49.227 05:29:26 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:33:49.227 05:29:26 -- common/autotest_common.sh@10 -- # set +x 00:33:51.135 INFO: APP EXITING 00:33:51.135 INFO: killing all VMs 00:33:51.135 INFO: killing vhost app 00:33:51.135 INFO: EXIT DONE 00:33:52.071 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:33:52.071 0000:00:04.7 (8086 0e27): Already using the ioatdma 
driver 00:33:52.071 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:33:52.071 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:33:52.071 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:33:52.071 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:33:52.071 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:33:52.071 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:33:52.330 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:33:52.330 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:33:52.330 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:33:52.330 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:33:52.330 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:33:52.330 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:33:52.330 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:33:52.330 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:33:52.330 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:33:53.707 Cleaning 00:33:53.708 Removing: /var/run/dpdk/spdk0/config 00:33:53.708 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:53.708 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:53.708 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:53.708 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:53.708 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:33:53.708 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:33:53.708 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:33:53.708 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:33:53.708 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:53.708 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:53.708 Removing: /var/run/dpdk/spdk1/config 00:33:53.708 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:33:53.708 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:33:53.708 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:33:53.708 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:33:53.708 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:33:53.708 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:33:53.708 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:33:53.708 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:33:53.708 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:33:53.708 Removing: /var/run/dpdk/spdk1/hugepage_info 00:33:53.708 Removing: /var/run/dpdk/spdk1/mp_socket 00:33:53.708 Removing: /var/run/dpdk/spdk2/config 00:33:53.708 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:33:53.708 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:33:53.708 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:33:53.708 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:33:53.708 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:33:53.708 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:33:53.708 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:33:53.708 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:33:53.708 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:33:53.708 Removing: /var/run/dpdk/spdk2/hugepage_info 00:33:53.708 Removing: /var/run/dpdk/spdk3/config 00:33:53.708 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:33:53.708 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:33:53.708 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:33:53.708 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:33:53.708 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:33:53.708 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:33:53.708 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:33:53.708 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:33:53.708 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:33:53.708 
Removing: /var/run/dpdk/spdk3/hugepage_info 00:33:53.708 Removing: /var/run/dpdk/spdk4/config 00:33:53.708 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:33:53.708 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:33:53.708 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:33:53.708 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:33:53.708 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:33:53.708 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:33:53.708 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:33:53.708 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:33:53.708 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:33:53.708 Removing: /var/run/dpdk/spdk4/hugepage_info 00:33:53.708 Removing: /dev/shm/bdev_svc_trace.1 00:33:53.708 Removing: /dev/shm/nvmf_trace.0 00:33:53.708 Removing: /dev/shm/spdk_tgt_trace.pid1761789 00:33:53.708 Removing: /var/run/dpdk/spdk0 00:33:53.708 Removing: /var/run/dpdk/spdk1 00:33:53.708 Removing: /var/run/dpdk/spdk2 00:33:53.708 Removing: /var/run/dpdk/spdk3 00:33:53.708 Removing: /var/run/dpdk/spdk4 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1760073 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1760826 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1761789 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1762274 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1762966 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1763108 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1763840 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1763850 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1764106 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1765336 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1766265 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1766539 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1766736 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1766949 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1767158 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1767433 00:33:53.708 Removing: 
/var/run/dpdk/spdk_pid1767600 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1767793 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1768377 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1770742 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1770915 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1771083 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1771092 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1771573 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1771600 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1772083 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1772086 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1772377 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1772392 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1772816 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1773051 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1773587 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1773748 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1774024 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1774245 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1774288 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1774487 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1774658 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1774821 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1775096 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1775270 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1775430 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1775707 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1775883 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1776051 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1776322 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1776488 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1776691 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1776929 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1777102 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1777310 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1777542 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1777704 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1777978 
00:33:53.708 Removing: /var/run/dpdk/spdk_pid1778154 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1778324 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1778604 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1778684 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1778904 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1781102 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1835151 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1837657 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1843521 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1846696 00:33:53.708 Removing: /var/run/dpdk/spdk_pid1849051 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1849572 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1856726 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1856728 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1857385 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1858036 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1858579 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1859103 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1859105 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1859292 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1859379 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1859386 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1860038 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1860694 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1861330 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1861699 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1861757 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1861899 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1862897 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1864133 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1869509 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1869775 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1872299 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1876013 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1878067 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1884337 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1889527 00:33:53.969 Removing: 
/var/run/dpdk/spdk_pid1890836 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1891495 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1902206 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1904424 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1907208 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1908390 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1909591 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1909727 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1909866 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1909913 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1910310 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1911626 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1912229 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1912661 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1914270 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1914575 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1915134 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1917655 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1920924 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1924455 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1948006 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1950765 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1954417 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1955489 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1956576 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1959743 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1962004 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1966229 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1966231 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1969006 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1969147 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1969364 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1969660 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1969674 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1970748 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1971925 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1973104 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1974280 
00:33:53.969 Removing: /var/run/dpdk/spdk_pid1975581 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1976756 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1980311 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1980762 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1981776 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1982370 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1985833 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1987803 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1991721 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1995035 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1999493 00:33:53.969 Removing: /var/run/dpdk/spdk_pid1999497 00:33:53.969 Removing: /var/run/dpdk/spdk_pid2011565 00:33:53.969 Removing: /var/run/dpdk/spdk_pid2011972 00:33:53.969 Removing: /var/run/dpdk/spdk_pid2012376 00:33:53.969 Removing: /var/run/dpdk/spdk_pid2012803 00:33:53.969 Removing: /var/run/dpdk/spdk_pid2013371 00:33:53.969 Removing: /var/run/dpdk/spdk_pid2013896 00:33:53.969 Removing: /var/run/dpdk/spdk_pid2014299 00:33:53.969 Removing: /var/run/dpdk/spdk_pid2014710 00:33:53.969 Removing: /var/run/dpdk/spdk_pid2017216 00:33:53.969 Removing: /var/run/dpdk/spdk_pid2017353 00:33:53.969 Removing: /var/run/dpdk/spdk_pid2021158 00:33:53.969 Removing: /var/run/dpdk/spdk_pid2021332 00:33:53.969 Removing: /var/run/dpdk/spdk_pid2022935 00:33:53.969 Removing: /var/run/dpdk/spdk_pid2028486 00:33:53.969 Removing: /var/run/dpdk/spdk_pid2028606 00:33:53.969 Removing: /var/run/dpdk/spdk_pid2031404 00:33:53.969 Removing: /var/run/dpdk/spdk_pid2032800 00:33:53.969 Removing: /var/run/dpdk/spdk_pid2034321 00:33:53.969 Removing: /var/run/dpdk/spdk_pid2035061 00:33:53.969 Removing: /var/run/dpdk/spdk_pid2036466 00:33:53.969 Removing: /var/run/dpdk/spdk_pid2037355 00:33:53.969 Removing: /var/run/dpdk/spdk_pid2042724 00:33:53.969 Removing: /var/run/dpdk/spdk_pid2043029 00:33:53.969 Removing: /var/run/dpdk/spdk_pid2043417 00:33:53.969 Removing: /var/run/dpdk/spdk_pid2044970 00:33:53.969 Removing: 
/var/run/dpdk/spdk_pid2045249 00:33:53.969 Removing: /var/run/dpdk/spdk_pid2045656 00:33:53.969 Removing: /var/run/dpdk/spdk_pid2048103 00:33:53.969 Removing: /var/run/dpdk/spdk_pid2048107 00:33:53.969 Removing: /var/run/dpdk/spdk_pid2049519 00:33:53.969 Clean 00:33:54.228 05:29:31 -- common/autotest_common.sh@1437 -- # return 0 00:33:54.228 05:29:31 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:33:54.228 05:29:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:33:54.228 05:29:31 -- common/autotest_common.sh@10 -- # set +x 00:33:54.228 05:29:31 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:33:54.228 05:29:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:33:54.228 05:29:31 -- common/autotest_common.sh@10 -- # set +x 00:33:54.228 05:29:31 -- spdk/autotest.sh@385 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:54.228 05:29:31 -- spdk/autotest.sh@387 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:33:54.228 05:29:31 -- spdk/autotest.sh@387 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:33:54.228 05:29:31 -- spdk/autotest.sh@389 -- # hash lcov 00:33:54.228 05:29:31 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:33:54.228 05:29:31 -- spdk/autotest.sh@391 -- # hostname 00:33:54.228 05:29:31 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:33:54.487 geninfo: WARNING: invalid characters removed from testname! 
00:34:26.570 05:29:58 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:26.570 05:30:02 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:28.478 05:30:05 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:31.770 05:30:08 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:34.321 05:30:11 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:37.615 05:30:14 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:40.155 05:30:17 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:40.155 05:30:17 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:40.155 05:30:17 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:34:40.155 05:30:17 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:40.155 05:30:17 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:40.155 05:30:17 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:40.155 05:30:17 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:40.156 05:30:17 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:40.156 05:30:17 -- paths/export.sh@5 -- $ export PATH 00:34:40.156 05:30:17 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:40.156 05:30:17 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:34:40.156 05:30:17 -- common/autobuild_common.sh@435 -- $ date +%s 00:34:40.156 05:30:17 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713929417.XXXXXX 00:34:40.156 05:30:17 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713929417.LYIXDR 00:34:40.156 05:30:17 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:34:40.156 05:30:17 -- common/autobuild_common.sh@441 -- $ '[' -n main ']' 00:34:40.156 05:30:17 -- common/autobuild_common.sh@442 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:34:40.156 05:30:17 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:34:40.156 05:30:17 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:34:40.156 05:30:17 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:34:40.156 05:30:17 -- common/autobuild_common.sh@451 -- $ get_config_params 00:34:40.156 05:30:17 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:34:40.156 05:30:17 -- common/autotest_common.sh@10 -- $ set +x 00:34:40.156 05:30:17 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:34:40.156 05:30:17 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:34:40.156 05:30:17 -- pm/common@17 -- $ local monitor 00:34:40.156 05:30:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:40.156 05:30:17 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2060328 00:34:40.156 05:30:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:40.156 05:30:17 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2060330 00:34:40.156 05:30:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:40.156 05:30:17 -- pm/common@21 -- $ date +%s 00:34:40.156 05:30:17 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2060332 00:34:40.156 05:30:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:40.156 05:30:17 -- pm/common@21 -- $ date +%s 00:34:40.156 05:30:17 -- pm/common@21 -- $ date +%s 00:34:40.156 05:30:17 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2060335 00:34:40.156 05:30:17 -- pm/common@26 -- $ sleep 1 00:34:40.156 05:30:17 -- pm/common@21 -- $ date +%s 00:34:40.156 05:30:17 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713929417 00:34:40.156 05:30:17 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713929417 00:34:40.156 05:30:17 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713929417 00:34:40.156 05:30:17 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713929417 00:34:40.156 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713929417_collect-bmc-pm.bmc.pm.log 00:34:40.156 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713929417_collect-vmstat.pm.log 00:34:40.156 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713929417_collect-cpu-load.pm.log 00:34:40.156 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713929417_collect-cpu-temp.pm.log 00:34:41.096 05:30:18 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:34:41.096 05:30:18 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:34:41.096 05:30:18 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:41.096 05:30:18 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:34:41.096 05:30:18 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:34:41.096 05:30:18 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:34:41.096 05:30:18 -- spdk/autopackage.sh@19 -- $ timing_finish 00:34:41.096 05:30:18 -- 
common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:41.096 05:30:18 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:34:41.096 05:30:18 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:34:41.096 05:30:18 -- spdk/autopackage.sh@20 -- $ exit 0 00:34:41.096 05:30:18 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:34:41.096 05:30:18 -- pm/common@30 -- $ signal_monitor_resources TERM 00:34:41.096 05:30:18 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:34:41.096 05:30:18 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:41.096 05:30:18 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:34:41.096 05:30:18 -- pm/common@45 -- $ pid=2060345 00:34:41.096 05:30:18 -- pm/common@52 -- $ sudo kill -TERM 2060345 00:34:41.096 05:30:18 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:41.096 05:30:18 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:34:41.096 05:30:18 -- pm/common@45 -- $ pid=2060344 00:34:41.096 05:30:18 -- pm/common@52 -- $ sudo kill -TERM 2060344 00:34:41.096 05:30:18 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:41.096 05:30:18 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:34:41.096 05:30:18 -- pm/common@45 -- $ pid=2060346 00:34:41.096 05:30:18 -- pm/common@52 -- $ sudo kill -TERM 2060346 00:34:41.096 05:30:18 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:41.096 05:30:18 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:34:41.096 05:30:18 -- pm/common@45 -- $ 
pid=2060343 00:34:41.096 05:30:18 -- pm/common@52 -- $ sudo kill -TERM 2060343 00:34:41.096 + [[ -n 1655089 ]] 00:34:41.096 + sudo kill 1655089 00:34:41.105 [Pipeline] } 00:34:41.125 [Pipeline] // stage 00:34:41.131 [Pipeline] } 00:34:41.150 [Pipeline] // timeout 00:34:41.154 [Pipeline] } 00:34:41.173 [Pipeline] // catchError 00:34:41.178 [Pipeline] } 00:34:41.197 [Pipeline] // wrap 00:34:41.202 [Pipeline] } 00:34:41.217 [Pipeline] // catchError 00:34:41.226 [Pipeline] stage 00:34:41.228 [Pipeline] { (Epilogue) 00:34:41.242 [Pipeline] catchError 00:34:41.243 [Pipeline] { 00:34:41.258 [Pipeline] echo 00:34:41.259 Cleanup processes 00:34:41.265 [Pipeline] sh 00:34:41.553 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:41.553 2060472 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:34:41.553 2060609 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:41.579 [Pipeline] sh 00:34:41.861 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:41.861 ++ grep -v 'sudo pgrep' 00:34:41.861 ++ awk '{print $1}' 00:34:41.861 + sudo kill -9 2060472 00:34:41.873 [Pipeline] sh 00:34:42.154 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:34:50.317 [Pipeline] sh 00:34:50.603 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:34:50.603 Artifacts sizes are good 00:34:50.617 [Pipeline] archiveArtifacts 00:34:50.623 Archiving artifacts 00:34:50.842 [Pipeline] sh 00:34:51.125 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:34:51.143 [Pipeline] cleanWs 00:34:51.155 [WS-CLEANUP] Deleting project workspace... 00:34:51.155 [WS-CLEANUP] Deferred wipeout is used... 
00:34:51.161 [WS-CLEANUP] done 00:34:51.165 [Pipeline] } 00:34:51.190 [Pipeline] // catchError 00:34:51.204 [Pipeline] sh 00:34:51.483 + logger -p user.info -t JENKINS-CI 00:34:51.490 [Pipeline] } 00:34:51.506 [Pipeline] // stage 00:34:51.512 [Pipeline] } 00:34:51.533 [Pipeline] // node 00:34:51.540 [Pipeline] End of Pipeline 00:34:51.596 Finished: SUCCESS